acmlab/HoloBrain_HoloGraph

Explore brain-inspired machine intelligence for connecting dots on graphs through a holographic blueprint of oscillatory synchronization.

HoloBrain_HoloGraph/
 ├─ Holobrain.py                  # Script for computing HoloBrain (CFC)
 ├─ source/
 │   ├─ data/
 │   │   ├─ create_dataset.py
 │   │   └─ dataset.py            # Data loading for different brain data
 │   ├─ modules/
 │   │   ├─ GST.py                # GST module (Graph Scattering Transform)
 │   │   └─ kuramoto_solver.py    # Kuramoto solver for oscillator synchronization
 │   ├─ holograph_holobrain.py    # The main HoloGraph/HoloBrain model
 │   └─ utils.py
 ├─ train_brain.py                # Training script for brain data
 ├─ train_cluster.py              # Training script for unsupervised clustering
 └─ train_node.py                 # Training script for node-level prediction

⚙️ Installation

  1. Create environment

    conda create -n holobrain python=3.10 -y
    conda activate holobrain
  2. Install PyTorch

    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  3. Install dependencies

    pip install -r requirements.txt
  4. (Optional) Configure Accelerate

    accelerate config

📂 Data

The training scripts load datasets via:

from source.data.create_dataset import create_dataset
dataset = create_dataset(args.data)
  • Supported datasets are defined in your create_dataset implementation.

  • Example: "Cora", "Wisconsin", "HCP-YA".

  • Each dataset must yield tuples of the form:

    (features, adjacency_matrix, target)
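As a sanity check for a custom dataset, the tuple contract above can be exercised with a stand-in generator. This is a hypothetical sketch, not the repository's `create_dataset`; the shapes and the `toy_dataset` helper are illustrative assumptions.

```python
import numpy as np

def toy_dataset(num_graphs=3, num_nodes=5, feat_dim=4):
    """Stand-in for create_dataset(...): yields (features, adjacency_matrix, target)."""
    rng = np.random.default_rng(0)
    for _ in range(num_graphs):
        features = rng.normal(size=(num_nodes, feat_dim))
        adjacency = (rng.random((num_nodes, num_nodes)) > 0.5).astype(float)
        adjacency = np.maximum(adjacency, adjacency.T)  # symmetrize: undirected graph
        target = int(rng.integers(0, 2))                # e.g. a graph-level label
        yield features, adjacency, target

# Every sample must satisfy the documented tuple form.
for features, adjacency_matrix, target in toy_dataset():
    assert features.shape[0] == adjacency_matrix.shape[0]
```

A real dataset plugged into `create_dataset` should pass the same shape check: the adjacency matrix is square with one row per node in `features`.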

🚀 Running

Homophilic graphs (e.g., Cora)

For homophilic graphs such as the Planetoid dataset Cora, you can run:

python train_node.py \
  --data Cora \
  --lr 0.0005 \
  --ch 1024 \
  --Q 8 \
  --homo True \
  --L 3 \
  --weight_decay 0.01 \
  --dropout 0.5 \
  --use_scheduler False

Heterophilic graphs (e.g., Wisconsin)

For heterophilic graphs, you can run:

python train_node.py \
  --data Wisconsin \
  --lr 0.001 \
  --ch 256 \
  --Q 12 \
  --homo False \
  --L 1 \
  --weight_decay 0.001 \
  --dropout 0.5 \
  --use_scheduler False

Single-GPU / CPU

python train_brain.py --L 2 --N 4 --batchsize 256 --T 8 --h 256 --epochs 200 --data HCP-YA --gpu 0

Multi-GPU with Accelerate

accelerate launch --multi_gpu --num_processes 2 --gpu_ids 0,1 --main_process_port 29500 train_brain.py --L 2 --N 4 --batchsize 256 --T 8 --ch 256 --epochs 200 --data HCP-YA  

Key Arguments

  • Training: --epochs, --lr, --ema_decay, --warmup_iters, --batchsize, --num_workers
  • Data/Model: --data, --num_nodes, --feature_dim, --num_class, --L (# solvers), --T (# time steps), --N (oscillator dim), --h (hidden dim)
  • Options: --use_pe (positional encoding), --node_cls (node classification mode), --parcellation (parcellation mode), --y_type (linear|conv), --mapping_type (conv|gconv)
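For reference, a minimal sketch of how a few of the flags above might be parsed with `argparse`. The flag names and `--y_type` choices come from the list; the defaults here are illustrative assumptions, not the repository's values.

```python
import argparse

# Hypothetical parser covering a subset of the documented flags.
parser = argparse.ArgumentParser()
parser.add_argument("--data", type=str, default="Cora")
parser.add_argument("--lr", type=float, default=1e-3)
parser.add_argument("--L", type=int, default=2, help="# solvers")
parser.add_argument("--T", type=int, default=8, help="# time steps")
parser.add_argument("--N", type=int, default=4, help="oscillator dim")
parser.add_argument("--y_type", choices=["linear", "conv"], default="linear")

# Parse an example command line instead of sys.argv.
args = parser.parse_args(["--data", "HCP-YA", "--L", "2"])
```

Running `python train_brain.py -h`, as noted below, prints the authoritative list with the real defaults.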

For the full list:

python train_brain.py -h

📊 Training Details

  • Cross-validation: 5-fold (default).
  • Optimization: Adam + linear warmup scheduler.
  • EMA: model weights updated with decay factor (--ema_decay).
  • Metrics: Accuracy, Precision, Recall, F1 (weighted).

At the end of training:

  • Best metrics per fold are logged.
  • Average results across folds are reported.
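The two training-loop ingredients named above, linear warmup and EMA weight tracking, can be sketched as follows. This is a hedged illustration: the warmup length, base learning rate, and decay value are assumptions, not the repository's defaults (those are set via `--warmup_iters`, `--lr`, and `--ema_decay`).

```python
def warmup_lr(step, base_lr, warmup_iters):
    """Linearly ramp the learning rate over the first warmup_iters steps."""
    return base_lr * min(1.0, (step + 1) / warmup_iters)

def ema_update(ema_weights, model_weights, decay=0.999):
    """In-place exponential moving average: ema <- decay*ema + (1-decay)*w."""
    for k in ema_weights:
        ema_weights[k] = decay * ema_weights[k] + (1.0 - decay) * model_weights[k]
    return ema_weights

# Illustrative warmup schedule: reaches base_lr at step warmup_iters - 1.
lrs = [warmup_lr(s, base_lr=5e-4, warmup_iters=100) for s in range(100)]
```

In a real loop, `warmup_lr` would set the Adam learning rate each step, and `ema_update` would be called after each optimizer step with the model's state dict; the EMA copy is then used for evaluation.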
