Explore brain-inspired machine intelligence for connecting dots on graphs through a holographic blueprint of oscillatory synchronization.
```
HoloBrain_HoloGraph/
├─ Holobrain.py                # Script for computing HoloBrain (CFC)
├─ source/
│  ├─ data/
│  │  ├─ create_dataset.py
│  │  └─ dataset.py            # Data loading for different brain data
│  ├─ modules/
│  │  ├─ GST.py                # GST module (Graph Scattering Transform)
│  │  └─ kuramoto_solver.py    # Kuramoto solver for oscillator synchronization
│  ├─ holograph_holobrain.py   # The main HoloGraph/HoloBrain model
│  └─ utils.py
├─ train_brain.py              # Script for brain data
├─ train_cluster.py            # Script for unsupervised clustering
└─ train_node.py               # Script for node-level prediction
```
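For orientation, `kuramoto_solver.py` concerns Kuramoto oscillator synchronization on a graph. The repo's solver is not reproduced here; the following is only a minimal NumPy sketch of the underlying dynamics, with all function names illustrative rather than the repo's API:

```python
import numpy as np

def kuramoto_step(theta, omega, adjacency, coupling=1.0, dt=0.01):
    """One Euler step of graph-coupled Kuramoto dynamics:
    dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    drive = (adjacency * np.sin(diff)).sum(axis=1)  # coupling pull from neighbors
    return theta + dt * (omega + coupling * drive)

def order_parameter(theta):
    """Synchronization level r in [0, 1]; r -> 1 means phases align."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 8
theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases
omega = rng.normal(0.0, 0.1, n)         # natural frequencies
A = np.ones((n, n)) - np.eye(n)         # fully connected toy graph
for _ in range(2000):
    theta = kuramoto_step(theta, omega, A, coupling=0.5, dt=0.01)
r1 = order_parameter(theta)             # with this coupling, r1 ends near 1
```

With coupling well above the critical value on a dense graph, the order parameter rises toward 1, which is the synchronization behavior the solver module is built around.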
Create environment:

```shell
conda create -n holobrain python=3.10 -y
conda activate holobrain
```

Install PyTorch:

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

Install dependencies:

```shell
pip install -r requirements.txt
```

(Optional) Configure Accelerate:

```shell
accelerate config
```
The training scripts load data via:

```python
from source.data.create_dataset import create_dataset

dataset = create_dataset(args.data)
```

Supported datasets are defined in your `create_dataset` implementation. Examples: `"Cora"`, `"Wisconsin"`, `"HCP-YA"`. Each dataset must yield tuples of the form `(features, adjacency_matrix, target)`.
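To make the tuple contract concrete, here is a hypothetical toy builder; `create_toy_dataset` is illustrative only and is not the repo's `create_dataset`:

```python
import numpy as np

def create_toy_dataset(num_nodes=5, feature_dim=3, seed=0):
    """Hypothetical stand-in illustrating the
    (features, adjacency_matrix, target) tuple contract."""
    rng = np.random.default_rng(seed)
    features = rng.standard_normal((num_nodes, feature_dim))       # node features
    adjacency = (rng.random((num_nodes, num_nodes)) < 0.4).astype(float)
    adjacency = np.maximum(adjacency, adjacency.T)                 # symmetric graph
    np.fill_diagonal(adjacency, 0.0)                               # no self-loops
    target = rng.integers(0, 2, size=num_nodes)                    # node labels
    return features, adjacency, target

features, adj, y = create_toy_dataset()
```

Any real dataset plugged into the scripts just needs to produce tuples with these three components in this order.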
For homophilic graphs such as the Planetoid dataset Cora, you can run:

```shell
python train_node.py \
    --data Cora \
    --lr 0.0005 \
    --ch 1024 \
    --Q 8 \
    --homo True \
    --L 3 \
    --weight_decay 0.01 \
    --dropout 0.5 \
    --use_scheduler False
```

For heterophilic graphs, you can run:
```shell
python train_node.py \
    --data Wisconsin \
    --lr 0.001 \
    --ch 256 \
    --Q 12 \
    --homo False \
    --L 1 \
    --weight_decay 0.001 \
    --dropout 0.5 \
    --use_scheduler False
```

To train on brain data on a single GPU:

```shell
python train_brain.py --L 2 --N 4 --batchsize 256 --T 8 --h 256 --epochs 200 --data HCP-YA --gpu 0
```

To launch multi-GPU training with Accelerate:

```shell
accelerate launch --multi_gpu --num_processes 2 --gpu_ids 0,1 --main_process_port 29500 train_brain.py --L 2 --N 4 --batchsize 256 --T 8 --ch 256 --epochs 200 --data HCP-YA
```

Key arguments:

- Training: `--epochs`, `--lr`, `--ema_decay`, `--warmup_iters`, `--batchsize`, `--num_workers`
- Data/Model: `--data`, `--num_nodes`, `--feature_dim`, `--num_class`, `--L` (# solvers), `--T` (# time steps), `--N` (oscillator dim), `--h` (hidden dim)
- Options: `--use_pe` (positional encoding), `--node_cls` (node classification mode), `--parcellation` (parcellation mode), `--y_type` (linear|conv), `--mapping_type` (conv|gconv)

For the full list:

```shell
python train_brain.py -h
```

- Cross-validation: 5-fold (default).
- Optimization: Adam + linear warmup scheduler.
- EMA: model weights updated with decay factor (`--ema_decay`).
- Metrics: Accuracy, Precision, Recall, F1 (weighted).
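The warmup and EMA rules above follow standard update formulas; a minimal sketch (function names are illustrative, and the scripts' exact schedules may differ):

```python
def warmup_lr(step, base_lr, warmup_iters):
    """Linear warmup: lr ramps from near 0 to base_lr over warmup_iters steps,
    then stays at base_lr (the role of --warmup_iters)."""
    return base_lr * min(1.0, (step + 1) / warmup_iters)

def ema_update(ema_params, model_params, decay):
    """Exponential moving average of weights, controlled by --ema_decay:
    ema <- decay * ema + (1 - decay) * current."""
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, model_params)]

# Learning rate ramps linearly over the first 100 steps, then plateaus.
lrs = [warmup_lr(s, base_lr=1e-3, warmup_iters=100) for s in range(200)]

# EMA converges toward a constant model weight of 1.0 over many updates.
ema = [0.0]
for _ in range(1000):
    ema = ema_update(ema, [1.0], decay=0.99)
```

A higher decay (e.g. 0.999) makes the EMA weights change more slowly, smoothing out noisy late-training updates.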
At the end of training:
- Best metrics per fold are logged.
- Average results across folds are reported.
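The per-fold reporting amounts to averaging the best metrics over the five folds; a sketch with hypothetical numbers and a hypothetical helper (not the scripts' actual output):

```python
import statistics

def average_across_folds(fold_metrics):
    """Average best-per-fold metrics dicts, one dict per fold."""
    keys = fold_metrics[0].keys()
    return {k: statistics.mean(f[k] for f in fold_metrics) for k in keys}

# Hypothetical best-per-fold results for a 5-fold run.
folds = [
    {"accuracy": 0.90, "f1": 0.88},
    {"accuracy": 0.92, "f1": 0.90},
    {"accuracy": 0.88, "f1": 0.86},
    {"accuracy": 0.91, "f1": 0.89},
    {"accuracy": 0.89, "f1": 0.87},
]
avg = average_across_folds(folds)  # averages each metric across folds
```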