Releases: ACEsuit/mace
v0.3.14
MACE v0.3.14 Release Notes
We are excited to announce MACE v0.3.14, featuring significant new capabilities: property embeddings, GPU acceleration, dielectric property prediction, and enhanced training options.
🏗️ Foundation Models
MACE-OMOL Foundation Models
Introduced support for MACE-OMOL models trained on the 100M OMOL dataset with charge and spin embeddings, providing improved accuracy for organic molecules with charges and spins.
Example usage:

```python
from ase.build import molecule
from mace.calculators import mace_omol

# Load MACE-OMOL model with charge and spin support
calc = mace_omol(model="extra_large", device="cuda")

atoms = molecule("H2O")  # any ASE Atoms object
atoms.calc = calc

# Set total charge and spin for the system
atoms.info["charge"] = 1.0  # +1 charge
atoms.info["spin"] = 1.0    # spin multiplicity

energy = atoms.get_potential_energy()
forces = atoms.get_forces()
```

Small OMAT Model
Added `small-omat-0` to the foundation model shortcuts for faster inference when high accuracy is not critical.
Example usage:

```python
from mace.calculators import mace_mp

calc = mace_mp(model="small-omat-0")  # faster, smaller model
```

🎯 Fine-tuning
Pseudo-label Finetuning
Added multihead pseudo-label finetuning capability, allowing models to be fine-tuned on their own predictions for improved accuracy and ease of use.
Example usage:
python run_train.py \
--foundation_model="medium" \
--train_file=real_data.xyz \
--pt_train_file=replay_data.xyz \
--multiheads_finetuning=True \
--pseudolabel_replay=True \Enhanced Multihead Finetuning
Fixed various issues with multihead finetuning and improved the robustness of the code.
Added the option to use any of the models from mace_mp as the `--foundation_model` key name; see https://github.com/ACEsuit/mace/blob/0139da1b864b29054f07db6627887a847a42050e/mace/calculators/foundations_models.py#L19. For example, `--foundation_model="small-omat-0"`.
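A minimal sketch of fine-tuning from one of these shortcut names (the remaining flags are the usual fine-tuning options, and `data.xyz` is a placeholder file):

```bash
python run_train.py \
    --foundation_model="small-omat-0" \
    --train_file=data.xyz \
    --multiheads_finetuning=True
```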
🔬 Models
Embedding Functionality for MACE
Added support for embedding additional properties like total charge, total spin, and electronic temperature into MACE models, enabling more accurate predictions for charged and magnetic systems.
Training example with custom embeddings:

```bash
python run_train.py \
    --train_file=data.xyz \
    --embedding_specs='{"charge": {"embed_type": "continuous", "min": -5, "max": 5}, "spin": {"embed_type": "continuous", "min": 0, "max": 4}, "temperature": {"embed_type": "continuous", "min": 0, "max": 3000}}' \
    --use_embedding_readout \
    --model=MACE
```

Dielectric MACE for Polarizability Prediction
Added support for the Dielectric MACE model to predict molecular polarizabilities alongside energies and forces.
Reference: Kapil et al., "First-principles spectroscopy of aqueous interfaces using machine-learned electronic and quantum nuclear effects"
Example usage:

```bash
# Train a dielectric MACE model
python run_train.py \
    --model="AtomicDielectricMACE" \
    --train_file=data.xyz \
    --loss="dipole_polar" \
    --polarizability_key="REF_polarizability" \
    --error_table="DipolePolarRMSE"
```

LES Models for Coulomb Interactions
Implemented support for LES models for improved description of Coulomb interactions.
Reference: Bingqing Cheng, "Latent Ewald summation for machine learning of long-range interactions"
Example usage:

```bash
# Train a MACELES model
python run_train.py \
    --model=MACELES \
    --train_file=data.xyz
```

New Non-linear Blocks
Introduced improved non-linear readout blocks that provide better accuracy when training MACE on large datasets. See code.
⚡ Performance Improvements
OpenEquivariance Support
Integrated OpenEquivariance support for MACE kernels, enabling acceleration on AMD GPUs and providing alternative compute backends. Use `--enable_oeq=True` for training with OpenEquivariance, and pass `enable_oeq=True` to the ASE calculator to run inference with it.
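For inference through ASE, a minimal sketch (assuming a trained model file `model.pt`; the `enable_oeq` keyword is the calculator-side switch quoted above):

```python
from mace.calculators import MACECalculator

# Run the MACE kernels through the OpenEquivariance backend
calc = MACECalculator(model_paths="model.pt", device="cuda", enable_oeq=True)
```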
CuEq Fused Operations
Added CuEq fused operations for tensor product and scatter sum operations, significantly improving training and inference speed on CUDA devices.
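As with OpenEquivariance, the CuEq path is enabled with a flag; a minimal sketch, assuming the `--enable_cueq` option documented for earlier releases:

```bash
python run_train.py \
    --train_file=data.xyz \
    --enable_cueq=True
```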
Refactored Clebsch-Gordan Coefficients
Implemented option to use refactored CG coefficients with fewer computational paths for improved efficiency.
Example usage:

```bash
python run_train.py \
    --train_file=data.xyz \
    --use_reduced_cg
```

Element Agnostic Product Basis
Added option for element-agnostic product basis, reducing memory usage and improving scalability for systems with many element types.
Example usage:

```bash
python run_train.py \
    --train_file=data.xyz \
    --use_agnostic_product
```

🔧 Training and Infrastructure Improvements
Improved Multi-GPU Support
- Fixed support for `torchrun` and MPI multi-GPU training
- Dynamically set backend for distributed training based on device type
- Better handling of distributed training configurations
Intel GPU Support
Added IPEX (Intel Extension for PyTorch) support for Intel GPU inference and training.
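Inference then only requires selecting the XPU device; a minimal sketch, assuming IPEX is installed and the `"xpu"` device string from the existing Intel XPU support:

```python
from mace.calculators import mace_mp

# Run a foundation model on an Intel GPU via IPEX
calc = mace_mp(model="medium", device="xpu")
```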
🐛 Bug Fixes and Improvements
- Fixed bug preventing building 1-layer (no message passing) MACE models
- Improved handling of E0s specification via JSON in preprocessing
- Better pathlib usage for file suffix handling in finetuning_select
- Fixed OEQ import handling on macOS without CUDA
- Corrected loss output to log files during fine-tuning
- Improved behavior of auto-downloaded MP data when using pseudolabels
- Enhanced handling of atomic numbers in multihead training
📚 Additional Features
- Added support for calculating descriptors in eval_configs (see the sketch after this list)
- Enhanced node energy output capabilities
- Improved interaction energy plotting functionality
- Better filtering of training quantities with zero weights
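Descriptors can also be pulled straight from a calculator; a minimal sketch using the calculator's `get_descriptors` method (assumed here to behave as in earlier releases):

```python
from ase.build import molecule
from mace.calculators import mace_mp

calc = mace_mp(model="small", device="cpu")
atoms = molecule("H2O")
atoms.calc = calc

# Per-atom MACE descriptors as an (n_atoms, n_features) array
descriptors = calc.get_descriptors(atoms)
```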
🙏 Acknowledgments
We thank all contributors to this release, including new contributors @Alexsp32, @Enry99, @vbharadwaj-bk, @naik-aakash, and @Nilsgoe.
Full Changelog: v0.3.13...v0.3.14
For detailed documentation and examples, visit our GitHub repository and documentation.
v0.3.13
MACE 0.3.13 Release Notes
🚀 LAMMPS Integration Enhancements
- Added new MLIAP interface for LAMMPS, enabling significantly improved performance and flexibility. (Documentation)
- Implemented CuEquivariance support in LAMMPS models for GPU acceleration.
- Added multi-GPU inference support for large-scale molecular dynamics simulations, using MPI message passing of intermediate tensors for efficient scaling.
- Improved timing and profiling capabilities via environment variable controls.
- New command-line option in `create_lammps_model.py` to select between `libtorch` (legacy) and `MLIAP` formats:

```bash
# Convert model to MLIAP format
python -m mace.cli.create_lammps_model model.pt --format=mliap
```

🧮 Atomic Stresses Computation
- Added support for computing atomic stresses and atomic virials.
- Useful for analyzing local stress distributions in materials simulations.
```python
from ase import build
from mace.calculators import mace_mp

# Create a bulk Al supercell (32 atoms)
atoms = build.bulk("Al", "fcc", a=4.05, cubic=True)
atoms = atoms.repeat((2, 2, 2))

calc = mace_mp(device="cpu", compute_atomic_stresses=True)
atoms.calc = calc

atoms.get_potential_energy()
stress = atoms.get_stress()      # total stress tensor
stresses = atoms.get_stresses()  # per-atom stresses
print("Stress tensor:\n", stress.shape)
print("Stresses tensor:\n", stresses.shape)
```

🗝️ Property Keys System Redesign
- Reworked the property keys system for improved flexibility and maintainability.
- Introduced a `KeySpecification` class to manage mappings between data formats.
- Added a `DefaultKeys` enum to standardize access patterns.
- Enhanced error reporting when keys are missing.
- Fully backward compatible with previous key conventions.
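Backward compatibility means the familiar key flags keep working; a minimal sketch using the long-standing `--energy_key`/`--forces_key` options:

```bash
python -m mace.cli.run_train \
    --train_file=data.xyz \
    --energy_key="REF_energy" \
    --forces_key="REF_forces"
```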
🧱 New Foundation Models
- MACE-MATPES: New foundation models fine-tuned on the MATPES dataset.
- New PBE model without +U inconsistencies, and a new r2SCAN model.
- Better transferability for fine-tuning on domain-specific datasets.
| Model | Method | Energy (meV/atom) | Force (meV/Å) | Stress (GPa) |
|---|---|---|---|---|
| M3GNet | PBE | 45 | 177 | 0.898 |
| CHGNet | PBE | 32 | 124 | 0.617 |
| TensorNet | PBE | 36 | 138 | 0.695 |
| MACE | PBE | 34 | 122 | 0.296 |
| MACE-MATPES-0 | PBE | 23 | 107 | 0.304 |
| M3GNet | r2SCAN | 45 | 208 | 0.982 |
| CHGNet | r2SCAN | 27 | 150 | 0.705 |
| TensorNet | r2SCAN | 34 | 163 | 0.754 |
| MACE-MATPES-0 | r2SCAN | 19 | 119 | 0.265 |
🧠 Improved Model Head Selection
- Added explicit head selection in the MACE calculator for multi-head models.
- You can now directly specify the head:

```python
# Example of selecting a specific head
calc = MACECalculator(model_path="model.pt", head="DFT")
```

- Automatically falls back to the "default" head if not specified.
- Clear error messages when the requested head is unavailable.
- Head-specific configurations supported during inference.
🧪 MACE Fine-Tuning Preselection CLI
MACE now ships a command-line tool for selecting configurations when fine-tuning foundation models.
🔍 Key Features
- Multiple Filtering Strategies:
  - `combinations`: only elements in your subset
  - `exclusive`: exactly your elements
  - `inclusive`: all your elements, plus potentially others
- Selection Methods:
  - `fps` (Farthest Point Sampling) for maximum diversity
  - `random` for uniform random sampling
- Weighting Control:
  - Adjust the importance of pretraining vs. fine-tuning data during selection
💡 Usage Example
```bash
python -m mace.cli.fine_tuning_select \
    --configs_pt path/to/pretraining_data.xyz \
    --atomic_numbers "[1, 6, 8]" \
    --num_samples 5000 \
    --filtering_type combinations \
    --output selected_configs.xyz
```

This functionality is also available directly in run_train.py via the --atomic_numbers parameter; note that run_train no longer performs the filtering unless the atomic numbers are given explicitly.
🧠 L-BFGS Training Support
MACE now supports L-BFGS optimization for enhanced convergence in energy and force training. This second-order optimizer can refine models beyond what first-order methods like Adam achieve.
🚀 Key Points
- Ideal for final-stage training (1–2 epochs) after Adam
- Often achieves lower energy errors and better balance of energy vs. force loss
- Supports multi-GPU training
- Slower per epoch but may require fewer total epochs
🛠️ Usage
First, pretrain with Adam:

```bash
python -m mace.cli.run_train --optimizer adam [...other options...]
```

Then refine with L-BFGS:

```bash
python -m mace.cli.run_train --lbfgs --restart_latest [...other options...]
```

ℹ️ Uses `history_size=200` and `max_iter=20` with strong Wolfe line search.
Requires more memory, as it processes the full dataset per update.
📦 Installation
```bash
pip install --upgrade mace-torch

# For CUDA acceleration (CUDA 12)
pip install cuequivariance cuequivariance-torch cuequivariance-ops-torch-cu12

# For CUDA 11
pip install cuequivariance-ops-torch-cu11
```

For complete usage and documentation, please see our official documentation.
If you encounter any issues, please report them on our GitHub Issues page.
🔄 Full Changelog
Full Changelog: v0.3.12...v0.3.13
v0.3.12
MACE 0.3.12 Release Notes
Versions support updates
- Hot fix for test files that are not lists.
- The standard deviation is now extracted from the head config.
Please see our documentation for complete details on features and usage. If you encounter any issues, please report them on our GitHub Issues page.
v0.3.11
MACE 0.3.11 Release Notes
Versions support updates
- Added support for new cuEquivariance versions 0.2.0 and 0.3.0 (dropping support for 0.1.0). (See https://mace-docs.readthedocs.io/en/latest/guide/cuda_acceleration.html)
- Added support for PyTorch 2.6.0.
Enhanced Dataset Processing
- Added support for the LMDB data format for efficient handling of large datasets (like fairchem datasets). Supports folders of .lmdb or .aselmdb files. (See https://mace-docs.readthedocs.io/en/latest/guide/multipreprocessing.html#lmdb-files-preprocessing).
```yaml
# Example configuration for LMDB
train_file: "/path/data1/:/path/data2/:/path/data3/"
valid_file: "/path/valid1.lmdb:/path/valid2.lmdb"
```

- Streamlined workflow for training on multiple datasets simultaneously. Lists mixing different data streams (ASE, LMDB, and XYZ) can now be given. (See https://mace-docs.readthedocs.io/en/latest/guide/heterogeneous_data.html).
```yaml
# Example configuration for different datasets
heads:
  QM7:
    train_file: ["data/qm7/train.xyz", "data/qm7/train.h5", "data/qm7/database/"]
    valid_file: ["data/qm7/valid.xyz", "data/qm7_h5/valid/"]
    test_file: ["data/qm7/test.xyz", "data/qm7/test.h5"]
```

New Foundation Models
- MACE-MPA-0: Improved accuracy for materials. New default foundation model when calling `mace_mp()`.
- MACE-OMAT-0: New model trained on the OMAT dataset with DFT (PBE+U, VASP 54 pseudopotentials).
Plotting training
- Added automatic plotting of training, validation, and test statistics after a training run. Two arguments were added to the mace_run_train command-line interface:

```bash
--plot            # whether to produce plots, default True
--plot_frequency  # if zero, plot only at the end; otherwise every N epochs
```
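Putting the two flags together (a minimal sketch; `data.xyz` is a placeholder, and the `=True` boolean syntax is assumed to match the rest of the CLI):

```bash
python -m mace.cli.run_train \
    --train_file=data.xyz \
    --plot=True \
    --plot_frequency=10
```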
Support for higher L and max_ell
- Switched to the cuEquivariance generalized CG for values of L, max_ell, or correlation whose Clebsch-Gordan coefficients were not precomputed in e3nn 0.4.4. Models can now be trained with arbitrary L and max_ell.
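For example, a high-angular-momentum model (a minimal sketch; `--max_L` and `--max_ell` are the standard architecture flags):

```bash
python -m mace.cli.run_train \
    --train_file=data.xyz \
    --max_L=3 \
    --max_ell=4
```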
Installation
```bash
pip install --upgrade mace-torch

# For CUDA acceleration (CUDA 12)
pip install cuequivariance cuequivariance-torch cuequivariance-ops-torch-cu12

# For CUDA 11, use cuequivariance-ops-torch-cu11 instead
```

Please see our documentation for complete details on features and usage. If you encounter any issues, please report them on our GitHub Issues page.
Full Changelog: v0.3.10...v0.3.11
v0.3.10
What's Changed
- now actually download the new models by @alinelena in #721
- Evaluate test sets separately for different heads by @ThomasWarford in #681
- Add pre-processing config file option by @ElliottKasoar in #664
- Add mace_mp medium performance benchmark by @hatemhelal in #647
- change learning rate for multihead ft by @ilyes319 in #727
- add option to rescale number of ft sample by @ilyes319 in #736
- Clean up unused Polynomial Cutoff Class from ZBLBasis, remove r_max argument. by @CompRhys in #569
- allow custom cache based on XDG_CACHE_HOME env variable, addresses #724 by @alinelena in #755
- Fix default output_file in select_head, and add argument to list heads by @bernstei in #772
- change default mp model to mpa model+bump version by @ilyes319 in #758
- solve jit backward compatibility by @ilyes319 in #778
- fix the reshape irreps for jit backward by @ilyes319 in #779
- fix formatting by @ilyes319 in #780
- Develop by @ilyes319 in #781
- Develop by @ilyes319 in #785
- make cueq optional dep and add special test by @ilyes319 in #786
Full Changelog: v0.3.9...v0.3.10
v0.3.9
What's Changed
- Enable swa if start_swa was given by @vue1999 in #697
- Update foundations models to contain 0b by @alinelena in #691
- extract_invariant now correctly extracts first layer when num_layers=1 by @RokasEl in #695
- Add cuequivariance support by @ilyes319 in #704
- Add cuequivariance support by @ilyes319 in #709
- Fix-extract-equivariant-features-with-num-layers-1 by @RokasEl in #703
- add stress to the cueq test by @ilyes319 in #712
- add option pt_train_file == "mp" by @ilyes319 in #716
- bump version by @ilyes319 in #719
Full Changelog: v0.3.8...v0.3.9
v0.3.8
What's Changed
- Fix compile_mode in MACECalculator by @hatemhelal in #632
- Update README.md using bibtex by @jcwang587 in #628
- Fix multihead prediction for eval_configs.py by @beckobert in #642
- Updating compiler support and test cases by @hatemhelal in #634
- Turn json keys representing elements into ints by @ThomasWarford in #652
- Fix file name of swa checkpoints by @beckobert in #648
- Increasing loss output digits by @vue1999 in #658
- Fix model paths by @ElliottKasoar in #654
- Density normalization by @ilyes319 in #667
- Bug fix and density normalization by @ilyes319 in #668
- List default arguments with --help by @vue1999 in #670
- Fixing multihead finetuning with density normalization by @CheukHinHoJerry in #682
- Add default argparser to main by @ilyes319 in #671
- fix case with multihead foundation model by @ilyes319 in #687
- Add DOI badge by @ilyes319 in #688
- fix name in foundation head args by @ilyes319 in #689
- bump version by @ilyes319 in #690
New Contributors
- @jcwang587 made their first contribution in #628
- @beckobert made their first contribution in #642
- @ThomasWarford made their first contribution in #652
- @CheukHinHoJerry made their first contribution in #682
Full Changelog: v0.3.7...v0.3.8
v0.3.7
Main new features
- Finetuning with multiheads : https://mace-docs.readthedocs.io/en/latest/guide/finetuning.html
- Training with multiheads: https://mace-docs.readthedocs.io/en/latest/guide/multihead_training.html
What's Changed
- Add Intel XPU device by @jharrymoore in #340
- Intel by @ilyes319 in #356
- Fix logic that deals with IsolatedAtom and 1-atom configuration by @bernstei in #393
- Fix bug that overwrote REF_* keys when those were the explicitly specified keys for the training reference quantities by @bernstei in #403
- Fix check for patience that fails if swa start was not specified by @bernstei in #405
- fix incorrect conversion of forces from calculator to atoms.arrays for finetuning pbe mp replay head by @bernstei in #409
- Better fix for PR #405, fix patience check when swa is not active by @bernstei in #411
- fix np.random.choice issue when subselecting from pre-trained configs for multihead fine tune by @bernstei in #448
- validation loss printed during training is not actually for the specific head by @bernstei in #449
- get rid of all stress/n_atoms by @bernstei in #443
- Stage two weights by @vue1999 in #522
- remove mypy from checks by @ilyes319 in #529
- Fix MAE table for universal loss by @ilyes319 in #548
- Create py.typed by @Andrew-S-Rosen in #554
- Improve Log Details by @vue1999 in #553
- Backward the log changes by @ilyes319 in #571
- Merge develop, pylint + log changes by @ilyes319 in #572
- Multihead finetuning and training by @ilyes319 in #575
- multihead finetuning by @ilyes319 in #578
- Fix bug about undefined swa by @Hongyu-yu in #584
- Fix for Issue 596 by @stenczelt in #597
- Split `download_mace_mp_checkpoint` out of `mace_mp` by @janosh in #602
- Allow creation of MACECalculator without needing to write a checkpoint to disk by @CompRhys in #535
- fix swa bug and remove e3nn fixed version by @ilyes319 in #589
- Change type hint syntax in MACECalculator to work with python39 by @bernstei in #618
- fix swa bigger than epoch by @ilyes319 in #616
- add option to return raw model in mace_mp by @ilyes319 in #619
New Contributors
- @Hongyu-yu made their first contribution in #584
- @CompRhys made their first contribution in #535
Full Changelog: v0.3.6...v0.3.7
v0.3.6
What's Changed
- bugfix for stress loss by @JPDarby in #438
- change stress normalization + setup.cfg with numpy<2.0 by @ilyes319 in #467
- fix universal loss stress by default by @ilyes319 in #472
- Support for schedulefree optimizer by @RokasEl in #477
- Add Command-Line Argument for Specifying Wandb Directory by @nqhq-lou in #466
- Add DOI badge and automated release by @ElliottKasoar in #404
- Merge latest develop by @ilyes319 in #480
- remove DOI from badge by @ilyes319 in #482
- add finetuning to readme TOC by @ilyes319 in #485
- add support for hessian in calculator by @ilyes319 in #486
- prepare the model for hessian by @ilyes319 in #490
- BUG: Single atoms not isolated incorrectly removed from training set by @jameskermode in #492
- Fix bug with isolated atoms by @ilyes319 in #493
- correct pbc extension by @ilyes319 in #506
- change dynamo to optional by @ilyes319 in #510
- Simplify installation steps by @vue1999 in #514
- Develop by @ilyes319 in #515
- remove Literal for python >3.9 by @ilyes319 in #516
New Contributors
- @JPDarby made their first contribution in #438
- @nqhq-lou made their first contribution in #466
- @ElliottKasoar made their first contribution in #404
- @jameskermode made their first contribution in #492
Full Changelog: v0.3.5...v0.3.6
v0.3.5
What's Changed
- on the fly data loading by @davkovacs in #73
- On the fly dataloading by @davkovacs in #81
- Make matscipy default neighbour list by @ilyes319 in #64
- Multi gpu by @ilyes319 in #99
- Slight modification to HDF5Dataset by @sivonxay in #117
- Multi-node, multi-GPU data parallel training. by @samwaltonnorwood in #105
- Multi-processing by @mavaylon1 in #133
- Develop by @ilyes319 in #207
- load_foundations support for max_L other than 2 by @bernstei in #273
- print validation error before any training by @bernstei in #288
- update foundations with develop by @ilyes319 in #299
- Ilyes319 patch 3 by @ilyes319 in #305
- Warning on float64 training with PyTorch 2.1 by @ilyes319 in #306
- Foundations by @ilyes319 in #307
- initial torch.compile support (inference only) by @hatemhelal in #300
- Handle a wider range of possible slurm env vars by @bernstei in #332
- Update plot_train.py by @zhubonan in #352
- add note on mp corrections on energies by @chiang-yuan in #342
- Update compile test case to use larger test system by @hatemhelal in #310
- Adding pre-commit checks and github action runners by @hatemhelal in #302
- Add torch.compile compatibility by @ilyes319 in #354
- Add an option to set the D3 damping function, which defaults to D3(BJ) by @Andrew-S-Rosen in #268
- Use HTTPS for MACE MP checkpoint URLs by @janosh in #318
- Add multi-GPU, data parallelisation, pair repulsion and distance transforms. by @ilyes319 in #359
- Add multi-GPU dataparallel by @ilyes319 in #74
- Add multi-GPU training, data parallelisation, pair repulsion, distance transform and other features by @ilyes319 in #363
- Add metadata saving to compiled model and hot fix stress extraction by @ilyes319 in #387
- Fix workflows check by @ilyes319 in #388
- add GitPython to deps by @ilyes319 in #389
- add pylint to pre-commit by @ilyes319 in #391
- fixed readme typos, and commit by @ilyes319 in #399
- make cli scripts importable from python by @alinelena in #398
- Develop by @ilyes319 in #418
- Improve warning for "energy", "forces" key by @ilyes319 in #419
- log errors and handle checkpoint io on rank 0 only by @jharrymoore in #420
- Fix checkpoint for multi-GPU by @ilyes319 in #421
- removed wrong argument parsing in preprocess_data by @VondrakMar in #423
- Fix preprocess data and pylint by @ilyes319 in #424
- fix atom e0s being subtracted twice in mace calculator by @RokasEl in #425
- fix atom e0s being subtracted twice in mace calculator by @ilyes319 in #426
- fix bug test set stress key by @ilyes319 in #428
New Contributors
- @samwaltonnorwood made their first contribution in #105
- @mavaylon1 made their first contribution in #133
- @hatemhelal made their first contribution in #300
- @zhubonan made their first contribution in #352
- @Andrew-S-Rosen made their first contribution in #268
- @alinelena made their first contribution in #398
- @VondrakMar made their first contribution in #423
- @RokasEl made their first contribution in #425
Full Changelog: v0.3.4...v0.3.5
