
Releases: ACEsuit/mace

v0.3.14

05 Aug 18:03
9e223a1


MACE v0.3.14 Release Notes

We are excited to announce MACE v0.3.14, featuring significant new capabilities for embedding functionality, GPU acceleration, dielectric properties prediction, and enhanced training options.

🏗️ Foundation Models

MACE-OMOL Foundation Models

Introduced support for MACE-OMOL models trained on the 100M-structure OMOL dataset with charge and spin embeddings, providing improved accuracy for charged and spin-polarized organic molecules.

Example usage:

from ase.build import molecule
from mace.calculators import mace_omol

# Build an example molecule (any ASE Atoms object works)
atoms = molecule("H2O")

# Load a MACE-OMOL model with charge and spin support
calc = mace_omol(model="extra_large", device="cuda")
atoms.calc = calc

# Set charge and spin for the system
atoms.info["charge"] = 1.0      # +1 charge
atoms.info["spin"] = 1.0        # spin multiplicity

energy = atoms.get_potential_energy()
forces = atoms.get_forces()

Small OMAT Model

Added small-omat-0 to the foundation model shortcuts for faster inference when the highest accuracy is not critical.

Example usage:

from mace.calculators import mace_mp
calc = mace_mp(model="small-omat-0")  # Faster, smaller model

🎯 Fine-tuning

Pseudo-label Finetuning

Added multihead pseudo-label finetuning, allowing models to be fine-tuned on their own predictions for improved accuracy and ease of use.

Example usage:

python run_train.py \
  --foundation_model="medium" \
  --train_file=real_data.xyz \
  --pt_train_file=replay_data.xyz \
  --multiheads_finetuning=True \
  --pseudolabel_replay=True

Enhanced Multihead Finetuning

Fixed various issues with multihead finetuning and improved the robustness of the code.
Added the option to use any of the mace_mp model names as the --foundation_model key, see: https://github.com/ACEsuit/mace/blob/0139da1b864b29054f07db6627887a847a42050e/mace/calculators/foundations_models.py#L19. For example, --foundation_model="small-omat-0".
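
A minimal fine-tuning invocation using one of the named shortcuts might look like this (flags as in the examples above):

python run_train.py \
  --foundation_model="small-omat-0" \
  --train_file=real_data.xyz \
  --multiheads_finetuning=True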

🔬 Models

Embedding Functionality for MACE

Added support for embedding additional properties like total charge, total spin, and electronic temperature into MACE models, enabling more accurate predictions for charged and magnetic systems.

Training example with custom embeddings:

python run_train.py \
  --train_file=data.xyz \
  --embedding_specs='{"charge": {"embed_type": "continuous", "min": -5, "max": 5}, "spin": {"embed_type": "continuous", "min": 0, "max": 4}, "temperature": {"embed_type": "continuous", "min": 0, "max": 3000}}' \
  --use_embedding_readout \
  --model=MACE

See code and test.
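
At inference time the trained model is expected to pick up these values from atoms.info. The key convention below is an assumption (mirroring the MACE-OMOL example above); check the linked code and test for the exact names:

from ase.build import molecule
from mace.calculators import MACECalculator

atoms = molecule("H2O")
# Assumed convention: atoms.info keys match the embedding_specs names
atoms.info["charge"] = 0.0
atoms.info["spin"] = 0.0
atoms.info["temperature"] = 300.0

calc = MACECalculator(model_paths="embedded_mace.model", device="cuda")  # hypothetical model file
atoms.calc = calc
energy = atoms.get_potential_energy()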

Dielectric MACE for Polarizability Prediction

Added support for the Dielectric MACE model to predict molecular polarizabilities alongside energies and forces.

Reference: Kapil et al., "First-principles spectroscopy of aqueous interfaces using machine-learned electronic and quantum nuclear effects"

Example usage:

# Train dielectric MACE model
python run_train.py \
  --model="AtomicDielectricMACE" \
  --train_file=data.xyz \
  --loss="dipole_polar" \
  --polarizability_key="REF_polarizability" \
  --error_table="DipolePolarRMSE"

See code and test.

LES Models for Coulomb Interactions

Implemented support for LES models for an improved description of Coulomb interactions.

Reference: Bingqing Cheng, "Latent Ewald summation for machine learning of long-range interactions"

Example usage:

# Train MACELES model
python run_train.py \
  --model=MACELES \
  --train_file=data.xyz

See code and test.

New Non-linear Blocks

Introduced improved non-linear readout blocks that provide better accuracy when training MACE on large datasets. See code.

⚡ Performance Improvements

OpenEquivariance Support

Integrated OpenEquivariance support for MACE kernels, enabling acceleration on AMD GPUs and providing alternative compute backends. Pass --enable_oeq=True to train with OpenEquivariance, and set enable_oeq=True on the ASE calculator to run inference with it.
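
For example (a sketch based on the flags described above):

# Train with OpenEquivariance kernels
python run_train.py --train_file=data.xyz --enable_oeq=True

and, for inference through ASE:

from mace.calculators import mace_mp

calc = mace_mp(model="medium", device="cuda", enable_oeq=True)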

CuEq Fused Operations

Added CuEq fused operations for tensor product and scatter sum operations, significantly improving training and inference speed on CUDA devices.
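
A sketch of enabling the CuEq path in the ASE calculator, assuming the cuequivariance packages from the Installation section of v0.3.13 below are installed:

from mace.calculators import mace_mp

# enable_cueq switches to the cuequivariance-accelerated kernels
calc = mace_mp(model="medium", device="cuda", enable_cueq=True)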

Refactored Clebsch-Gordan Coefficients

Implemented an option to use refactored Clebsch-Gordan (CG) coefficients with fewer computational paths for improved efficiency.

Example usage:

python run_train.py \
  --train_file=data.xyz \
  --use_reduced_cg

Element Agnostic Product Basis

Added an option for an element-agnostic product basis, reducing memory usage and improving scalability for systems with many element types.

Example usage:

python run_train.py \
  --train_file=data.xyz \
  --use_agnostic_product

🔧 Training and Infrastructure Improvements

Improved Multi-GPU Support

  • Fixed support for torchrun and MPI multi-GPU training (see the sketch below)
  • The distributed-training backend is now set dynamically based on device type
  • Better handling of distributed training configurations
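
For example, a single-node torchrun launch might look like the following sketch (assuming --distributed is the existing distributed-training flag):

torchrun --nproc_per_node=4 -m mace.cli.run_train \
  --train_file=data.xyz \
  --distributed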

Intel GPU Support

Added IPEX (Intel Extension for PyTorch) support for Intel GPU inference and training.
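
A sketch, assuming the Intel GPU is exposed as PyTorch's "xpu" device via intel-extension-for-pytorch:

import intel_extension_for_pytorch  # registers the "xpu" device with PyTorch
from mace.calculators import mace_mp

calc = mace_mp(model="medium", device="xpu")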

🐛 Bug Fixes and Improvements

  • Fixed bug preventing building 1-layer (no message passing) MACE models
  • Improved handling of E0s specification via JSON in preprocessing
  • Better pathlib usage for file suffix handling in finetuning_select
  • Fixed OEQ import handling on macOS without CUDA
  • Corrected loss output to log files during fine-tuning
  • Improved behavior of auto-downloaded MP data when using pseudolabels
  • Enhanced handling of atomic numbers in multihead training

📚 Additional Features

  • Added support for calculating descriptors in eval_configs
  • Enhanced node energy output capabilities
  • Improved interaction energy plotting functionality
  • Better filtering of training quantities with zero weights

🙏 Acknowledgments

We thank all contributors to this release, including new contributors @Alexsp32, @Enry99, @vbharadwaj-bk, @naik-aakash, and @Nilsgoe.

Full Changelog: v0.3.13...v0.3.14


For detailed documentation and examples, visit our GitHub repository and documentation.

v0.3.13

30 Apr 21:30
b5faaa0


MACE 0.3.13 Release Notes

🚀 LAMMPS Integration Enhancements

  • Added a new MLIAP interface for LAMMPS, enabling significantly improved performance and flexibility. (Documentation)
  • Implemented CuEquivariance support in LAMMPS models for GPU acceleration.
  • Added multi-GPU inference support for large-scale molecular dynamics simulations, using MPI message passing of intermediate tensors for efficient scaling.
  • Improved timing and profiling capabilities via environment variable controls.
  • New command-line option in create_lammps_model.py to select between libtorch (legacy) and MLIAP formats:
# Convert model to MLIAP format
python -m mace.cli.create_lammps_model model.pt --format=mliap

🧮 Atomic Stresses Computation

  • Added support for computing atomic stresses and atomic virials.
  • Useful for analyzing local stress distributions in materials simulations.
from ase import build
from mace.calculators import mace_mp

# Build a 32-atom Al supercell (4-atom cubic fcc cell repeated 2x2x2)
atoms = build.bulk("Al", "fcc", a=4.05, cubic=True)
atoms = atoms.repeat((2, 2, 2))

calc = mace_mp(device="cpu", compute_atomic_stresses=True)
atoms.calc = calc  # set_calculator() is deprecated in recent ASE

atoms.get_potential_energy()
stress = atoms.get_stress()      # global stress tensor
stresses = atoms.get_stresses()  # per-atom stresses

print("Stress shape:", stress.shape)
print("Per-atom stresses shape:", stresses.shape)

🗝️ Property Keys System Redesign

  • Reworked the property keys system for improved flexibility and maintainability.
  • Introduced KeySpecification class to manage mappings between data formats.
  • Added DefaultKeys enum to standardize access patterns.
  • Enhanced error reporting when keys are missing.
  • Fully backward compatible with previous key conventions (see the example below).
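
For instance, the long-standing key overrides keep working on top of KeySpecification (a sketch):

python run_train.py \
  --train_file=data.xyz \
  --energy_key="REF_energy" \
  --forces_key="REF_forces"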

🧱 New Foundation Models

  • MACE-MATPES: new foundation models fine-tuned on the MATPES dataset.
  • A new PBE model without +U inconsistencies and a new r2SCAN model.
  • Better transferability for fine-tuning on domain-specific datasets.
| Model         | Method | Energy (meV/atom) | Force (meV/Å) | Stress (GPa) |
|---------------|--------|-------------------|---------------|--------------|
| M3GNet        | PBE    | 45                | 177           | 0.898        |
| CHGNet        | PBE    | 32                | 124           | 0.617        |
| TensorNet     | PBE    | 36                | 138           | 0.695        |
| MACE          | PBE    | 34                | 122           | 0.296        |
| MACE-MATPES-0 | PBE    | 23                | 107           | 0.304        |
| M3GNet        | r2SCAN | 45                | 208           | 0.982        |
| CHGNet        | r2SCAN | 27                | 150           | 0.705        |
| TensorNet     | r2SCAN | 34                | 163           | 0.754        |
| MACE-MATPES-0 | r2SCAN | 19                | 119           | 0.265        |

🧠 Improved Model Head Selection

  • Added explicit head selection in the MACE calculator for multi-head models.
  • You can now directly specify the head:
# Example of selecting a specific head
calc = MACECalculator(model_path="model.pt", head="DFT")
  • Automatically falls back to the "default" head if not specified.
  • Clear error messages when requested head is unavailable.
  • Head-specific configurations supported during inference.

🧪 MACE Fine-Tuning Preselection CLI

MACE now provides a tool for selecting configurations when fine-tuning foundation models.

🔍 Key Features

  • Multiple Filtering Strategies:

    • combinations: Only elements in your subset
    • exclusive: Exactly your elements
    • inclusive: All your elements plus potentially others
  • Selection Methods:

    • fps (Farthest Point Sampling) for maximum diversity
    • random for uniform random sampling
  • Weighting Control:

    • Adjust the importance of pretraining vs. fine-tuning data during selection

💡 Usage Example

python -m mace.cli.fine_tuning_select \
  --configs_pt path/to/pretraining_data.xyz \
  --atomic_numbers "[1, 6, 8]" \
  --num_samples 5000 \
  --filtering_type combinations \
  --output selected_configs.xyz

This functionality is also available directly in run_train.py when using the --atomic_numbers parameter. Note that run_train no longer performs this filtering unless the atomic numbers are given.

🧠 L-BFGS Training Support

MACE now supports L-BFGS optimization for enhanced convergence in energy and force training. This second-order optimizer can refine models beyond what first-order methods like Adam achieve.

🚀 Key Points

  • Ideal for final-stage training (1–2 epochs) after Adam
  • Often achieves lower energy errors and better balance of energy vs. force loss
  • Supports multi-GPU training
  • Slower per epoch but may require fewer total epochs

🛠️ Usage

First, pretrain with Adam:

python -m mace.cli.run_train --optimizer adam [...other options...]

Then refine with L-BFGS:

python -m mace.cli.run_train --lbfgs --restart_latest [...other options...]

ℹ️ Uses history_size=200 and max_iter=20 with strong Wolfe line search.
Requires more memory as it processes the full dataset per update.

📦 Installation

pip install --upgrade mace-torch

# For CUDA acceleration (CUDA 12)
pip install cuequivariance cuequivariance-torch cuequivariance-ops-torch-cu12

# For CUDA 11
pip install cuequivariance-ops-torch-cu11

For complete usage and documentation, please see our official documentation.
If you encounter any issues, please report them on our GitHub Issues page.


🔄 Full Changelog

Full Changelog: v0.3.12...v0.3.13

v0.3.12

16 Mar 00:50
c1114f2


MACE 0.3.12 Release Notes

Version support updates

  • Hotfix for test files that are not lists.
  • The standard deviation is now extracted from the head config.

Please see our documentation for complete details on features and usage. If you encounter any issues, please report them on our GitHub Issues page.

v0.3.11

14 Mar 10:49
53df9ea


MACE 0.3.11 Release Notes

Version support updates

Enhanced Dataset Processing

Multiple dataset sources can now be combined, either as colon-separated paths or as per-head lists of files and directories:

# Example configuration for LMDB
train_file: "/path/data1/:/path/data2/:/path/data3/"
valid_file: "/path/valid1.lmdb:/path/valid2.lmdb"
# Example configuration for different datasets
heads:
  QM7:
    train_file: ["data/qm7/train.xyz", "data/qm7/train.h5", "data/qm7/database/"]
    valid_file: ["data/qm7/valid.xyz", "data/qm7_h5/valid/"]
    test_file:  ["data/qm7/test.xyz", "data/qm7/test.h5"]

New Foundation Models

  • MACE-MPA-0: improved accuracy for materials; the new default foundation model when calling mace_mp().
  • MACE-OMAT-0: new model trained on the OMAT dataset with DFT (PBE+U) and the VASP 5.4 pseudopotentials.

Plotting training

  • Added automatic plotting of training, validation, and test statistics after a training run. Two arguments were added to the mace_run_train command-line interface:
--plot             # enable/disable plotting (default: True)
--plot_frequency   # 0 plots only at the end; N plots every N epochs
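
For example (a sketch):

python run_train.py --train_file=data.xyz --plot_frequency=10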


Support for higher L and max_ell

  • Switched to cuequivariance's generalized Clebsch-Gordan (CG) coefficients for values of L, max_ell, or correlation whose CG were not precomputed in e3nn 0.4.4. Models can now be trained with arbitrary L and max_ell.
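
A training sketch with a higher max_ell than the e3nn-precomputed range (assuming --enable_cueq switches training to the cuequivariance kernels):

python run_train.py \
  --train_file=data.xyz \
  --max_ell=4 \
  --enable_cueq=True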

Installation

pip install --upgrade mace-torch

# For CUDA acceleration (CUDA 12)
pip install cuequivariance cuequivariance-torch cuequivariance-ops-torch-cu12
# For CUDA 11, use cuequivariance-ops-torch-cu11 instead

Please see our documentation for complete details on features and usage. If you encounter any issues, please report them on our GitHub Issues page.

Full Changelog: v0.3.10...v0.3.11

v0.3.10

04 Feb 11:55
0bcbdb6


Full Changelog: v0.3.9...v0.3.10

v0.3.9

27 Nov 17:36
1ea5c55


Full Changelog: v0.3.8...v0.3.9

v0.3.8

12 Nov 20:43
bd41231


Full Changelog: v0.3.7...v0.3.8

v0.3.7

02 Oct 18:02
118a514


Full Changelog: v0.3.6...v0.3.7

v0.3.6

16 Jul 10:55
2d89108


Full Changelog: v0.3.5...v0.3.6

v0.3.5

10 Jun 10:20
dee204f


Full Changelog: v0.3.4...v0.3.5