The PyPI package installs the Prefect workflow framework and all task definitions. It does not bundle pretrained model weights — you can still bring your own ASE-compatible calculator.
pip install mlip-arena
This install is sufficient to run workflows with any custom ASE Calculator and to orchestrate tasks with Prefect. To use the integrated pretrained models (MACE, CHGNet, eSEN, etc.) you need the source installation below.
The source installation clones the repository and uses uv to install all compiled pretrained models with minimal dependency conflicts.
We strongly recommend a clean virtual environment before proceeding. Multiple popular MLIPs have conflicting transitive dependencies; installing into an existing environment is likely to break things.
1. Install uv (optional but strongly recommended)
uv is a fast Python package manager that dramatically reduces install time and resolves conflicts better than pip.
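If uv is not already installed, the standalone installer documented by Astral works on Linux and macOS; installing via pip is a fallback:

```shell
# Official standalone installer (Linux/macOS), per the uv documentation
curl -LsSf https://astral.sh/uv/install.sh | sh

# Alternatively, install uv into an existing Python environment
pip install uv
```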
If disk space is tight, installing all compiled models can fill up local storage quickly. Pass --no-cache to uv pip install or run uv cache clean afterwards to reclaim space.
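For example (the extra name here is just for illustration; any uv install command accepts the flag):

```shell
# Skip the wheel cache for a single large install
uv pip install "mlip-arena[mace]" --no-cache

# Or reclaim cached wheels and source builds afterwards
uv cache clean
```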
You can install support for individual model families instead of running the full install script. Each extra pins the version tested in MLIP Arena:
| Extra | Pinned package | Models unlocked |
| --- | --- | --- |
| mace | mace-torch==0.3.12 | MACE-MP(M), MACE-MPA, MACE-OFF(M) |
| matgl | matgl==1.2.6 | M3GNet, CHGNet (via matgl) |
| fairchem | fairchem-core==1.10.0 | eqV2(OMat), eSEN, EquiformerV2, eSCN |
| orb | orb-models==0.4.0 | ORB, ORBv2 |
| deepmd | deepmd-kit@git (v3.0.0b4) | DeepMD |
# Install a single extra, e.g. MACE only
pip install "mlip-arena[mace]"

# Install multiple extras at once
pip install "mlip-arena[mace,matgl,fairchem]"
The deepmd extra pins torch==2.2.0 and installs deepmd-kit directly from GitHub. Install it in isolation to avoid overwriting the PyTorch version required by other models.
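A minimal sketch of such an isolated install, assuming uv and a POSIX shell; the environment name is arbitrary:

```shell
# Create and activate a dedicated environment for deepmd
uv venv .venv-deepmd
source .venv-deepmd/bin/activate

# Install only the deepmd extra here, so its torch==2.2.0 pin
# cannot clobber the PyTorch version used by other model families
uv pip install "mlip-arena[deepmd]"
```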
The eqV2(OMat) and eSEN checkpoints are gated behind a HuggingFace model repository. You must:
1. Request access
Visit the facebook/OMAT24 model repo on HuggingFace and request downloading access. Note: you need access to the model repo, not the dataset repo.
2. Authenticate locally
Log in to HuggingFace Hub on your machine:
huggingface-cli login
This writes a token to ~/.cache/huggingface/token. The fairchem loader will pick it up automatically at import time.
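To confirm the token is in place before running any fairchem workflow, you can check your login status:

```shell
# Prints your HuggingFace username if the stored token is valid
huggingface-cli whoami

# The token file itself lives here by default
ls ~/.cache/huggingface/token
```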
Skipping authentication will cause the fairchem models to fail at checkpoint download with a 401 Unauthorized error, even after a successful pip install "mlip-arena[fairchem]".