get_freer_device
- CUDA GPUs — queries free memory for every visible GPU and returns the one with the most free memory.
- Apple MPS — if no CUDA GPU is found but `torch.backends.mps.is_available()` is `True`, returns `mps`.
- CPU — fallback when neither CUDA nor MPS is available.
`MLIPCalculator.__init__` calls this function automatically when no device is supplied, so most users never need to call it directly.
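The fallback chain described above can be sketched in a few lines. This is a hedged reimplementation, not the mlip-arena source; in particular, `torch.cuda.mem_get_info` is used here as a plausible way to read free memory per GPU:

```python
import torch

def get_freer_device() -> torch.device:
    """Sketch of the documented fallback chain: CUDA -> MPS -> CPU."""
    if torch.cuda.is_available():
        # mem_get_info(i) returns (free_bytes, total_bytes) for GPU i
        free = [torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())]
        return torch.device(f"cuda:{free.index(max(free))}")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

print(get_freer_device())
```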
Parameters
This function takes no arguments.

Return value
The selected device. One of:
- `torch.device("cuda:N")` — the CUDA GPU at index `N` with the most free memory.
- `torch.device("mps")` — Apple Metal Performance Shaders (Apple Silicon / AMD GPU on macOS).
- `torch.device("cpu")` — fallback when no accelerator is available.
Examples
Automatic device selection (recommended)
You normally do not need to call `get_freer_device` directly. `get_calculator` (and `MLIPCalculator`) call it for you:
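A sketch of the automatic flow. The `get_calculator` call is left as a comment because its exact signature and valid model names are not shown on this page (assumptions); the runnable part mirrors the documented device choice with plain torch:

```python
import torch

# In mlip-arena you would construct the calculator with no device argument,
# e.g. (model name and signature are illustrative assumptions):
#
#   calc = get_calculator("SomeModel")  # device=None -> get_freer_device()
#
# The device it lands on follows the documented fallback chain:
device = (
    torch.device("cuda")
    if torch.cuda.is_available()
    else torch.device("mps")
    if torch.backends.mps.is_available()
    else torch.device("cpu")
)
print(device)
```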
Explicit device selection
Call `get_freer_device` when you want to pin the device yourself before passing it downstream:
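A self-contained sketch of explicit selection. `get_freer_device` is redefined inline so the snippet runs on its own (in real code you would import it from mlip-arena), and the downstream `get_calculator` call is a commented assumption:

```python
import torch

def get_freer_device() -> torch.device:
    # Inline stand-in for mlip-arena's helper (sketch, not the library source).
    if torch.cuda.is_available():
        free = [torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())]
        return torch.device(f"cuda:{free.index(max(free))}")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = get_freer_device()
# Pin the choice and hand it downstream, e.g. (signature assumed):
#   calc = get_calculator("SomeModel", device=device)
x = torch.ones(4, device=device)  # later tensors share the pinned device
print(x.device)
```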
Checking the selected device without constructing a calculator
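To inspect the choice up front without building any calculator, run the fallback chain and, on CUDA, report free memory. This sketch inlines the documented logic and assumes `torch.cuda.mem_get_info` as the way free memory is read:

```python
import torch

if torch.cuda.is_available():
    # Index of the visible GPU with the most free bytes
    idx = max(range(torch.cuda.device_count()),
              key=lambda i: torch.cuda.mem_get_info(i)[0])
    device = torch.device(f"cuda:{idx}")
    free, total = torch.cuda.mem_get_info(idx)
    print(f"{device}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
    print(device)
else:
    device = torch.device("cpu")
    print(device)
```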
Best practices
- Multi-GPU nodes — `get_freer_device` picks the GPU with the most free memory at the moment it is called. On busy shared nodes, re-call it immediately before starting a long run rather than caching the result at startup.
- Pinning a specific GPU — bypass `get_freer_device` and pass `device="cuda:2"` (or whichever index you want) directly to `get_calculator`.
- CPU-only environments — no configuration needed. `get_freer_device` falls back to CPU automatically, so code is portable without changes.
- Mixed precision — after selecting the device, cast your model weights to `torch.float32` or `torch.float16` explicitly if the default precision causes numerical issues on your hardware.
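For the mixed-precision note above, a minimal cast looks like this (a generic torch sketch, not mlip-arena-specific):

```python
import torch

model = torch.nn.Linear(8, 8)    # parameters default to float32
model = model.to(torch.float16)  # halves parameter memory; watch for under/overflow
assert next(model.parameters()).dtype == torch.float16

# Cast back if fp16 causes numerical issues on your hardware:
model = model.to(torch.float32)
```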
How MLIPCalculator uses device selection
`MLIPCalculator.__init__` (in `mlip_arena/models/__init__.py`) follows the same logic: when no `device` argument is supplied, it calls `get_freer_device` to choose one. If you construct a calculator indirectly (for example through `get_calculator`), pass `device` explicitly to avoid an unintended GPU selection.
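The device-defaulting idiom described above can be sketched as follows. This is not the actual mlip-arena source, only the pattern it follows:

```python
import torch

class CalculatorSketch:
    """Device-defaulting idiom: use the caller's device, else auto-select."""

    def __init__(self, device=None):
        if device is None:
            # Same fallback chain as get_freer_device (sketch)
            if torch.cuda.is_available():
                device = "cuda"
            elif torch.backends.mps.is_available():
                device = "mps"
            else:
                device = "cpu"
        self.device = torch.device(device)

print(CalculatorSketch().device)       # auto-selected
print(CalculatorSketch("cpu").device)  # explicitly pinned
```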