torch.load of state_dict.pt in optimate neural_magic_training.py
MITRE service request: 1987825
Status: RESERVED (pending a qualifying public reference per CNA Rules §5.3).
The _load_model() function in the neural_magic_training.py script of the optimate project, at commit a6d302f912b481c94370811af6b11402f51d377f (2024-07-21), is vulnerable to insecure deserialization (CWE-502). When loading a model state dictionary from a state_dict.pt file via torch.load(), the function does not set the weights_only=True security parameter. This allows deserialization of arbitrary Python objects through the pickle module. A remote attacker can exploit this by providing a maliciously crafted state_dict.pt file inside a directory specified via the --model argument, leading to arbitrary code execution during deserialization on the victim's system.
When --model references a directory layout that includes state_dict.pt, the loader calls torch.load without weights_only=True, enabling classic pickle gadgets embedded in the tensor archive. Combined with untrusted model directories, this yields remote code execution the moment the checkpoint is opened.
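The gadget mechanism is plain pickle behavior and can be demonstrated without torch. The Gadget class below is a hypothetical stand-in for an attacker's payload; a real exploit would invoke os.system or similar rather than a harmless eval:

```python
import pickle

class Gadget:
    """Hypothetical attacker payload: pickle serializes __reduce__'s
    return value, and the callable it names runs at load time."""
    def __reduce__(self):
        # Harmless stand-in; a real payload would return
        # (os.system, ("some command",)) or similar.
        return (eval, ("2 + 2",))

# What an attacker would write into state_dict.pt:
blob = pickle.dumps(Gadget())

# What torch.load() effectively does without weights_only=True --
# the embedded callable executes during deserialization:
result = pickle.loads(blob)
print(result)  # -> 4
```

Note that the code executes on load, not on any later use of the object, which is why merely opening an untrusted checkpoint is sufficient for compromise.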
Affected commit: a6d302f912b481c94370811af6b11402f51d377f
Attack vector: neural_magic_training.py --model <dir>, where the attacker placed a malicious state_dict.pt
Root cause: torch.load default semantics deserialize arbitrary objects.
Impact: High; equivalent to handing an attacker a Python shell on the training host.
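weights_only=True works by restricting which globals the unpickler may resolve. The allow-list idea can be sketched with a stdlib Unpickler subclass; this illustrates the semantics only and is not torch's actual implementation:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Sketch of the allow-list idea behind weights_only=True:
    refuse every global, so gadget callables cannot be rebuilt."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

class Gadget:
    def __reduce__(self):
        return (eval, ("2 + 2",))  # hypothetical payload

# Plain containers and numbers need no globals, so they still load:
safe = RestrictedUnpickler(io.BytesIO(pickle.dumps({"w": [1.0, 2.0]}))).load()
print(safe)  # -> {'w': [1.0, 2.0]}

# The gadget is rejected before its callable can run:
try:
    RestrictedUnpickler(io.BytesIO(pickle.dumps(Gadget()))).load()
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

Tensor data is just nested containers of numbers at the pickle level, which is why an allow-list can admit weights while rejecting executable gadgets.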
Mitigation: pass weights_only=True when compatible, or refuse pickled blobs from untrusted trees; verify --model directories with checksum manifests.
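A checksum manifest can be enforced with stdlib hashing before any file reaches the loader. The manifest format below (relative filename mapped to a sha256 hex digest) is an assumption, since the report only names the technique:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash in chunks so multi-GB checkpoints never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(model_dir: Path, manifest: dict) -> None:
    """Raise before loading if any listed file was tampered with.
    `manifest` maps relative names to expected sha256 digests
    (assumed format)."""
    for name, expected in manifest.items():
        actual = sha256_of(model_dir / name)
        if actual != expected:
            raise ValueError(f"checksum mismatch for {name}: {actual}")
```

Calling verify_manifest on the --model directory before torch.load turns a tampered state_dict.pt into an aborted run instead of code execution.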