CVE-2025-66448
vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named NemotronNanoVLConfig. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
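The vulnerable pattern and its remediation can be illustrated with a minimal sketch. This is not the actual vLLM patch: the function name, the guard placement, and the placeholder return value are all hypothetical; only the auto_map entry format ("repo_id--module.ClassName") and the trust_remote_code semantics follow the Hugging Face conventions the advisory describes.

```python
# Hypothetical sketch of the missing trust_remote_code guard.
# Pre-0.11.1, the check below was absent, so the dynamic module
# referenced by the auto_map entry was fetched and executed
# unconditionally.

def resolve_auto_map(auto_map_entry: str, trust_remote_code: bool) -> str:
    """Resolve an auto_map entry such as
    "attacker/backend-repo--configuration.EvilConfig".

    Entries of this form point at Python code hosted in a (possibly
    different) remote repository; resolving one downloads and executes
    that code on the local host.
    """
    if not trust_remote_code:
        # The fix: refuse to execute remote code unless the caller
        # explicitly opted in.
        raise ValueError(
            f"auto_map entry {auto_map_entry!r} requires "
            "trust_remote_code=True; refusing to fetch and execute "
            "remote code"
        )
    # A real implementation would now call something like
    # get_class_from_dynamic_module(auto_map_entry, ...) and
    # instantiate the returned class. We return the entry itself as a
    # stand-in for the resolved class.
    return auto_map_entry
```

The attack described above works because the frontend repo's config.json looks harmless; only the auto_map value reveals that loading it pulls code from a second repository, which is why the guard must run before any dynamic-module resolution.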
References
https://github.com/CVEProject/cvelistV5/tree/main/cves/2025/66xxx/CVE-2025-66448.json
https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86
https://github.com/vllm-project/vllm/pull/28126
https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm
https://nvd.nist.gov/vuln/detail/CVE-2025-66448
