vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM had a critical remote-code-execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, this config class resolves the mapping with get_class_from_dynamic_module(…) and immediately instantiates the returned class, which fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repository whose config.json points via auto_map to a separate malicious backend repository; loading the frontend silently runs the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
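The flawed pattern and the corresponding fix can be sketched roughly as follows. This is an illustrative reconstruction, not the actual vLLM source: the function name `resolve_config_class` and the example repository/class names are assumptions, and the real code path goes through Hugging Face transformers' `get_class_from_dynamic_module` (shown here only as a comment).

```python
def resolve_config_class(config_dict, model_repo, trust_remote_code):
    """Hypothetical sketch of resolving an auto_map entry from a model config."""
    auto_map = config_dict.get("auto_map", {})
    # e.g. "attacker/backend--conf.EvilConfig" pointing at a separate repo
    class_ref = auto_map.get("AutoConfig")
    if class_ref is None:
        return None
    # Vulnerable behavior: call get_class_from_dynamic_module(...) here
    # unconditionally, downloading and executing code from the referenced
    # repository regardless of what the caller asked for.
    # Fixed behavior: honor the caller's trust_remote_code flag first.
    if not trust_remote_code:
        raise ValueError(
            f"Config for {model_repo} requires remote code ({class_ref}); "
            "pass trust_remote_code=True to allow it."
        )
    # from transformers.dynamic_module_utils import get_class_from_dynamic_module
    # return get_class_from_dynamic_module(class_ref, model_repo)
```

The key point is that the guard must run before any dynamic-module resolution; in the vulnerable versions the download-and-execute step happened unconditionally.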
Weakness
The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment.
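A minimal illustration of this weakness class (code injection), independent of vLLM: externally supplied input is spliced into a code string and evaluated, so special elements in the input change the behavior of the intended code segment. The `compute` helper is a hypothetical example, not code from the affected product.

```python
def compute(user_expr):
    # Unsafe: the caller's string is concatenated into source code and
    # evaluated, so the input IS code rather than data.
    return eval(f"1 + {user_expr}")

# Benign input behaves as intended:
compute("2")  # -> 3
# Crafted input executes attacker-chosen code as a side effect:
compute("len(__import__('os').getcwd())")
```

The auto_map issue above is the same weakness at a larger scale: a field in a downloaded config file ends up selecting and executing code.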
Affected Software
| Name | Vendor | Start Version | End Version |
| --- | --- | --- | --- |
| Vllm | Vllm | * | 0.11.1 (excluding) |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-cuda-rhel9:sha256:bddcf7ab6d576572b6d60822c313ffebcd9869e4fde93e32ac327821f93cf32b | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-rocm-rhel9:sha256:7856bdb7ae0d643a7b9362c164d4d4fe3c0c7186f5fff73a7ae9835b3df52e57 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/model-opt-cuda-rhel9:sha256:14e32e88f1b89f59ed34a6d712746b82a6a54c6ed4727784f18aeff853abbdc7 | * |
Potential Mitigations
- Run your code in a “jail” or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which code can be executed by your product.
- Examples include the Unix chroot jail and AppArmor. In general, managed code may provide some protection.
- This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.
- Be careful to avoid CWE-243 and other weaknesses related to jails.
- Assume all input is malicious. Use an “accept known good” input validation strategy, i.e., use a list of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.
- When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, “boat” may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as “red” or “blue.”
- Do not rely exclusively on looking for malicious or malformed inputs. This is likely to miss at least one undesirable input, especially if the code’s environment changes. This can give attackers enough room to bypass the intended validation. However, denylists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
- To reduce the likelihood of code injection, use stringent allowlists that limit which constructs are allowed. If you are dynamically constructing code that invokes a function, then verifying that the input is alphanumeric might be insufficient. An attacker might still be able to reference a dangerous function that you did not intend to allow, such as system(), exec(), or exit().
- For Python programs, it is frequently encouraged to use the ast.literal_eval() function instead of eval(), since it is intentionally designed to avoid executing code. However, an adversary could still cause excessive memory or stack consumption via deeply nested structures [REF-1372], so the Python documentation discourages use of ast.literal_eval() on untrusted data [REF-1373].
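The allowlist and ast.literal_eval() guidance above can be sketched as follows. The color allowlist mirrors the "red"/"blue" business-rule example given earlier; the helper names are illustrative.

```python
import ast

# Accept-known-good allowlist: enumerate valid inputs rather than
# trying to recognize malicious ones.
ALLOWED_COLORS = {"red", "blue", "green"}

def validate_color(value):
    # "boat" is alphanumeric and syntactically fine, but it fails the
    # business rule, so it is rejected outright.
    if value not in ALLOWED_COLORS:
        raise ValueError(f"unexpected color: {value!r}")
    return value

def parse_literal(text):
    # ast.literal_eval parses Python literals (numbers, strings, tuples,
    # lists, dicts, sets, booleans, None) without executing code; anything
    # else, such as a function call, raises ValueError. Note the caveat
    # above: it can still exhaust memory on deeply nested input.
    return ast.literal_eval(text)
```

For example, `parse_literal("[1, 2, 3]")` yields a list, while `parse_literal("__import__('os')")` is rejected because a call expression is not a literal.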
References