vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.11.1, vLLM contained a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, this config class resolves the mapping with get_class_from_dynamic_module(…) and immediately instantiates the returned class, which fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly passes trust_remote_code=False to vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repository whose config.json points via auto_map to a separate malicious backend repository; loading the frontend silently runs the backend's code on the victim host. This vulnerability is fixed in version 0.11.1.
The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment.
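The flawed control flow can be illustrated with a minimal, self-contained Python sketch. All names here (fake_get_class_from_dynamic_module, vulnerable_config_init) are hypothetical stand-ins, not the actual vLLM or transformers source; the stub only records the reference instead of downloading code. The point it demonstrates is the vulnerability's essence: the auto_map entry is resolved and instantiated before trust_remote_code is ever consulted.

```python
# Hypothetical sketch of the vulnerable pattern (stand-in names, not
# the real vLLM code). In the real library, the dynamic-module loader
# downloads and executes Python from the repo named in the auto_map
# string; here we only record the reference to show the control flow.

executed_remote_refs = []

def fake_get_class_from_dynamic_module(ref):
    """Stand-in for transformers' dynamic-module loader: the real one
    fetches and executes remote Python identified by `ref`."""
    executed_remote_refs.append(ref)
    return dict  # pretend the remote repo defined a config class

def vulnerable_config_init(config_dict, trust_remote_code=False):
    auto_map = config_dict.get("auto_map", {})
    ref = auto_map.get("AutoConfig")
    if ref is not None:
        # FLAW: the remote reference is resolved (and, in the real
        # code, remote Python is executed) without ever checking
        # trust_remote_code first.
        cls = fake_get_class_from_dynamic_module(ref)
        return cls()
    return {}

# A benign-looking frontend config.json whose auto_map points at a
# separate (attacker-controlled) backend repo:
cfg = {"auto_map": {"AutoConfig": "attacker/backend-repo--conf.EvilConfig"}}

# Even with trust_remote_code=False, the remote reference is resolved:
vulnerable_config_init(cfg, trust_remote_code=False)
print(executed_remote_refs)  # the "remote code" was still reached
```

A correct implementation would gate the get_class_from_dynamic_module call on trust_remote_code being explicitly true, raising an error otherwise; this is the behavior restored by the 0.11.1 fix.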
| Name | Vendor | Start Version | End Version |
|---|---|---|---|
| vLLM | vLLM | * | 0.11.1 (excluding) |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-cuda-rhel9:sha256:bddcf7ab6d576572b6d60822c313ffebcd9869e4fde93e32ac327821f93cf32b | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-rocm-rhel9:sha256:7856bdb7ae0d643a7b9362c164d4d4fe3c0c7186f5fff73a7ae9835b3df52e57 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/model-opt-cuda-rhel9:sha256:14e32e88f1b89f59ed34a6d712746b82a6a54c6ed4727784f18aeff853abbdc7 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-cuda-rhel9:sha256:f0ab1b678e9447eae4b6b2fe5c58531aa8524133db157f196726164e4dc20492 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-rocm-rhel9:sha256:e3b3efcdd86f60b90664a249d45918b2ac5f45bae5eed5399e310d63e878b287 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-tpu-rhel9:sha256:64796b48c68d31973a08e22c9530c39b1bc3ba9f376bbefa57643ef0fc857534 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-rocm-rhel9:sha256:c5efe40fa2a6e98d7d3d6676befff0dbbd87b2887769bb7e5856c5b0b0ada125 | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-cuda-rhel9:sha256:fa844e16d06e871f1a5dbc2fd5b3882d28112eee8d6bee601d94c96295c5e24f | * |
| Red Hat AI Inference Server 3.2 | RedHat | rhaiis/vllm-rocm-rhel9:sha256:53007894763e03f609c35c727cb738db3c2130b19fa0e1069c24240e0870fb7a | * |
| Red Hat OpenShift AI 2.25 | RedHat | rhoai/odh-vllm-cpu-rhel9:sha256:10ea60405654199ff5d09b50fc8b83f6d9bb9dda8057e18441dead800b8fa974 | * |
| Red Hat OpenShift AI 3.3 | RedHat | rhoai/odh-kserve-agent-rhel9:sha256:8a8b9aa606fadb92796dc1310c4604c669570da12735fe73f73b65386c439556 | * |
| Red Hat OpenShift AI 3.3 | RedHat | rhoai/odh-kserve-controller-rhel9:sha256:861d9c9ff292c8baf9f541a384ab323943c1d0aec29349dde7ac957d2dde7ee7 | * |
| Red Hat OpenShift AI 3.3 | RedHat | rhoai/odh-kserve-router-rhel9:sha256:25306815d697653646bda1a84d7efc28403e1c62b3bb8a144319854ad527771d | * |
| Red Hat OpenShift AI 3.3 | RedHat | rhoai/odh-kserve-storage-initializer-rhel9:sha256:b67292b8828b41361925def921ba2713b4d7eaa83b2088e0ad21e44ba52eb228 | * |
| Red Hat OpenShift AI 3.3 | RedHat | rhoai/odh-vllm-gaudi-rhel9:sha256:30dd95f0c900b81b80e435796d82dd556814dd6d46c6b43b7dd879bcfdb8420e | * |