vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to, but excluding, 0.9.0 contain a denial-of-service (ReDoS) vulnerability that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but applies to regex-based structured output instead of a JSON schema. Version 0.9.0 fixes the issue.
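For illustration, a request of roughly this shape against the OpenAI-compatible server could trigger the crash on an affected version. The endpoint, model name, and the `guided_regex` field are assumptions based on vLLM's structured-output interface, not a reproduction taken from this advisory; only test against servers you control.

```python
# Hypothetical reproduction sketch against a local vLLM OpenAI-compatible server.
# Assumes the server listens on localhost:8000 and accepts the structured-output
# regex via a "guided_regex" field; both are assumptions, not advisory details.
import requests

payload = {
    "model": "my-model",                # placeholder model name
    "prompt": "Give me a value:",
    "max_tokens": 16,
    "guided_regex": "(unclosed[group",  # invalid regex: unterminated group and character class
}

resp = requests.post("http://localhost:8000/v1/completions", json=payload, timeout=30)
print(resp.status_code, resp.text)
```

On a patched version (0.9.0 or later) such a request should be rejected with a client error rather than terminating the server process.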
The root cause is an uncaught exception: while processing the invalid regex, a function throws an exception that is never caught, which crashes the server.
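A minimal sketch of the defensive pattern the fix implies: compile the user-supplied regex up front and convert a compilation failure into a client-facing error instead of letting the exception propagate and take down the server. The function name and the use of Python's `re.error` here are illustrative assumptions, not the actual vLLM code path.

```python
import re


def validate_guided_regex(pattern: str) -> str:
    """Reject an invalid user-supplied regex with a client-facing error.

    Illustrative only: the structured-output backend compiles the pattern
    itself; the point is to catch the compilation error early rather than
    letting an uncaught exception crash the server.
    """
    try:
        re.compile(pattern)
    except re.error as exc:
        # Surface a validation error (e.g. an HTTP 400) instead of an uncaught exception.
        raise ValueError(f"invalid guided_regex: {exc}") from exc
    return pattern
```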
| Name | Vendor | Start Version | End Version |
|---|---|---|---|
| vllm | vLLM | 0.8.0 (including) | 0.9.0 (excluding) |