CVE Vulnerabilities

CVE-2024-34359

Improper Neutralization of Equivalent Special Elements

Published: May 14, 2024 | Modified: May 14, 2024
CVSS 3.x: N/A (Source: NVD)

llama-cpp-python provides the Python bindings for llama.cpp. It relies on the Llama class in llama.py to load llama.cpp models in the .gguf format. The class's __init__ constructor takes several parameters that configure how the model is loaded and run. In addition to NUMA, LoRA settings, tokenizer loading, and hardware settings, __init__ also reads the chat template from the targeted .gguf file's metadata and passes it to llama_chat_format.Jinja2ChatFormatter.to_chat_handler() to construct self.chat_handler for the model. However, Jinja2ChatFormatter parses the chat template from the metadata with a sandbox-less jinja2.Environment, which is later rendered in __call__ to build the interaction prompt. This allows Jinja2 server-side template injection, which leads to remote code execution via a carefully constructed payload.
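The sketch below is a minimal illustration of the pattern described above, not the actual llama-cpp-python source: an attacker-controlled chat template (standing in for the chat_template string read from a malicious .gguf file's metadata) is rendered with a sandbox-less jinja2.Environment, so template expressions can walk from a plain string literal into arbitrary Python objects. The template string and variable names are hypothetical.

```python
from jinja2 import Environment

# Looks like an ordinary chat template, but the trailing expression escapes
# into the interpreter's object graph. A real exploit would chain this into
# os/subprocess calls for code execution; this harmless variant only counts
# object.__subclasses__() to show the access is reachable.
malicious_chat_template = (
    "{% for m in messages %}{{ m['content'] }}\n{% endfor %}"
    "{{ ''.__class__.__mro__[1].__subclasses__() | length }} classes reachable"
)

# Sandbox-less environment, as described for Jinja2ChatFormatter: the template
# author gets unrestricted attribute access during rendering.
env = Environment()
prompt = env.from_string(malicious_chat_template).render(
    messages=[{"role": "user", "content": "hello"}]
)
print(prompt)  # the payload ran and exposed interpreter internals
```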

Weakness

The product correctly neutralizes certain special elements, but it improperly neutralizes equivalent special elements.

Potential Mitigations
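The root cause described above is rendering untrusted template metadata without a sandbox. A hedged mitigation sketch, assuming the same Jinja2 dependency: render model-supplied chat templates inside Jinja2's sandbox (jinja2.sandbox.ImmutableSandboxedEnvironment), which refuses access to unsafe attributes such as dunders, and upgrade to a patched llama-cpp-python release. The template string here is hypothetical.

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment, SecurityError

# Same style of payload as before, now rendered in a sandboxed environment.
untrusted_template = "{{ ''.__class__.__mro__[1].__subclasses__() }}"

env = ImmutableSandboxedEnvironment()
try:
    env.from_string(untrusted_template).render()
except SecurityError as exc:
    # The sandbox rejects access to unsafe attributes such as __class__,
    # so the injection attempt fails at render time instead of executing.
    print(f"template rejected: {exc}")
```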

References