ParlAI is a framework for training and evaluating AI models on a variety of openly available dialogue datasets. Affected versions of the package are vulnerable to a YAML deserialization attack caused by unsafe loading, which leads to arbitrary code execution. The bug is patched by avoiding the unsafe loader; users should update to a version above v1.1.0. If upgrading is not possible, users can change the Loader used to SafeLoader as a workaround. See commit 507d066ef432ea27d3e201da08009872a2f37725 for details.
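The unsafe pattern and the workaround look roughly like the following minimal sketch using PyYAML (the `load_opt_*` helpers and file path handling are illustrative, not ParlAI's actual code):

```python
import yaml

def load_opt_unsafe(path):
    # Vulnerable pattern: the full/unsafe Loader will construct arbitrary
    # Python objects described by tags in the YAML document.
    with open(path) as f:
        return yaml.load(f, Loader=yaml.Loader)

def load_opt_safe(path):
    # Workaround/fix: SafeLoader only builds plain YAML types
    # (str, int, float, list, dict, ...), so object-construction tags
    # are rejected instead of executed.
    with open(path) as f:
        return yaml.load(f, Loader=yaml.SafeLoader)  # equivalently: yaml.safe_load(f)
```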
The product deserializes untrusted data without sufficiently verifying that the resulting data will be valid.
Name | Vendor | Start Version | End Version |
---|---|---|---|
Parlai | * | * | 1.1.0 (excluding) |
It is often convenient to serialize objects for communication or to save them for later use. However, deserialized data or code can often be modified without using the provided accessor functions if it does not use cryptography to protect itself. Furthermore, any cryptography would still be client-side security, which is a dangerous security assumption. Untrusted data cannot be assumed to be well-formed. When developers place no restrictions on "gadget chains," or series of instances and method invocations that can self-execute during the deserialization process (i.e., before the object is returned to the caller), it is sometimes possible for attackers to leverage them to perform unauthorized actions, such as spawning a shell.
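In PyYAML, a single attacker-controlled document is enough to trigger such self-execution during loading. The sketch below (the payload and the `id` command are purely illustrative) shows a construction tag that invokes a callable while the document is being deserialized, and how SafeLoader rejects it:

```python
import yaml

# Attacker-controlled document: the !!python/object/apply tag tells an
# unsafe loader to call an arbitrary callable during deserialization,
# before any value is returned to the caller.
payload = "!!python/object/apply:os.system ['id']"

# Unsafe loader: runs `id` in a shell purely as a side effect of loading.
yaml.load(payload, Loader=yaml.UnsafeLoader)

# SafeLoader refuses the construction tag instead of executing it.
try:
    yaml.safe_load(payload)
except yaml.constructor.ConstructorError as e:
    print("rejected:", e.problem)
```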