Meta’s large language model (LLM) framework, Llama, suffered from a common open-source coding oversight that could allow arbitrary code execution on servers, leading to resource theft, data breaches, and AI model takeover.
The flaw, tracked as CVE-2024-50050, is a critical deserialization bug belonging to a class of vulnerabilities that arise from the unsafe use of the open-source messaging library pyzmq in AI frameworks.
“The Oligo research team has discovered a critical vulnerability in meta-llama, an open-source framework from Meta for building and deploying Gen AI applications,” said Oligo’s security researchers in a blog post. “The vulnerability, CVE-2024-50050 enables attackers to execute arbitrary code on the llama-stack inference server from the network.”
Following Oligo’s report, Meta’s security team promptly patched Llama Stack by switching the serialization format for socket communication from pickle to JSON.
A typical AI-framework flaw
According to Oligo’s research, a number of open-source AI frameworks leverage the open-source messaging library pyzmq in an “unsafe way,” opening the door to remote code execution.
The problem stems from Llama Stack’s use of pickle, a Python module for serializing and deserializing Python objects, in its inference API implementation, a feature that lets organizations bring their own ML models into the application pipeline.
Pickle, which automatically reconstructs Python objects during deserialization, is inherently capable of executing arbitrary code when it loads crafted, untrusted data sent by attackers, particularly through an exposed pyzmq (a Python binding for ZeroMQ) implementation.
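The danger is easy to demonstrate with the standard library alone. In this sketch, a benign side effect stands in for attacker-controlled code; the class and function names are illustrative, not taken from the exploit:

```python
import pickle

executed = []

def side_effect():
    # Stand-in for attacker-controlled code (a real exploit would invoke
    # something like os.system here).
    executed.append("code ran during deserialization")
    return "payload"

class Malicious:
    def __reduce__(self):
        # Instructs pickle to call side_effect() when these bytes are loaded.
        return (side_effect, ())

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # side_effect() runs here, before any type check
print(executed)
```

Because the callable fires during `pickle.loads` itself, no amount of validation on the resulting object can undo the execution, which is why Python's own documentation warns against unpickling untrusted data.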
“In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket,” the researchers said, adding that unpickling these objects could allow attackers to “achieve arbitrary code execution (RCE) on the host machine.”
Source : https://www.csoonline.com/article/3810362/a-pickle-in-metas-llm-code-could-allow-rce-attacks.html