Synopsis
LLaVA-NeXT is an open-source project focused on large multimodal models.
The project suffers from a sensitive information disclosure: a hardcoded HuggingFace token with privileged permissions is exposed in the repository. Using this token, a remote, unauthenticated attacker could conduct supply-chain attacks and compromise the affected HuggingFace organizations to perform malicious operations.
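As an illustration of how such an exposure can be detected, the sketch below scans source text for strings matching the common `hf_`-prefixed user access token format. The regular expression and the assumed suffix length are heuristics based on commonly observed token formats, not an official specification, and the sample token is synthetic.

```python
import re

# Heuristic pattern: HuggingFace user access tokens are prefixed with "hf_".
# The ">= 30 alphanumeric characters" suffix length is an assumption, not an
# official specification.
HF_TOKEN_PATTERN = re.compile(r"hf_[A-Za-z0-9]{30,}")

def find_hardcoded_tokens(source: str) -> list[str]:
    """Return candidate HuggingFace tokens embedded in source text."""
    return HF_TOKEN_PATTERN.findall(source)

# Synthetic example resembling a hardcoded credential in source code.
sample = 'api = HfApi(token="hf_' + "A" * 34 + '")'
print(find_hardcoded_tokens(sample))
```

A scan like this over a repository's history (not just its HEAD) is what typically surfaces credentials of this kind, since a token removed in a later commit remains recoverable from earlier ones.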
Solution
No official remediation is available at the time of writing. Until the issue is resolved, any models or artifacts belonging to the HuggingFace organizations llms-lab, LongVa and Evo-LMM should be treated as untrusted.
A pull request has been submitted against the project's GitHub repository to notify the maintainers of this vulnerability, given the lack of response to previous contact attempts.
Additional References
https://github.com/LLaVA-VL/LLaVA-NeXT
https://github.com/LLaVA-VL/LLaVA-NeXT/pull/462
Disclosure Timeline
All information within TRA advisories is provided “as is”, without warranty of any kind, including the implied warranties of merchantability and fitness for a particular purpose, and with no guarantee of completeness, accuracy, or timeliness. Individuals and organizations are responsible for assessing the impact of any actual or potential security vulnerability.
Tenable takes product security very seriously. If you believe you have found a vulnerability in one of our products, we ask that you please work with us to quickly resolve it in order to protect customers. Tenable believes in responding quickly to such reports, maintaining communication with researchers, and providing a solution in short order.
For more details on submitting vulnerability information, please see our Vulnerability Reporting Guidelines page.
If you have questions or corrections about this advisory, please email [email protected]