2024-05-25 21:48:12 +02:00
|
|
|
### [CVE-2023-29374](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-29374)
|
|
|
|

|
|
|
|

|
|
|
|

|
|
|
|
|
|
|
|
### Description
|
|
|
|
|
|
|
|
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
|
|
|
|
|
|
|
|
### POC
|
|
|
|
|
|
|
|
#### Reference
|
|
|
|
- https://github.com/hwchase17/langchain/issues/1026
|
|
|
|
|
|
|
|
#### Github
|
|
|
|
- https://github.com/cckuailong/awesome-gpt-security
|
|
|
|
- https://github.com/corca-ai/awesome-llm-security
|
2024-08-10 19:04:30 +00:00
|
|
|
- https://github.com/invariantlabs-ai/invariant
|
2024-05-28 08:49:17 +00:00
|
|
|
- https://github.com/zgimszhd61/llm-security-quickstart
|
2024-05-25 21:48:12 +02:00
|
|
|
|