Live data from GitHub and PyPI, updated daily.
Data last fetched: 2026-05-15
10 active CVEs reported via OSV.dev
vLLM affected by RCE via auto_map dynamic module loading during model initialization
vLLM is vulnerable to Server-Side Request Forgery (SSRF) through `MediaConnector` class
vLLM: Unauthenticated OOM Denial of Service via Unbounded `n` Parameter in OpenAI API Server
Potential Timing Side-Channel Vulnerability in vLLM's Chunk-Based Prefix Caching
vLLM has RCE In Video Processing
vLLM Deserialization of Untrusted Data vulnerability
vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs`
vLLM: Resource-Exhaustion (DoS) through Malicious Jinja Template in OpenAI-Compatible Server
vLLM DoS: Remotely kill vLLM over HTTP with invalid JSON schema
vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out
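The advisory list above is sourced from OSV.dev, whose public v1 API can be queried by package name and ecosystem. A minimal sketch of such a lookup (the endpoint and payload shape follow the documented OSV `/v1/query` API; the function names here are illustrative):

```python
# Sketch: query OSV.dev for known vulnerabilities in the vLLM PyPI package.
import json
import urllib.request

def build_osv_query(package: str, ecosystem: str = "PyPI") -> dict:
    # The OSV /v1/query endpoint accepts a package name plus its ecosystem.
    return {"package": {"name": package, "ecosystem": ecosystem}}

def fetch_advisories(package: str) -> list:
    payload = json.dumps(build_osv_query(package)).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response carries a "vulns" array of advisory objects.
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    for vuln in fetch_advisories("vllm"):
        print(vuln["id"], "-", vuln.get("summary", ""))
```

Each returned advisory object includes an `id` (e.g. a GHSA or CVE identifier) and a `summary`, which is the text shown in the list above.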
Get SLA-backed support, security patches, and direct access to senior engineers for vLLM, without relying on volunteer maintainers.