vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
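As a quick illustration of what that means in practice, here is a minimal sketch of vLLM's offline batched-inference API (the model name, prompts, and sampling settings are illustrative, not taken from this page):

    from vllm import LLM, SamplingParams

    # Example prompts and sampling settings (illustrative values).
    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Load a model and run batched generation; vLLM manages batching
    # and KV-cache memory internally for high throughput.
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)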

Financial Contributions
Become a financial contributor.

Top financial contributors
Individuals
$100,000 USD since Jun 2024
$100,000 USD since Aug 2024
$10,000 USD since May 2024
$7,500 USD since Dec 2024
$520 USD since Mar 2025
$100 USD since May 2024
$100 USD since Jun 2024
$100 USD since Jun 2024
$30 USD since Feb 2025
$25 USD since Jul 2024
$20 USD since May 2024
$20 USD since Jul 2024
$20 USD since Sep 2024
$20 USD since Nov 2024
$20 USD since Jan 2025
Organizations
$100,000 USD since Jun 2024
$1,266.15 USD since Jul 2024
$200 USD since Aug 2024
vLLM is all of us
Our contributors (24)
Thank you for supporting vLLM.
Simon Mo
Woosuk Kwon
Zhuohan Li
Sequoia: $100,000 USD
Guest: $100,000 USD
SKYWORK AI: $100,000 USD
Dropbox Inc.: $10,000 USD
Kindroid: $7,500 USD
GitHub Sponsors: $1,266 USD
Aman Bhargava: $520 USD
Yotta Labs: $200 USD
Rymon Yu: $100 USD
Marut Pandya: $100 USD
Tianle Cai: $100 USD
Hui Liu: $30 USD
Calvin Zhou: $25 USD
Valerian: $20 USD
Ruiqi: $20 USD
Ce Gao: $5 USD
congwang: $1 USD

Budget
Transparent and open finances.
Total disbursed: $277,421.82 USD
Total received: $287,631.16 USD
Current balance: $10,209.34 USD
Estimated annual budget: $320,187.15 USD
