vLLM is a fast and easy-to-use library for LLM inference and serving. It provides high-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more.
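
A minimal offline-inference sketch is shown below; the model name, prompts, and sampling values are illustrative assumptions, and exact options can vary between vLLM releases:

    from vllm import LLM, SamplingParams

    # Load a HuggingFace model by name (illustrative choice).
    llm = LLM(model="facebook/opt-125m")

    # Parallel sampling: request two independent completions per prompt.
    params = SamplingParams(n=2, temperature=0.8, top_p=0.95, max_tokens=64)

    prompts = ["The future of LLM serving is", "vLLM is"]
    outputs = llm.generate(prompts, params)

    for request_output in outputs:
        for completion in request_output.outputs:
            print(request_output.prompt, "->", completion.text.strip())

Beam search and other decoding strategies are selected through similar sampling parameters; consult the project documentation for the options supported by your version.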

Features

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Optimized CUDA kernels
  • Seamless integration with popular HuggingFace models
  • Tensor parallelism support for distributed inference (see the sketch after this list)
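
The sketch below illustrates how the same API is typically pointed at multiple GPUs for distributed inference. The model name and the tensor_parallel_size value are assumptions chosen for illustration and depend on the hardware available:

    from vllm import LLM, SamplingParams

    # Shard the model across 2 GPUs via tensor parallelism (illustrative value;
    # it must not exceed the number of visible GPUs).
    llm = LLM(model="meta-llama/Llama-2-13b-hf", tensor_parallel_size=2)

    outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
    print(outputs[0].outputs[0].text)

When the model is exposed through vLLM's HTTP server instead, incoming requests are continuously batched onto the GPU rather than processed one at a time, which is where much of the serving-throughput gain comes from.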

License

Apache License 2.0

Additional Project Details

Programming Language

Python

Related Categories

Python Large Language Models (LLM), Python LLM Inference Tool

Registered

2023-08-21