Cosmos-RL is a flexible and scalable Reinforcement Learning framework
A simple yet powerful agent framework that works with models
Extension of Google Research’s PaperBanana
Language Model Reinforcement Learning Environments frameworks
A Unified Framework for Text-to-3D and Image-to-3D Generation
PPTAgent: Generating and Evaluating Presentations
Finding the Scaling Law of Agents. A multi-agent framework
Flexible and powerful framework for managing multiple AI agents
Neural Network Compression Framework for enhanced OpenVINO inference
Framework for orchestrating role-playing, autonomous AI agents
Algorithms for outlier, adversarial and drift detection
Official Repo for ICML 2024 paper
Streamlines and simplifies prompt design for both developers and non-technical users
Build multimodal language agents for fast prototyping and production
Federated Learning (FL) experiment simulation in Python
Framework for validating and controlling LLM outputs in AI apps
AI agents running research on single-GPU nanochat training
Bindu: Turn any AI agent into a living microservice
The first open-source agentic AI physicist
Deploy your agentic workflows to production
Build effective agents using Model Context Protocol
The no-nonsense RAG chunking library
A text-to-speech, speech-to-text and speech-to-speech library
Composable building blocks to build Llama Apps
Framework for building AI agents that automate complex web tasks