Overview
nanobot is an ultra-lightweight personal AI assistant that delivers core agent functionality in roughly 3,400 lines of code (3,428 at last count), about 99% smaller than comparable solutions such as Clawdbot. Built for researchers, developers, and anyone who needs a fast, modular AI assistant, nanobot combines clean architecture with powerful capabilities, including 24/7 market analysis, software development assistance, routine automation, and personal knowledge management.
As an open-source project under MIT license, nanobot gives you full control over your AI assistant. It integrates seamlessly with popular chat platforms like Telegram, Discord, WhatsApp, and Feishu, while supporting both cloud-based LLMs (via OpenRouter, Anthropic, OpenAI, DeepSeek, Groq, Gemini) and local models through vLLM. Whether you're prototyping research ideas or building a production-ready assistant, nanobot's minimal footprint means faster startup times, lower resource usage, and easier debugging.
The project is actively maintained by HKUDS and has gained significant traction with over 11,000 stars on GitHub, making it a proven choice for developers who value simplicity without sacrificing functionality.
Key Features
Ultra-Lightweight Architecture — Core agent logic in roughly 3,400 lines of code, making it about 99% smaller than alternatives and dramatically easier to understand, modify, and extend for research or production use.
Multi-Platform Chat Integration — Connect nanobot to Telegram, Discord, WhatsApp, or Feishu with simple configuration, enabling you to interact with your AI assistant from any device, anywhere.
Local & Cloud LLM Support — Run your assistant on cloud providers (OpenRouter, Anthropic, OpenAI, DeepSeek, Groq, Gemini) or deploy local models using vLLM for complete privacy and cost control.
Built-In Task Automation — Schedule recurring tasks using cron syntax or interval-based scheduling, automate daily routines, and let nanobot proactively manage your workflow without manual intervention.
Research-Ready Codebase — Clean, readable code with minimal dependencies makes it ideal for academic research, rapid prototyping, and AI code development where transparency matters.
Docker & One-Click Deployment — Deploy nanobot in minutes using Docker or install directly from PyPI with a single command, with all configurations managed through a simple JSON file.
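The JSON configuration mentioned above can be sketched as follows. This is an illustrative shape only: every field name except tools.restrictToWorkspace (which appears in the FAQ's security answer) is an assumption, not nanobot's documented schema, so check the project's docs for the real keys.

```python
import json

# Illustrative config shape; section names "provider" and "channels"
# are assumptions, not nanobot's documented schema.
config = {
    "provider": {
        "name": "openrouter",          # or anthropic, openai, deepseek, groq, gemini
        "apiKey": "YOUR_API_KEY",
    },
    "channels": {
        "telegram": {"botToken": "YOUR_BOT_TOKEN"},
    },
    # Documented option (see the security FAQ): sandbox file operations.
    "tools": {"restrictToWorkspace": True},
}

# Write the config file nanobot would read.
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

Keeping all settings in one JSON file is what makes the Docker and PyPI install paths interchangeable: the same file can be mounted into a container or placed next to a local install.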
Pricing & Plans
nanobot is completely free and open-source under the MIT license. There are no subscription fees, usage limits, or premium tiers.
What You Need:
LLM API Keys — You'll need to bring your own API keys from providers like:
- OpenRouter (recommended for access to multiple models)
- Anthropic (Claude)
- OpenAI (GPT)
- DeepSeek
- Groq (includes free Whisper voice transcription)
- Gemini
- Or run local models with vLLM (completely free after setup)
Optional Services:
- Brave Search API (for web search capabilities) — free tier available
- Chat platform accounts (Telegram, Discord, WhatsApp, Feishu) — all free
Cost Estimate:
Your actual costs depend entirely on your LLM provider and usage. For example, using OpenRouter with Claude models might cost $0.01-0.10 per conversation depending on length. Local models via vLLM eliminate third-party API fees, but require self-hosted compute resources (local GPU or cloud instances with associated hardware/electricity costs).
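As a back-of-the-envelope illustration, per-conversation cost is just token counts times the provider's per-token price. The rates below are made up for the example; check your provider's current pricing.

```python
# Hypothetical USD prices per million tokens (assumed, not real quotes).
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}

def conversation_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one exchange: tokens in each direction times the rate."""
    return (input_tokens / 1e6) * PRICE_PER_MTOK["input"] \
         + (output_tokens / 1e6) * PRICE_PER_MTOK["output"]

# A short exchange: 2,000 tokens of context in, 500 tokens out.
cost = conversation_cost(2_000, 500)  # ≈ $0.0135 under these assumed rates
```

Long conversations grow the input side quickly, since prior turns are resent as context, which is why per-conversation cost varies so much with length.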
Pros & Cons
Pros:
- Extremely lightweight codebase (~3,400 lines) makes debugging and customization straightforward
- Open-source with MIT license gives complete freedom to modify and deploy
- Fast startup and low resource usage compared to heavyweight alternatives
- Active development with frequent updates (recent releases in Feb 2026)
- Supports both cloud and local LLM deployments for flexibility
- Multi-platform chat integrations with official setup guides for each platform
Cons:
- Requires technical knowledge to set up API keys and configure providers
- No built-in GUI — primarily command-line and chat-based interaction
- Documentation assumes familiarity with Python development workflows
- Feature set is intentionally minimal compared to enterprise solutions
Best For
- Researchers and academics who need a transparent, modifiable AI agent codebase for experiments and publications
- Developers prototyping AI applications who want to understand exactly how agent loops, tool execution, and memory management work under the hood
- Privacy-conscious users who prefer running local LLMs and controlling all data flows
- Technical users managing multiple chat platforms who want a unified AI assistant accessible from Telegram, Discord, WhatsApp, or Feishu
- Teams building custom AI workflows who need a lightweight foundation to extend rather than a bloated framework to work around
FAQ
Is nanobot really free?
Yes, nanobot itself is completely free and open-source under MIT license. However, you'll need to provide your own LLM API keys, which may have associated costs depending on your provider. You can also run local models via vLLM, which eliminates API fees but requires self-hosted compute resources.
What programming skills do I need to use nanobot?
You should be comfortable with command-line interfaces and basic Python package management. Setting up API keys, editing JSON configuration files, and running terminal commands are essential. No coding is required for basic usage, but extending functionality requires Python knowledge.
Can I use nanobot without connecting to chat apps?
Yes, you can interact with nanobot directly from the command line with nanobot agent -m "your message", or enter interactive chat mode by running nanobot agent without arguments. Chat platform integrations are optional.
How does nanobot compare to ChatGPT or Claude?
nanobot is a framework for building your own AI assistant, not a hosted service. You can configure it to use ChatGPT, Claude, or other LLMs as its "brain," but you control the deployment, data flow, integrations, and customizations. It's more comparable to frameworks like LangChain than to consumer AI chatbots.
What chat platforms does nanobot support?
nanobot officially supports Telegram (easiest setup), Discord (requires bot token and intents), WhatsApp (requires QR code scanning and Node.js), and Feishu (uses WebSocket long connection). Each platform has detailed setup instructions in the documentation.
Can I run nanobot entirely offline?
Yes, by using vLLM to run local language models. You'll still need internet access for initial setup and dependency installation, but once configured with a local model, nanobot can operate without external API calls.
How do I schedule recurring tasks?
Use the built-in cron functionality: nanobot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *" for cron syntax, or nanobot cron add --name "hourly" --message "Check status" --every 3600 for interval-based scheduling.
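The difference between the two modes is when the next run fires: cron syntax matches wall-clock fields ("0 9 * * *" means 9:00 every day), while --every N simply fires every N seconds. A minimal sketch of the interval timing math, purely illustrative and not nanobot's scheduler code:

```python
import datetime as dt

def next_runs(start: dt.datetime, every_seconds: int, count: int) -> list:
    """First `count` fire times for an interval schedule like --every 3600."""
    return [start + dt.timedelta(seconds=every_seconds * i)
            for i in range(1, count + 1)]

# --every 3600 starting at 09:00 fires at 10:00, 11:00, 12:00, ...
start = dt.datetime(2026, 1, 1, 9, 0)
runs = next_runs(start, 3600, 3)
```

Interval schedules drift with their start time, whereas cron schedules stay anchored to the clock, which is why "every morning at 9" belongs in cron syntax rather than --every 86400.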
Is my data secure when using nanobot?
Security depends on your deployment choices. If you run local LLMs, all data stays on your machine. If you use cloud providers, data is sent to those APIs subject to their privacy policies. For production deployments, enable workspace restrictions by setting "tools": { "restrictToWorkspace": true } in your configuration to sandbox agent file operations.
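Conceptually, a workspace restriction like restrictToWorkspace resolves every requested path and rejects anything that escapes the workspace root, which blocks traversal tricks like "../". A minimal sketch of that idea (not nanobot's actual implementation; the workspace path is hypothetical):

```python
from pathlib import Path

WORKSPACE = Path("/srv/nanobot/workspace")  # hypothetical workspace root

def is_inside_workspace(requested: str) -> bool:
    """Resolve the requested path and require it to stay under the root.

    .resolve() collapses any ".." components, so traversal attempts
    end up outside WORKSPACE and fail the containment check.
    """
    resolved = (WORKSPACE / requested).resolve()
    return resolved.is_relative_to(WORKSPACE.resolve())

assert is_inside_workspace("notes/todo.txt")        # normal file: allowed
assert not is_inside_workspace("../../etc/passwd")  # traversal: rejected
```

Whatever sandboxing the tool applies, the same principle holds: validate paths after resolution, never on the raw string the model produced.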