# Full-Stack Chat-Bot Framework

Self-hosted, multi-LLM chatbot with a live Admin UI, a Telegram adapter, and Dockerized deployment.
## TL;DR

An extensible, self-hosted chatbot that streams responses, supports multiple LLM providers (OpenAI, Gemini, Anthropic), runs on Docker with Redis for session storage, and includes a secure Admin UI for managing personas, configuration, and rate limits, with no code edits required.

## The Story

## Key Features
- Live Admin UI: edit personas, LLM provider/model, API keys, rate limits, and auth
- Multi-LLM support: OpenAI, Gemini, Anthropic (plug-in client pattern)
- Telegram adapter out of the box; stubs and base classes for adding more channels
- Real-time streaming replies with token-aware conversation trimming
- Session persistence via Redis, plus basic per-user rate limiting
- Secure FastAPI admin endpoints with basic auth and locked-down docs
- Config persistence to `config_overrides.json` and `prompt_profiles.json`
- Dockerized deployment via `docker-compose.yml` (bot + Redis + volumes)
- Health-check endpoint (`/healthz`) and structured JSON logging
- Ready for teams: Pytest test suite, clear project structure, env-driven config
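A `docker-compose.yml` for this kind of setup (bot + Redis + volumes) might look like the sketch below. Service names, ports, and mounted paths are assumptions for illustration, not the repository's actual file:

```yaml
services:
  bot:
    build: .
    env_file: .env                 # tokens and API keys stay out of the image
    depends_on:
      - redis
    ports:
      - "8000:8000"                # FastAPI admin endpoints + /healthz
    volumes:
      # Persist admin-edited config across container rebuilds
      - ./config_overrides.json:/app/config_overrides.json
      - ./prompt_profiles.json:/app/prompt_profiles.json
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data           # keep sessions across restarts
volumes:
  redis-data:
```

Mounting the two JSON files as volumes is what lets Admin UI changes survive a `docker compose up --build`.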
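The plug-in client pattern behind the multi-LLM support can be sketched roughly as follows. The class and registry names here are illustrative, not the project's actual API; the idea is that adding a provider means registering one new class, with no changes to core code:

```python
from abc import ABC, abstractmethod

class BaseLLMClient(ABC):
    """Common interface every provider client implements (hypothetical name)."""

    @abstractmethod
    def complete(self, messages: list[dict]) -> str:
        """Return the assistant reply for a list of chat messages."""

# Registry mapping a provider name to its client class.
CLIENTS: dict[str, type[BaseLLMClient]] = {}

def register(name: str):
    """Class decorator that adds a client to the registry."""
    def wrap(cls: type[BaseLLMClient]) -> type[BaseLLMClient]:
        CLIENTS[name] = cls
        return cls
    return wrap

@register("openai")
class OpenAIClient(BaseLLMClient):
    def complete(self, messages: list[dict]) -> str:
        # A real client would call the OpenAI API here; stubbed for the sketch.
        return "stubbed OpenAI reply"

def get_client(provider: str) -> BaseLLMClient:
    """Look up and instantiate the client for the configured provider."""
    return CLIENTS[provider]()
```

Gemini and Anthropic clients would follow the same shape, each behind its own `@register("…")` decorator.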
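Token-aware conversation trimming typically means dropping the oldest messages until the history fits the model's context budget, while keeping the system prompt. A minimal sketch, where a whitespace word count stands in for the provider's real tokenizer (e.g. tiktoken for OpenAI models):

```python
def trim_history(messages, max_tokens,
                 count_tokens=lambda m: len(m["content"].split())):
    """Drop oldest non-system messages until the history fits the budget.

    `count_tokens` is a stand-in; a real bot would use the provider's
    actual tokenizer. System messages are always kept.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(count_tokens(m) for m in system)

    kept, total = [], 0
    for m in reversed(rest):            # walk newest-first
        cost = count_tokens(m)
        if total + cost > budget:
            break                        # everything older is dropped
        kept.append(m)
        total += cost
    return system + list(reversed(kept))
```

This keeps the most recent exchanges intact, which matters for streaming bots where the newest turn is what the model must respond to.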
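Basic per-user rate limiting on top of Redis is commonly done as a fixed window using the `INCR` and `EXPIRE` commands. The sketch below is an assumption about the approach, not the project's actual code; it uses a tiny in-memory stand-in whose methods mirror those two Redis commands, so the same `allow_message` function would work against a real redis-py client:

```python
import time

class InMemoryStore:
    """Stand-in exposing the two Redis commands the limiter needs."""
    def __init__(self):
        self.counts = {}
        self.expiry = {}

    def incr(self, key):
        now = time.monotonic()
        if key in self.expiry and now >= self.expiry[key]:
            self.counts.pop(key, None)   # window elapsed: reset the counter
            self.expiry.pop(key, None)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

    def expire(self, key, seconds):
        self.expiry[key] = time.monotonic() + seconds

def allow_message(store, user_id, limit=5, window=60):
    """Fixed-window limit: at most `limit` messages per `window` seconds."""
    key = f"rate:{user_id}"
    count = store.incr(key)
    if count == 1:                       # first hit in this window starts the clock
        store.expire(key, window)
    return count <= limit
```

Fixed windows are the simplest scheme; a sliding-window or token-bucket variant smooths out bursts at the window boundary if that matters for the deployment.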