A starter template for building Telegram bots with Honcho, deployed to fly.io.
The template uses OpenRouter for LLM inference and supports any model available there.
- AI-powered conversations using OpenAI-compatible APIs
- Memory management with Honcho for persistent chat sessions
- Group chat support with mention detection
- Command handling, including `/dialectic` for conversation history queries
- Message splitting for long responses
- Docker support for easy deployment
First, you need to create a Telegram bot and get your bot token:
- Open Telegram and search for @BotFather
- Start a chat with BotFather and send `/newbot`
- Follow the prompts to choose a name and username for your bot
- BotFather will give you a token that looks like `123456789:ABCdefGhIJKlmNoPQRsTUVwxyZ`
- Save this token - you'll need it for the `BOT_TOKEN` environment variable
- Sign up at OpenRouter
- Go to your API Keys page
- Create a new API key
- Save this key - you'll need it for the `MODEL_API_KEY` environment variable
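OpenRouter exposes an OpenAI-compatible API, so the request the bot sends looks roughly like the sketch below (built here with the standard library only; the model name and helper are illustrative, and the real bot may use an OpenAI client library instead):

```python
# Sketch: building a chat-completion request against OpenRouter's
# OpenAI-compatible endpoint. Helper name and model are illustrative.
import json
import urllib.request

def build_request(api_key: str, model: str, user_text: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request with `urllib.request.urlopen` (or any HTTP client) returns a standard chat-completion JSON response.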
Install the project dependencies using uv:
uv sync
The repo contains a `.env.template` file that shows all the default environment
variables used by the Telegram bot. Make a copy of this template and fill out the
`.env` with your own values.

cp .env.template .env

Edit the `.env` file with your values:
# Your Telegram bot token from BotFather
BOT_TOKEN=<your-token>
# AI model to use (see OpenRouter for available models)
MODEL_NAME=<your-model>
# Your OpenRouter API key
MODEL_API_KEY=<your-openrouter-api-key>
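At startup the bot needs all three of these variables; a minimal sketch of reading them with the standard library is below (the real bot may use python-dotenv to load `.env` first, and the function name here is hypothetical):

```python
# Sketch: read the required configuration from the environment,
# failing fast with a clear message if anything is missing.
import os

def load_config() -> dict:
    required = ["BOT_TOKEN", "MODEL_NAME", "MODEL_API_KEY"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```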
> **Caution:** Make sure you do not push your `.env` file to GitHub or any other version
control. These values should remain secret. By default, the included `.gitignore` file
should prevent this.
source .venv/bin/activate
python src/bot.py
The project offers Docker for packaging the bot code into a single runnable
image. The command below builds the Docker image and then runs the bot using a
local `.env` file.
docker build -t telegram-bot . && docker run --env-file .env telegram-bot
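For reference, a minimal Dockerfile for a uv-based project might look like the sketch below; the repo ships its own Dockerfile, which may differ (base image, file layout, and commands here are assumptions):

```dockerfile
# Sketch only - the repo's actual Dockerfile may differ.
FROM python:3.12-slim

WORKDIR /app

# Install uv, then the locked project dependencies
RUN pip install --no-cache-dir uv
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen

COPY src/ src/

CMD ["uv", "run", "python", "src/bot.py"]
```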
- Send any message directly to your bot for AI-powered conversations
- The bot will remember your conversation history
- Add your bot to a group chat
- The bot will only respond when:
  - You mention it: `@yourbotname Hello!`
  - You reply to one of its messages
- `/start` - Get a welcome message and usage instructions
- `/dialectic <query>` - Search through your conversation history with the bot
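In the real bot, python-telegram-bot's `CommandHandler` does the command dispatching; as a rough sketch of what taking apart a `/dialectic <query>` message looks like (the helper name is hypothetical):

```python
# Sketch: split a raw Telegram message like "/dialectic what did we say?"
# into a command name and its argument string. Illustrative only -
# python-telegram-bot handles this for you via CommandHandler.
def parse_command(text: str):
    if not text.startswith("/"):
        return None, text
    head, _, rest = text.partition(" ")
    # "/dialectic@yourbotname" is how commands arrive in group chats
    return head.lstrip("/").split("@")[0], rest.strip()
```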
From here you can edit the `src/bot.py` file to add whatever logic you want. The main areas to customize are:
- `llm()` function: Modify the AI prompt and behavior
- `validate_message()` function: Change when the bot responds to messages
- Command handlers: Add new commands or modify existing ones
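The group-chat response rules can be sketched as a pure function; the name and parameters below are illustrative, loosely mirroring what `validate_message()` might check rather than reproducing it:

```python
# Illustrative sketch of the response rules; not the actual
# validate_message() from src/bot.py.
def should_respond(text: str, bot_username: str,
                   is_group: bool, is_reply_to_bot: bool) -> bool:
    if not is_group:
        return True          # always answer direct messages
    if is_reply_to_bot:
        return True          # answer replies to the bot's own messages
    return f"@{bot_username}" in (text or "")  # answer explicit mentions
```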
Additional functionality can be added to the bot. Refer to the python-telegram-bot documentation for more features.
The project contains a generic `fly.toml` that will run a single process for the
Telegram bot.
To launch the bot for the first time, run `fly launch`.
Use `cat .env | fly secrets import` to add the environment variables to fly.
By default, `fly.toml` will automatically stop the machine if inactive. This
doesn't work well with a Telegram bot, so remove that line and change `min_machines_running` to `1`.
After launching, use `fly deploy` to update your deployment.
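After those edits, the relevant part of `fly.toml` might look like the fragment below; the exact section names depend on the file `fly launch` generates for you:

```toml
# Illustrative fragment - adjust to match your generated fly.toml.
[http_service]
  # auto_stop_machines line removed so the bot keeps polling Telegram
  min_machines_running = 1
```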
- Make sure you've added the bot to the group
- Ensure you're mentioning the bot with `@botname` or replying to its messages
- Check that the bot has permission to read messages in the group
- Verify your `BOT_TOKEN` is correct and the bot is active
- Check that your `MODEL_API_KEY` is valid and has sufficient credits
- Ensure the `MODEL_NAME` is available on OpenRouter
- The bot uses Honcho for conversation memory
- Each chat session is stored separately by chat ID
- Memory persists between bot restarts
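Conceptually, the keying scheme looks like the in-memory sketch below; in the real bot, Honcho stores these sessions server-side, which is why history survives restarts. The names here are illustrative and are not the Honcho API.

```python
# In-memory illustration of per-chat sessions keyed by Telegram chat ID.
# The real bot delegates this to Honcho, which persists across restarts.
from collections import defaultdict

sessions: dict[int, list[dict]] = defaultdict(list)

def remember(chat_id: int, role: str, content: str) -> None:
    sessions[chat_id].append({"role": role, "content": content})

def history(chat_id: int) -> list[dict]:
    return sessions[chat_id]
```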