BotAnya is a Telegram bot for immersive role-playing with support for multiple worlds, characters, scene generation, and translation. It uses local models via Ollama and remote APIs through OpenAI and GigaChat. It supports ChatML, custom JSON scenarios, history management, and logging.
- Multiple worlds and roles via JSON scenarios.
- Multi-character mode with dynamic character selection.
- Support for models through Ollama, OpenAI and GigaChat.
- ChatML and plain-text message formats.
- Commands for retry, edit, continue, and history control.
- Automatic translation (RU ↔ EN).
- Persistent history and JSONL logs.
- Atmospheric scene generation via `/scene`.
- Safe MarkdownV2 formatting for messages.
- Install dependencies: `pip install -r requirements.txt`
- Install and configure Ollama, then pull the required model, for example: `ollama pull llama3.2`
- Prepare directories:
  - `/scenarios/` — JSON scenario files (`*.json`)
  - `/secrets/` — contains `credentials.json`
  - `/history.json` — auto-generated conversation history
  - `/user_roles.json` — auto-generated user roles and settings
  - `/chat_logs/` — JSONL logs of interactions
- Create the configuration files based on the provided examples: `config.json` in the project root, and `secrets/credentials.json` from `credentials_example.json`.
- Store API keys: in `secrets/credentials.json`, add your Telegram and API keys.
- Run the bot: `python BotAnya.py`. You can also use `run_bot.bat` to start both Ollama and the bot automatically.
`config.json` defines the bot's behavior and service endpoints. It contains the following top-level keys:

- `default_service` (string): Key of the service used by default when starting the bot.
- `debug_mode` (boolean): If `true`, enables verbose debug output in logs and console.
- `credentials_path` (string): File path to the OAuth or API credentials JSON.
- `services` (object): A mapping of service keys to service configuration objects.

Each entry under `services` must include the following fields:
| Key | Type | Description |
|---|---|---|
| `name` | string | Human-readable identifier for the service. |
| `type` | string | Service type (`ollama` or `gigachat`). |
| `model` | string | Model identifier or name used by the service. |
| `url` | string | API endpoint for generating completions. |
| `auth_url` | string | OAuth token endpoint (required for GigaChat). |
| `scope` | string | OAuth scope for token requests (GigaChat). |
| `temperature` | number | Sampling temperature for token generation. |
| `top_p` | number | Nucleus sampling threshold (total probability mass). |
| `min_p` | number | Minimum probability filter for tokens (optional). |
| `num_predict` | integer | Maximum number of tokens to generate in a single request. |
| `max_tokens` | integer | Maximum number of context tokens allowed in the prompt. |
| `stop` | array | List of stop sequences that signal the model to stop generation. |
| `repeat_penalty` | number | Penalty factor applied to repeated tokens. |
| `frequency_penalty` | number | Penalty based on token frequency to reduce repetition. |
| `presence_penalty` | number | Penalty for new token presence to encourage topic variation. |
| `chatml` | boolean | Whether to format prompts using ChatML (`true`) or plain text (`false`). |
| `timeout` | integer | HTTP request timeout in seconds (optional; default may apply). |
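As an illustration of how these keys fit together, here is a minimal sketch of a `config.json` with a single local Ollama service. The model name, endpoint, and sampling values are examples only, not the project's canonical defaults:

```json
{
  "default_service": "ollama",
  "debug_mode": false,
  "credentials_path": "secrets/credentials.json",
  "services": {
    "ollama": {
      "name": "Local Ollama",
      "type": "ollama",
      "model": "llama3.2",
      "url": "http://localhost:11434/api/generate",
      "temperature": 0.8,
      "top_p": 0.9,
      "num_predict": 512,
      "stop": ["<|im_end|>"],
      "repeat_penalty": 1.1,
      "chatml": true,
      "timeout": 120
    }
  }
}
```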
| Command | Description |
|---|---|
| `/start` | Initialize or resume the dialogue. |
| `/scenario` | Select a world scenario. |
| `/role` | Select a character role. |
| `/scene` | Generate an atmospheric scene. |
| `/whoami` | Display current world, character, and service information. |
| `/service` | Switch the LLM service. |
| `/lang` | Toggle automatic translation (RU ↔ EN). |
| `/retry` | Regenerate the last bot response. |
| `/continue` | Continue the last response thread. |
| `/edit` | Edit your last message before sending to the model. |
| `/history` | View the conversation history. |
| `/reset` | Clear the history and restart the scenario. |
| `/help` | Show help information, including available roles. |
Example scenario file (`.json`):

```json
{
  "world": {
    "name": "Example World",
    "description": "A brief description of the setting.",
    "emoji": "🌍",
    "intro_scene": "Initial scene description.",
    "system_prompt": "System-level instructions for the scenario.",
    "user_emoji": "😺",
    "user_role": "Adventurer"
  },
  "characters": {
    "luna": {
      "name": "Luna",
      "emoji": "🌙",
      "description": "Shy cat-eared girl.",
      "prompt": "You are a shy neko girl..."
    }
  }
}
```
If the `chatml` key in `config.json` is set to `true`, the ChatML tags `<|im_start|>` and `<|im_end|>` are added to structure system, user, and assistant messages.
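For illustration only, here is a minimal sketch of the tag layout ChatML formatting produces. The function name and message structure are assumptions for the example, not BotAnya's actual prompt builder:

```python
def build_chatml_prompt(system_prompt: str, history: list[dict]) -> str:
    """Assemble a ChatML-style prompt (illustrative sketch)."""
    parts = [f"<|im_start|>system\n{system_prompt}<|im_end|>"]
    for msg in history:  # each msg assumed to look like {"role": "user", "content": "..."}
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)
```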
Enable translation by toggling `/lang`. When enabled, prompts are translated to English before being sent to the model and responses are translated back to Russian upon receipt.
BotAnya uses deep_translator under the hood and lets you choose among multiple translation engines without touching code.
In your `config.json`, set `"translation_service": "google"` (or one of the other keys below).
| Key | deep_translator class |
|---|---|
| `google` | `GoogleTranslator` |
| `deepl` | `DeeplTranslator` |
| `mymemory` | `MyMemoryTranslator` |
| `yandex` | `YandexTranslator` |
| `microsoft` | `MicrosoftTranslator` |
Some translation services expect different API key names in `secrets/credentials.json`.
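As a rough sketch of what this translation step looks like with deep_translator (using `GoogleTranslator` as an example; the engine class is chosen from the table above based on your config):

```python
from deep_translator import GoogleTranslator

# RU -> EN: translate the user's prompt before it is sent to the model
prompt_en = GoogleTranslator(source="ru", target="en").translate("Привет! Расскажи историю.")

# EN -> RU: translate the model's reply before it reaches the user
reply_ru = GoogleTranslator(source="en", target="ru").translate("Once upon a time...")
```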
BotAnya.py — Entry point for the bot
config.json — Configuration for services and settings
secrets/credentials.json — OAuth/API credentials for services
utils.py — Utility modules (Markdown escape, prompt builders)
config.py — Path and constant definitions
bot_state.py — State management and persistence
openai_client.py — OpenAI integration
gigachat_client.py — Sber GigaChat integration
ollama_client.py — Ollama integration
telegram_handlers.py — Command and message handlers
translate_utils.py — Automatic translation helpers
README.md — Project documentation
scenarios/ — JSON world and character files
history.json — Conversation history (generated)
user_roles.json — User roles and settings (generated)
chat_logs/ — JSONL files with interaction logs
This project is licensed under the MIT License – see the LICENSE file for details.