Build an Intelligent Telegram Chatbot in 30 Minutes (Code Included)
Reading time: 12 minutes | Level: Beginner | Last updated: February 2025
Have you always wanted to create your own AI assistant on Telegram? A bot that truly understands your questions, remembers previous conversations, and can perform concrete actions?
I've created dozens of Telegram bots over the years, and I'll show you exactly how to build an intelligent chatbot in less than 30 minutes. No marketing bullshit, no "contact us for more information" — just Python code you can copy-paste and adapt.
By the end of this tutorial, you'll have a working bot that:
- Responds intelligently using an LLM (OpenAI, Claude, or Gemini)
- Remembers conversation context
- Handles custom commands (/start, /reset, /stats)
- Can be extended with any functionality
Why Telegram? Because their API is simple, free, and well-documented. Unlike WhatsApp or Messenger, which require bureaucratic validation, you can launch a Telegram bot in 2 minutes flat.
What You'll Need
Before you begin, make sure you have:
- Python 3.9+ installed on your machine
- A Telegram account (obviously)
- An API key from an LLM provider:
  - OpenAI (GPT-4, GPT-3.5) — OpenAI Platform
  - Anthropic (Claude) — Anthropic Console
  - Google (Gemini) — Google AI Studio
- 10-15 minutes of focus
Budget: If you use GPT-3.5-turbo or Gemini Flash, expect less than €1 for 1,000 conversations. GPT-4 or Claude Sonnet cost more (~€0.10 per conversation) but offer superior quality.
💡 Pro Tip: Start with Gemini Flash (free up to 1,500 requests/day) for testing, then switch to a paid model once you've validated your use case.
Step 1: Create Your Telegram Bot (2 Minutes)
Open Telegram and search for @BotFather — the official Telegram bot for creating... bots.
Type /newbot and follow the instructions:
You: /newbot
BotFather: Alright, a new bot. How are we going to call it?
You: My AI Assistant
BotFather: Good. Now choose a username for your bot. It must end in 'bot'.
You: my_ai_assistant_bot
BotFather: Done! Your token is: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
Save this token carefully — it's your bot's access key. Never share it publicly.
Let's configure some useful settings:
/setdescription — A short description of your bot
/setabouttext — Text displayed in the profile
/setuserpic — Profile picture (optional)
/setcommands — List of commands (we'll come back to this)
For /setcommands, enter:
```
start - Start a conversation
reset - Reset memory
stats - View usage statistics
help - Show help
```
There you go — your bot exists. Now, let's give it a brain.
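Before writing any code, you can sanity-check the token against the Bot API's `getMe` method, which returns your bot's identity. A minimal sketch (the helper names are mine; actually calling `check_token` requires network access and your real token):

```python
import json
import urllib.request

def build_getme_url(token: str) -> str:
    """Build the Bot API endpoint that reports your bot's identity."""
    return f"https://api.telegram.org/bot{token}/getMe"

def check_token(token: str) -> dict:
    """Call getMe; a valid token returns {'ok': True, 'result': {...}}."""
    with urllib.request.urlopen(build_getme_url(token)) as resp:
        return json.load(resp)

# With the placeholder token from BotFather's reply above:
print(build_getme_url("1234567890:ABCdefGHIjklMNOpqrsTUVwxyz"))
```

If the token is wrong, Telegram answers with a 401 error — cheaper to find out now than after writing the whole bot.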
Step 2: Bot Architecture (Understand Before Coding)
Before diving into the code, let's understand the architecture. An intelligent Telegram chatbot consists of three components:
1. The Telegram Connector (python-telegram-bot)
Handles receiving messages, sending responses, and commands. This is the bot's "skin."
2. The LLM Brain (OpenAI/Claude/Gemini)
Generates intelligent responses. It receives the user's message + conversation history and responds.
3. Memory (SQLite or JSON Files)
Stores conversation history per user. Without this, the bot forgets everything between messages.
Simplified Diagram:
```
User → Telegram API → Python Bot → [Retrieves history]
                                         ↓
                                LLM (OpenAI/Claude)
                                         ↓
                                 [Saves response]
                                         ↓
                           Telegram API → User
```
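The flow above boils down to assembling one list per turn: system prompt first, then stored history, then the newest user message. A minimal illustration (with made-up messages) of the shape the chat API expects:

```python
# System prompt + stored history + the new message, in that order.
history = [
    {"role": "user", "content": "What's the capital of Italy?"},
    {"role": "assistant", "content": "Rome."},
]
new_message = {"role": "user", "content": "And its population?"}

# This is the exact `messages` shape the OpenAI chat API expects:
context = (
    [{"role": "system", "content": "You are a helpful assistant."}]
    + history
    + [new_message]
)

assert context[0]["role"] == "system"  # instructions always come first
assert context[-1]["content"] == "And its population?"  # newest message last
```

Keeping the system prompt out of stored history (as the code below does) means you can change the bot's personality without resetting anyone's memory.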
Now that we have the map, let's build the road.
Step 3: Install Dependencies
Create a folder for your project:
```bash
mkdir telegram-ai-chatbot
cd telegram-ai-chatbot
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Install the required libraries:
```bash
pip install python-telegram-bot openai anthropic python-dotenv
```
Package Details:
- python-telegram-bot — Popular asynchronous Python wrapper for the Telegram Bot API (use 20.8+)
- openai — Client for OpenAI and compatible models
- anthropic — Client for Claude (optional if using OpenAI)
- python-dotenv — Environment variable management
Create a .env file to securely store your API keys:
```
TELEGRAM_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxx
# Or if using Claude:
# ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
```
🔒 Security: NEVER add .env to your Git repository. Add it to .gitignore.
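It's also worth failing fast if a key is missing — better to crash at startup than mid-conversation. A sketch (require_env is a hypothetical helper name; the demo value is for illustration only):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or fail loudly at startup."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Demo with a dummy variable. In the real bot you'd check TELEGRAM_TOKEN
# right after load_dotenv(), before building the Application.
os.environ["DEMO_TOKEN"] = "not-a-real-token"
print(require_env("DEMO_TOKEN"))  # a missing variable would raise instead
```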
Step 4: The Complete Code (Copy-Paste)
Create a bot.py file and paste this code. I'll comment it line by line so you understand everything:
```python
#!/usr/bin/env python3
"""
Intelligent Telegram Chatbot with Memory and LLM
Author: AI-master.dev
License: MIT
"""
import os
import json
import logging
from datetime import datetime
from pathlib import Path
from dotenv import load_dotenv
from telegram import Update
from telegram.ext import (
    Application,
    CommandHandler,
    MessageHandler,
    filters,
    ContextTypes,
)
import openai
# Logging configuration (for debugging)
logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.INFO
)
logger = logging.getLogger(__name__)

# Load environment variables
load_dotenv()
TELEGRAM_TOKEN = os.getenv('TELEGRAM_TOKEN')
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')

# OpenAI configuration
openai.api_key = OPENAI_API_KEY

# Directory to store conversations
MEMORY_DIR = Path("conversations")
MEMORY_DIR.mkdir(exist_ok=True)
class ConversationMemory:
    """Manages conversation memory per user"""

    def __init__(self, user_id: int):
        self.user_id = user_id
        self.file_path = MEMORY_DIR / f"{user_id}.json"
        self.messages = self._load()

    def _load(self) -> list:
        """Loads history from the JSON file"""
        if self.file_path.exists():
            with open(self.file_path, 'r', encoding='utf-8') as f:
                return json.load(f)
        return []

    def _save(self):
        """Saves history to the JSON file"""
        with open(self.file_path, 'w', encoding='utf-8') as f:
            json.dump(self.messages, f, ensure_ascii=False, indent=2)

    def add_message(self, role: str, content: str):
        """Adds a message to history"""
        self.messages.append({
            "role": role,
            "content": content,
            "timestamp": datetime.now().isoformat()
        })
        # Limit to 50 messages to avoid overly long contexts:
        # keep the very first message + the 49 most recent
        if len(self.messages) > 50:
            self.messages = [self.messages[0]] + self.messages[-49:]
        self._save()

    def reset(self):
        """Resets memory"""
        self.messages = []
        if self.file_path.exists():
            self.file_path.unlink()

    def get_context(self) -> list:
        """Returns history in OpenAI format"""
        # Keep only role and content (drop the timestamp)
        return [
            {"role": msg["role"], "content": msg["content"]}
            for msg in self.messages
        ]
async def start_command(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """/start command"""
    user = update.effective_user
    welcome_message = (
        f"👋 Hello {user.first_name}!\n\n"
        "I'm your personal AI assistant. I can:\n"
        "• Answer your questions\n"
        "• Remember our conversations\n"
        "• Help with your daily tasks\n\n"
        "Ask me anything!\n\n"
        "Available commands:\n"
        "/reset - Reset our conversation\n"
        "/stats - View your statistics\n"
        "/help - Show help"
    )
    await update.message.reply_text(welcome_message)
async def reset_command(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """/reset command - Clears memory"""
    user_id = update.effective_user.id
    memory = ConversationMemory(user_id)
    memory.reset()
    await update.message.reply_text(
        "🔄 Memory reset! Our conversation starts fresh."
    )
async def stats_command(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """/stats command - Shows statistics"""
    user_id = update.effective_user.id
    memory = ConversationMemory(user_id)

    total_messages = len(memory.messages)
    user_messages = sum(1 for m in memory.messages if m["role"] == "user")
    assistant_messages = sum(1 for m in memory.messages if m["role"] == "assistant")

    # Note: legacy Markdown parse mode uses single asterisks for bold
    stats_text = (
        f"📊 *Your Statistics*\n\n"
        f"Total messages: {total_messages}\n"
        f"Your messages: {user_messages}\n"
        f"My responses: {assistant_messages}\n"
    )
    if memory.messages:
        first_msg = memory.messages[0].get("timestamp", "")
        stats_text += f"\nFirst conversation: {first_msg[:10]}"

    await update.message.reply_text(stats_text, parse_mode='Markdown')
async def help_command(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """/help command"""
    help_text = (
        "🤖 User Guide\n\n"
        "Just send me a message and I'll respond!\n\n"
        "Available Commands:\n"
        "/start - Start the bot\n"
        "/reset - Clear conversation memory\n"
        "/stats - View usage statistics\n"
        "/help - Show this message\n\n"
        "Example Questions:\n"
        "• Explain blockchain in simple terms\n"
        "• Give me a carbonara pasta recipe\n"
        "• Help me draft a professional email"
    )
    await update.message.reply_text(help_text)
async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """Handles text messages and generates LLM responses"""
    user_id = update.effective_user.id
    user_message = update.message.text

    # Show "typing..." indicator
    await update.message.chat.send_action("typing")

    try:
        # Load user memory
        memory = ConversationMemory(user_id)

        # Add user message
        memory.add_message("user", user_message)

        # Prepare context for the API
        messages = [
            {
                "role": "system",
                "content": (
                    "You are a helpful, friendly, and concise AI assistant. "
                    "You respond in clear, natural English. "
                    "You're here to help the user as best as possible."
                )
            }
        ] + memory.get_context()

        # Call the OpenAI API
        response = openai.chat.completions.create(
            model="gpt-3.5-turbo",  # or "gpt-4", "gpt-4-turbo"
            messages=messages,
            temperature=0.7,
            max_tokens=1000
        )

        # Extract the response
        assistant_reply = response.choices[0].message.content

        # Save the response to memory
        memory.add_message("assistant", assistant_reply)

        # Send the response to the user
        await update.message.reply_text(assistant_reply)

    except Exception as e:
        logger.error(f"Error processing message: {e}")
        await update.message.reply_text(
            "⚠️ Sorry, I encountered an error. Please try again later."
        )