Multi-agent Systems: Collaborating with Multiple AIs
What if a single AI is no longer enough? Welcome to the era of multi-agent systems—where multiple artificial intelligences collaborate, divide tasks, and produce results that a single agent could never achieve alone.
In this advanced guide, we explore multi-agent architectures, major frameworks (CrewAI, AutoGen, LangGraph), and build a concrete pipeline with three specialized agents.
🏗️ Why Multi-agent Systems?
A single AI agent is powerful. But as soon as the task becomes complex—writing an SEO-optimized article in 3 languages, analyzing a dataset and generating a report, orchestrating a deployment—the limitations become apparent:
- Saturated context window: a single agent doing everything accumulates unnecessary context
- Impossible specialization: the "SEO expert + writer + translator" prompt yields mediocre results everywhere
- No cross-checking: no one reviews the work
- Limited scalability: impossible to parallelize
Multi-agent systems solve all of this by applying the age-old principle of division of labor.
| Approach | Advantage | Limitation |
|---|---|---|
| Single agent | Simple, quick to set up | Limited in complexity |
| Sequential multi-agent | Specialization, quality | Slower |
| Parallel multi-agent | Fast + specialized | Complex orchestration |
| Hybrid multi-agent | Best of both worlds | Advanced setup |
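The sequential-versus-parallel trade-off in the table can be sketched in a few lines of plain Python. This is a minimal, framework-free illustration: the `write_section` coroutine is a hypothetical stand-in for a slow LLM call, not a real agent API.

```python
import asyncio
import time

# Hypothetical specialized agent, stubbed as a slow async call.
async def write_section(name: str) -> str:
    await asyncio.sleep(0.1)  # stands in for an LLM request
    return f"section:{name}"

async def sequential(names):
    # Single-agent style: one task after another
    return [await write_section(n) for n in names]

async def parallel(names):
    # Parallel multi-agent style: all tasks at once
    return list(await asyncio.gather(*(write_section(n) for n in names)))

names = ["intro", "body", "outro"]

start = time.perf_counter()
seq = asyncio.run(sequential(names))
seq_time = time.perf_counter() - start

start = time.perf_counter()
par = asyncio.run(parallel(names))
par_time = time.perf_counter() - start

print(seq == par)           # same results
print(par_time < seq_time)  # the parallel run finishes sooner
```

Same output either way; the parallel version simply overlaps the waiting time, which is where most of an LLM pipeline's latency lives.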
🧠 The 3 Fundamental Architectures
Orchestrator/Workers Architecture
This is the most common model. An orchestrator agent (the "project manager") distributes tasks to specialized worker agents.
```
┌──────────────────┐
│   Orchestrator   │
│ (Project Manager)│
└─────────┬────────┘
          │
    ┌─────┼─────┐
    ▼     ▼     ▼
┌──────┐┌──────┐┌──────┐
│Writer││ SEO  ││Trans.│
│Agent ││Agent ││Agent │
└──────┘└──────┘└──────┘
```
Operation:
1. The orchestrator receives the overall task
2. It breaks it down into subtasks
3. Each worker executes its part
4. The orchestrator aggregates and validates the results
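The four steps above can be sketched in plain Python, without any framework. The worker functions are hypothetical stand-ins for LLM-backed agents; only the shape of the pattern matters here.

```python
# Hypothetical workers, each a stand-in for a specialized LLM agent.
def writer_agent(topic: str) -> str:
    return f"Draft about {topic}"

def seo_agent(draft: str) -> str:
    return draft + " [SEO-optimized]"

def translator_agent(article: str) -> str:
    return article + " [translated]"

def orchestrator(topic: str) -> dict:
    """Receives the overall task, breaks it down, delegates, aggregates."""
    # Steps 1-2: receive the task and decompose it into an ordered plan
    draft = writer_agent(topic)              # step 3: each worker runs its part
    optimized = seo_agent(draft)
    translated = translator_agent(optimized)
    # Step 4: aggregate and validate the results
    assert "[SEO-optimized]" in translated, "validation failed"
    return {"draft": draft, "final": translated}

result = orchestrator("AI agents")
print(result["final"])  # → "Draft about AI agents [SEO-optimized] [translated]"
```

Note that all data flows through the orchestrator, which is exactly why it is both the single point of control and the single point of failure.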
Advantages:
- Centralized control
- Easy to debug (single decision point)
- Workers are interchangeable
Disadvantages:
- Single point of failure (the orchestrator)
- The orchestrator must be highly competent
- Latency if everything goes through it
Peer-to-Peer Architecture
Agents communicate directly with each other, without a central leader. Each agent decides when to hand off.
```
┌───────┐     ┌───────┐
│Agent A│◄───►│Agent B│
└───┬───┘     └───┬───┘
    │             │
    └──────┬──────┘
           ▼
       ┌───────┐
       │Agent C│
       └───────┘
```
Use case: AI debates, brainstorming, cross-checking.
Advantages:
- No central bottleneck
- Resilient (if one agent fails, others continue)
- Emergence of creative solutions
Disadvantages:
- Risk of infinite loops (agents keep restarting endlessly)
- Difficult to control and predict
- Complex debugging
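A minimal sketch of the hand-off loop, again framework-free: each hypothetical agent transforms the message and names its successor, and a hard round cap guards against the infinite-loop risk listed above.

```python
# Hypothetical peer agents: each transforms the message and picks the next peer.
def agent_a(msg): return msg + " +A", "B"
def agent_b(msg): return msg + " +B", "C"
def agent_c(msg): return msg + " +C", None  # None means "done"

AGENTS = {"A": agent_a, "B": agent_b, "C": agent_c}

def run_peers(msg: str, start: str = "A", max_rounds: int = 10) -> str:
    """Agents hand off directly to one another; max_rounds caps the loop."""
    current = start
    for _ in range(max_rounds):
        msg, current = AGENTS[current](msg)
        if current is None:  # an agent decided the work is finished
            return msg
    raise RuntimeError("max_rounds reached without convergence")

print(run_peers("idea"))  # → "idea +A +B +C"
```

Real frameworks (AutoGen's `max_round`, for instance) enforce exactly this kind of cap, because without it two agents can bounce a task between themselves forever.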
Hierarchical Architecture
A hybrid of the previous two, with multiple levels of management: a main orchestrator delegates to sub-orchestrators, which in turn manage workers.
```
┌───────────────────┐
│ Main Orchestrator │
└─────────┬─────────┘
    ┌─────┴─────┐
    ▼           ▼
┌───────┐   ┌───────┐
│Manager│   │Manager│
│Content│   │Distrib│
└───┬───┘   └───┬───┘
    │           │
  ┌─┼─┐       ┌─┼─┐
  ▼ ▼ ▼       ▼ ▼ ▼
  W W W       W W W
```
Use case: Complex projects with many agents (10+).
| Architecture | Complexity | Control | Scalability | Use Case |
|---|---|---|---|---|
| Orchestrator/Workers | Medium | Strong | Medium | Linear pipelines |
| Peer-to-Peer | High | Weak | High | Debates, creativity |
| Hierarchical | Very high | Strong | Very high | Complex projects |
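The hierarchical pattern is easy to see as code: the top level only ever talks to managers, and each manager fans work out to its own workers. A minimal sketch with hypothetical team names and stub workers:

```python
# Hypothetical two-level hierarchy: the main orchestrator delegates whole
# sub-goals to managers, which fan work out to their own workers.
def worker(task: str) -> str:
    return f"done:{task}"

def manager(team_name: str, subtasks: list[str]) -> list[str]:
    # A sub-orchestrator: distributes subtasks to its workers
    return [worker(f"{team_name}/{t}") for t in subtasks]

def main_orchestrator(plan: dict[str, list[str]]) -> list[str]:
    # The top level never touches individual workers, only managers
    results = []
    for team, subtasks in plan.items():
        results.extend(manager(team, subtasks))
    return results

plan = {
    "content": ["write", "edit", "illustrate"],
    "distribution": ["seo", "newsletter", "social"],
}
results = main_orchestrator(plan)
print(results)
```

Each extra management level trades latency for scalability: the main orchestrator's context stays small no matter how many workers sit at the bottom.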
🛠️ Multi-agent Frameworks
CrewAI — The Ready-to-use AI Team
CrewAI is the most accessible framework. It models a team (Crew) composed of agents with roles, goals, and tools.
```python
from crewai import Agent, Task, Crew

# Define agents
writer = Agent(
    role="Expert Web Writer",
    goal="Write engaging and informative articles",
    backstory="You are a web writer with 10 years of experience in tech.",
    verbose=True,
    llm="gpt-4o"
)

seo_expert = Agent(
    role="SEO Expert",
    goal="Optimize content for search engines",
    backstory="You are an SEO consultant specializing in tech content.",
    verbose=True,
    llm="gpt-4o"
)

translator = Agent(
    role="FR→EN Translator",
    goal="Produce natural and faithful translations",
    backstory="You are a professional bilingual French-English translator.",
    verbose=True,
    llm="anthropic/claude-3-5-sonnet-20241022"  # LiteLLM-style provider/model id
)

# Define tasks
writing_task = Task(
    description="Write a 2000-word article on {topic}",
    expected_output="Article in markdown, structured with H2/H3",
    agent=writer
)

seo_task = Task(
    description="Optimize the article: title, meta, keywords, structure",
    expected_output="Optimized article + SEO suggestions",
    agent=seo_expert
)

translation_task = Task(
    description="Translate the optimized article into English",
    expected_output="Article translated into natural English",
    agent=translator
)

# Assemble the team (tasks run sequentially by default)
crew = Crew(
    agents=[writer, seo_expert, translator],
    tasks=[writing_task, seo_task, translation_task],
    verbose=True
)

# Launch
result = crew.kickoff(inputs={"topic": "AI Agents in 2025"})
print(result)
```
Strengths of CrewAI:
- Simple and intuitive API
- Automatic context management between agents
- Support for many LLMs (OpenAI, Anthropic, local)
- Built-in tools (web search, file reading)
Limitations:
- Less flexible than LangGraph for complex flows
- No native conditional branching
AutoGen — The Microsoft Framework
AutoGen from Microsoft Research is designed for multi-agent conversations. Agents discuss with each other to solve a problem.
```python
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

# LLM configuration
llm_config = {
    "model": "gpt-4o",
    "api_key": "sk-..."
}

# Agents
assistant = AssistantAgent(
    name="assistant",
    system_message="You are an AI assistant expert in programming.",
    llm_config=llm_config
)

critic = AssistantAgent(
    name="critic",
    system_message="You review code and report bugs and improvements.",
    llm_config=llm_config
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    # use_docker=False runs generated code locally; set True if Docker is available
    code_execution_config={"work_dir": "coding", "use_docker": False}
)

# Group chat
group_chat = GroupChat(
    agents=[user_proxy, assistant, critic],
    messages=[],
    max_round=10
)

manager = GroupChatManager(
    groupchat=group_chat,
    llm_config=llm_config
)

# Start the conversation
user_proxy.initiate_chat(
    manager,
    message="Write a Python script that scrapes prices from Amazon"
)
```
Strengths of AutoGen:
- Natural conversations between agents
- Integrated code execution
- GroupChat for multi-agent discussions
- Great for AI pair-programming
Limitations:
- More complex API
- Less structured than CrewAI for pipelines
LangGraph — The Workflow Graph
LangGraph (by LangChain) models workflows as directed graphs. Each node is an agent or function, edges define the flow.
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict

class ArticleState(TypedDict):
    topic: str
    draft: str
    seo_article: str
    translated_article: str
    status: str

# call_llm and evaluate_quality are assumed helpers (an LLM call and a
# quality-scoring function) defined elsewhere in the project.

def writer_agent(state: ArticleState) -> dict:
    """Agent that writes the draft"""
    draft = call_llm(f"Write an article on: {state['topic']}")
    return {"draft": draft, "status": "written"}

def seo_agent(state: ArticleState) -> dict:
    """Agent that optimizes SEO"""
    seo_article = call_llm(f"Optimize this article for SEO:\n{state['draft']}")
    return {"seo_article": seo_article, "status": "optimized"}

def translator_agent(state: ArticleState) -> dict:
    """Agent that translates"""
    translated = call_llm(f"Translate to English:\n{state['seo_article']}")
    return {"translated_article": translated, "status": "translated"}

def check_quality(state: ArticleState) -> str:
    """Decides whether to continue or start over"""
    score = evaluate_quality(state["seo_article"])
    if score > 0.8:
        return "translate"
    return "rewrite"

# Build the graph
workflow = StateGraph(ArticleState)

# Add nodes
workflow.add_node("write", writer_agent)
workflow.add_node("optimize_seo", seo_agent)
workflow.add_node("translate", translator_agent)

# Define transitions
workflow.set_entry_point("write")
workflow.add_edge("write", "optimize_seo")
workflow.add_conditional_edges(
    "optimize_seo",
    check_quality,
    {
        "translate": "translate",
        "rewrite": "write"
    }
)
workflow.add_edge("translate", END)

# Compile and execute
app = workflow.compile()
result = app.invoke({"topic": "AI Agents in 2025"})
```
Strengths of LangGraph:
- Conditional branching (if quality insufficient → restart)
- Graph visualization of workflow
- Typed shared state
- Persistence and error recovery
Limitations:
- Steeper learning curve
- Verbose for simple cases
Framework Comparison
| Criteria | CrewAI | AutoGen | LangGraph |
|---|---|---|---|
| Ease | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| Flexibility | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Conversations | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
| Workflows | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Debug | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ |
| Community | Large | Large | Very large |
| Ideal for | Simple pipelines | Pair-programming | Complex workflows |
🔧 Concrete Case: Writer + SEO + Translator Pipeline
Let's put this into practice. We'll build a complete pipeline with CrewAI that:
- Writer Agent → writes a blog article
- SEO Agent → optimizes the title, H2s, keywords
- Translator Agent → translates to English
Setup
```bash
# Installation
pip install crewai crewai-tools langchain-openai

# Environment variables
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```
Complete Code
```python
"""
Multi-agent pipeline: Writer → SEO → Translator
Uses CrewAI with different LLMs per agent
"""
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
# Web search tool for the writer
search_tool = SerperDevTool()

# ═══════════════════════════════════════
# AGENTS
# ═══════════════════════════════════════

writer = Agent(
    role="Senior Web Writer",
    goal=(
        "Write captivating, well-structured blog articles "
        "with concrete examples and an accessible tone."
    ),
    backstory=(
        "You've been a web writer for 10 years, specializing in "
        "tech and AI. You write in French with an engaging style. "
        "You use analogies to explain complex concepts."
    ),
    tools=[search_tool],
    verbose=True,
    llm="gpt-4o",
    max_iter=3
)

seo_expert = Agent(
    role="Tech SEO Consultant",
    goal=(
        "Optimize each article to rank on the first page of Google "
        "for targeted keywords."
    ),
    backstory=(
        "You're an SEO consultant with expertise in tech content. "
        "You know the latest guidelines"
    ),