# 🍳 Cookbook: Multi-Agent Collaboration
Modern AI tasks often require different "experts" working together. This guide shows you how to build a collaborative team where agents hand off tasks to each other.
## 🎯 Goal

Build a "Doc Team" consisting of:

1. **Researcher**: Searches for facts and gathers data.
2. **Writer**: Transforms raw data into a beautiful Markdown report.
3. **Orchestrator**: Manages the hand-off between them.
## 🏗️ Phase 1: Define Your Agents

In the `app/agents/` directory, we define our specialists by extending `BaseAgent`.
### 1. The Researcher
```python
# app/agents/researcher.py
from app.agents.base import BaseAgent, AgentResult


class Researcher(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(
            name="researcher",
            system_prompt="You are a meticulous researcher. Find facts and provide raw bullet points.",
            **kwargs,
        )

    async def run(self, topic: str, **kwargs) -> AgentResult:
        # A real researcher would typically use search tools here
        content, tokens, latency = await self._call_llm(f"Research this: {topic}")
        return AgentResult(content=content, tokens_used=tokens)
```
### 2. The Writer
```python
# app/agents/writer.py
from app.agents.base import BaseAgent, AgentResult


class Writer(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(
            name="writer",
            system_prompt="You are a professional technical writer. Turn raw facts into polished Markdown.",
            **kwargs,
        )

    async def run(self, raw_facts: str, **kwargs) -> AgentResult:
        content, tokens, latency = await self._call_llm(f"Write a report from these facts: {raw_facts}")
        return AgentResult(content=content, tokens_used=tokens)
```
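Both agents assume a `BaseAgent` base class that exposes `_call_llm`, plus an `AgentResult` container. The template provides these for you; purely for orientation, a minimal sketch might look like the following (hypothetical, with `_call_llm` stubbed in place of a real LLM call):

```python
# Hypothetical minimal sketch of BaseAgent / AgentResult, for illustration only.
# The real Traylinx template classes will differ; _call_llm is stubbed here.
import time
from dataclasses import dataclass


@dataclass
class AgentResult:
    content: str
    tokens_used: int = 0


class BaseAgent:
    def __init__(self, name: str, system_prompt: str, **kwargs):
        self.name = name
        self.system_prompt = system_prompt

    async def _call_llm(self, prompt: str) -> tuple[str, int, float]:
        # Stub: a real implementation would call an LLM provider and
        # return (content, tokens_used, latency_seconds)
        start = time.perf_counter()
        content = f"[{self.name}] response to: {prompt}"
        return content, len(prompt.split()), time.perf_counter() - start
```

The three-tuple return explains why the agents above unpack `content, tokens, latency` from each call.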
## 🧵 Phase 2: Implementing the Handoff
The Orchestrator acts as the project manager. It calls the Researcher first, then passes the output to the Writer.
```python
# app/services/doc_workflow.py
from app.agents.researcher import Researcher
from app.agents.writer import Writer


async def run_doc_team(topic: str) -> str:
    researcher = Researcher()
    writer = Writer()

    print(f"🔍 Researcher is starting on: {topic}")
    research_result = await researcher.run(topic)

    print("✍️ Writer is formatting the findings...")
    final_report = await writer.run(research_result.content)

    return final_report.content
```
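Because the workflow is a plain coroutine, you can also fan it out over several topics concurrently with `asyncio.gather`. A self-contained sketch, using a hypothetical stub in place of `run_doc_team` so it runs without an LLM:

```python
# Fan a doc-team-style coroutine out over several topics concurrently.
# stub_doc_team is a hypothetical stand-in for run_doc_team.
import asyncio


async def stub_doc_team(topic: str) -> str:
    await asyncio.sleep(0)  # simulate async LLM work
    return f"# Report on {topic}"


async def main() -> list[str]:
    topics = ["solar power", "wind power"]
    # gather preserves input order, so reports line up with topics
    return await asyncio.gather(*(stub_doc_team(t) for t in topics))


reports = asyncio.run(main())
```

Swapping `stub_doc_team` for the real `run_doc_team` gives you a batch report generator with no extra orchestration code.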
## ⚡ Phase 3: Using the Built-in Spec Workflow

The Traylinx Template includes a powerful `SpecWorkflow` that automates this using a V-model (Plan → Design → Execute → Evaluate).
You can run this workflow from the CLI without writing any code.
### How it works

- **Planner**: Breaks the request into atomic tasks.
- **Designer**: Figures out the "architecture" of the answer.
- **Executor**: Performs the actual work (calling tools/LLMs).
- **Evaluator**: Double-checks the final output for quality.
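The four roles form a simple sequential pipeline where each stage consumes the previous stage's output. This is not the actual `SpecWorkflow` API, but the control flow can be sketched as a chain of stages (all names hypothetical):

```python
# Hypothetical sketch of a Plan → Design → Execute → Evaluate pipeline.
# Illustrates the control flow only; the real SpecWorkflow API differs.
from typing import Callable

Stage = Callable[[str], str]


def plan(request: str) -> str:
    return f"tasks for: {request}"


def design(tasks: str) -> str:
    return f"architecture for ({tasks})"


def execute(design_doc: str) -> str:
    return f"output built from ({design_doc})"


def evaluate(output: str) -> str:
    return f"approved: {output}"


def run_spec_pipeline(request: str, stages: list[Stage]) -> str:
    result = request
    for stage in stages:  # each stage consumes the previous stage's output
        result = stage(result)
    return result


final = run_spec_pipeline("compare databases", [plan, design, execute, evaluate])
```

The key design property is that every stage shares one interface, so stages can be swapped, skipped, or retried independently.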
## 🧵 Phase 4: Stateful Multi-Agent Threads
If your collaboration requires multiple turns or "pausing" for human feedback, use the Thread System.
```bash
# Create a stateful thread
poetry run agentic thread "Analyze this legal document" --namespace="legal-dept"
```
Threads live in memory (Redis) or in a database (Postgres), allowing multiple agents to contribute to a shared context over time.
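Conceptually, a thread is a namespaced message log that multiple agents append to and read from. An in-memory dictionary can stand in for the Redis/Postgres backends; all names below are hypothetical, not the template's real Thread API:

```python
# In-memory stand-in for a thread store; Redis or Postgres would play this
# role in production. Names are hypothetical, not the real Thread API.
from collections import defaultdict


class ThreadStore:
    def __init__(self):
        # keyed by (namespace, thread_id) so departments stay isolated
        self._threads: dict[tuple[str, str], list[dict]] = defaultdict(list)

    def append(self, namespace: str, thread_id: str, agent: str, content: str) -> None:
        self._threads[(namespace, thread_id)].append({"agent": agent, "content": content})

    def history(self, namespace: str, thread_id: str) -> list[dict]:
        return self._threads[(namespace, thread_id)]


store = ThreadStore()
store.append("legal-dept", "t1", "researcher", "Clause 4 looks risky.")
store.append("legal-dept", "t1", "writer", "Drafted a summary of clause 4.")
```

Each agent reads `history()` before acting, which is what lets a collaboration pause for human feedback and resume later.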
## 💡 Pro Tip: Agent Discovery

In a production Stargate network, your Researcher might be a different agent running on a different server. You can use the `SearchAgentsTool` to find a researcher live on the network:
```python
from app.tools.a2a import SearchAgentsTool, RemoteAgentCallTool

# 1. Find a specialist
discovery = SearchAgentsTool()
agents = await discovery.execute(query="expert web researcher")

# 2. Call the remote expert
caller = RemoteAgentCallTool()
remote_result = await caller.execute(
    target_url=agents[0].base_url,
    message="Research topic X",
)
```
This transforms your local team into a Distributed Agent Swarm. 🚀