
๐Ÿณ Cookbook: Multi-Agent Collaboration

Modern AI tasks often require different "experts" working together. This guide shows you how to build a collaborative team where agents hand off tasks to each other.

🎯 Goal

Build a "Doc Team" consisting of:

  1. Researcher: Searches for facts and gathers data.
  2. Writer: Transforms raw data into a polished Markdown report.
  3. Orchestrator: Manages the hand-off between them.


๐Ÿ—๏ธ Phase 1: Define Your Agents

In the app/agents/ directory, we define our specialists by extending BaseAgent.
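The snippets below assume a BaseAgent that binds a name and system prompt to an LLM call returning (content, tokens, latency), plus an AgentResult holding the output. Your template's actual base class will differ; this is a minimal self-contained sketch, with `_call_llm` stubbed out in place of a real LLM client:

```python
import asyncio
import time
from dataclasses import dataclass

@dataclass
class AgentResult:
    content: str
    tokens_used: int = 0

class BaseAgent:
    def __init__(self, name: str, system_prompt: str, **kwargs):
        self.name = name
        self.system_prompt = system_prompt

    async def _call_llm(self, prompt: str):
        # Stub: a real implementation would call an LLM provider here
        # and return the model's reply, token usage, and call latency.
        start = time.monotonic()
        content = f"[{self.name}] response to: {prompt}"
        latency = time.monotonic() - start
        return content, len(prompt.split()), latency

agent = BaseAgent(name="demo", system_prompt="You are helpful.")
content, tokens, latency = asyncio.run(agent._call_llm("Say hi"))
print(content)
```

Subclasses only need to implement `run()` on top of this contract, as the Researcher and Writer below do.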

1. The Researcher

# app/agents/researcher.py
from app.agents.base import BaseAgent, AgentResult

class Researcher(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(
            name="researcher",
            system_prompt="You are a meticulous researcher. Find facts and provide raw bullet points.",
            **kwargs
        )

    async def run(self, topic: str, **kwargs) -> AgentResult:
        # A real researcher would typically call search tools here
        content, tokens, _latency = await self._call_llm(f"Research this: {topic}")
        return AgentResult(content=content, tokens_used=tokens)

2. The Writer

# app/agents/writer.py
from app.agents.base import BaseAgent, AgentResult

class Writer(BaseAgent):
    def __init__(self, **kwargs):
        super().__init__(
            name="writer",
            system_prompt="You are a professional technical writer. Turn raw facts into polished Markdown.",
            **kwargs
        )

    async def run(self, raw_facts: str, **kwargs) -> AgentResult:
        content, tokens, _latency = await self._call_llm(f"Write a report from these facts: {raw_facts}")
        return AgentResult(content=content, tokens_used=tokens)

🧵 Phase 2: Implementing the Handoff

The Orchestrator acts as the project manager. It calls the Researcher first, then passes the output to the Writer.

# app/services/doc_workflow.py
from app.agents.researcher import Researcher
from app.agents.writer import Writer

async def run_doc_team(topic: str):
    researcher = Researcher()
    writer = Writer()

    print(f"🔍 Researcher is starting on: {topic}")
    research_result = await researcher.run(topic)

    print(f"โœ๏ธ Writer is formatting the findings...")
    final_report = await writer.run(research_result.content)

    return final_report.content
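Stripped of the app-specific imports, the handoff is just sequential awaits where each agent's output feeds the next. A self-contained sketch with stubbed agents (the stub names and behavior are illustrative, not the template's API):

```python
import asyncio

class StubAgent:
    """Minimal stand-in for a BaseAgent subclass."""
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform

    async def run(self, text: str) -> str:
        return self.transform(text)

async def run_doc_team(topic: str) -> str:
    researcher = StubAgent("researcher", lambda t: f"- fact about {t}")
    writer = StubAgent("writer", lambda facts: f"# Report\n{facts}")

    facts = await researcher.run(topic)   # step 1: gather raw facts
    report = await writer.run(facts)      # step 2: hand off to the writer
    return report

print(asyncio.run(run_doc_team("A2A protocols")))
```

Because each step awaits the previous one, the writer never starts with incomplete research; adding a third specialist is just one more await in the chain.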

⚡ Phase 3: Using the Built-in Spec Workflow

The Traylinx Template includes a powerful SpecWorkflow that automates this hand-off using a V-model pipeline (Plan → Design → Execute → Evaluate).

You can run this via the CLI without writing any code:

poetry run agentic workflow "Research the latest trends in A2A protocols and write a summary"

How it works:

  1. Planner: Breaks the request into atomic tasks.
  2. Designer: Figures out the "architecture" of the answer.
  3. Executor: Performs the actual work (calling tools/LLMs).
  4. Evaluator: Double-checks the final output for quality.
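Conceptually, the four phases form a pipeline where each stage consumes the previous stage's output and the evaluator can reject weak results and trigger a retry. A hypothetical sketch of that loop (none of these function names come from the template; all four stages are stubbed):

```python
import asyncio

async def planner(request: str) -> list[str]:
    # Break the request into atomic tasks (stubbed).
    return [part.strip() for part in request.split(" and ")]

async def designer(tasks: list[str]) -> dict:
    # Decide how the answer should be structured (stubbed).
    return {"sections": tasks}

async def executor(design: dict) -> str:
    # Perform the actual work for each section (stubbed).
    return "\n".join(f"## {s}\ndone" for s in design["sections"])

async def evaluator(output: str) -> bool:
    # Quality gate: reject outputs with no completed sections.
    return output.count("##") > 0

async def spec_workflow(request: str, max_retries: int = 2) -> str:
    for _ in range(max_retries):
        design = await designer(await planner(request))
        output = await executor(design)
        if await evaluator(output):
            return output
    raise RuntimeError("Evaluator rejected all attempts")

result = asyncio.run(spec_workflow("Research A2A trends and write a summary"))
print(result)
```

The evaluator-as-gate design is what separates this from the simple Phase 2 handoff: a failing check loops back instead of shipping a bad answer.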

🧵 Phase 4: Stateful Multi-Agent Threads

If your collaboration requires multiple turns or "pausing" for human feedback, use the Thread System.

# Create a stateful thread
poetry run agentic thread "Analyze this legal document" --namespace="legal-dept"

Threads are stored in Redis (in-memory) or Postgres (database), allowing multiple agents to contribute to a shared context over time.
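Conceptually, a thread is an append-only message log keyed by namespace and thread id; the template backs this with Redis or Postgres, but the storage contract can be sketched with an in-memory dict (the class and method names here are illustrative, not the template's API):

```python
from collections import defaultdict

class InMemoryThreadStore:
    """Toy stand-in for a Redis/Postgres-backed thread store."""
    def __init__(self):
        # (namespace, thread_id) -> ordered list of messages
        self._threads = defaultdict(list)

    def append(self, namespace: str, thread_id: str, agent: str, content: str):
        self._threads[(namespace, thread_id)].append(
            {"agent": agent, "content": content}
        )

    def history(self, namespace: str, thread_id: str):
        return list(self._threads[(namespace, thread_id)])

store = InMemoryThreadStore()
store.append("legal-dept", "t1", "researcher", "Clause 4 looks risky.")
store.append("legal-dept", "t1", "writer", "Drafted a summary of clause 4.")
print(len(store.history("legal-dept", "t1")))  # both agents share one context
```

Because every agent reads and writes the same keyed log, a workflow can pause for human feedback and resume later with full context intact.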


💡 Pro Tip: Agent Discovery

In a production Stargate network, your Researcher might be a separate agent running on a different server. You can use the SearchAgentsTool to discover a live researcher on the network:

from app.tools.a2a import SearchAgentsTool, RemoteAgentCallTool

# 1. Find a specialist
discovery = SearchAgentsTool()
agents = await discovery.execute(query="expert web researcher")

# 2. Call the remote expert (assumes the search returned at least one agent)
caller = RemoteAgentCallTool()
remote_result = await caller.execute(
    target_url=agents[0].base_url,
    message="Research topic X"
)

This transforms your local team into a Distributed Agent Swarm. 🚀