# Traylinx Agent Template
A production-ready template for building AI agent workflows with multi-provider LLM support.
## Features

- **A2A & Sentinel Auth** - Optional integration with the Traylinx Agent-to-Agent protocol
- **Spec-Driven Planning** - Structured workflow: Requirements → Design → Tasks
- **Async Threads** - Stateful multi-agent conversations with pause/resume support
- **Consolidated Memory** - Namespace-based isolation for STM (Redis) and LTM (Postgres)
- **Multi-Agent Architecture** - Orchestrator, Planner, Designer, Validator, Executor, Evaluator
- **Multi-Provider Support** - SwitchAI, OpenAI, Anthropic, Groq, Gemini
- **Function Calling** - Built-in tool/function calling support
- **Dual Interface** - Both a FastAPI HTTP server and a Typer CLI
- **Async First** - Fully asynchronous for performance
- **Type-Safe** - Pydantic models, mypy, ruff
## Quick Start

```bash
# Install with Poetry
poetry install

# Configure environment
cp .env.example .env
# Edit .env with your API key

# Run the spec-driven workflow
poetry run agentic workflow "Build a web scraper" -v

# Run an async thread
poetry run agentic thread "Build a CLI tool" --namespace=my-tenant

# Or start the API server
poetry run agentic serve
```
## Docker Quick Start

Preferred method: run anywhere with zero dependencies.

```bash
# Run with Docker Compose (includes Redis for caching)
traylinx run

# Or manually:
docker compose up -d

# View logs
traylinx logs

# Stop
traylinx stop
```
## Publishing to GHCR

Share your agent with anyone:

```bash
# Build a multi-arch image and push it to the registry
traylinx publish
# Building for linux/amd64,linux/arm64...
# Pushing to ghcr.io/traylinx/my-agent:1.0.0
# Published!

# Anyone can now run your agent:
traylinx pull my-agent
```
## Docker Compose Files

| File | Description |
|---|---|
| `docker-compose.yml` | Development (Redis) |
| `docker-compose.prod.yml` | Production (Redis + Postgres) |
## GitHub Actions CI/CD

This template includes a workflow at `.github/workflows/docker-publish.yml` that:

- Builds multi-architecture images (AMD64 + ARM64)
- Pushes to GHCR on version tags (`v*`)
- Runs Trivy security scanning
## Project Structure

```
app/
├── core/       # Config, errors, logging
├── llm/        # LLM client and router
├── agents/     # 6 specialized agents
├── prompts/    # Modular system prompts
├── models/     # Pydantic spec models
├── services/   # Memory, thread, and workflow services
├── tools/      # Function calling tools
├── api/        # FastAPI application
└── main.py     # CLI entry point
```
## CLI Commands

```bash
# Run the full spec-driven workflow (Plan → Design → Validate → Execute → Evaluate)
agentic workflow "Build a web scraper"

# Create and run an async thread
agentic thread "Build a REST API" --namespace=tenant-1

# Create a thread without auto-execution
agentic thread "Complex Task" --no-run

# Create a structured spec (Requirements + Tasks)
agentic plan "Build a REST API"

# Create a technical design from requirements
agentic design "FR1: API for user auth..."

# Validate a spec for consistency
agentic validate "Full spec here..."

# Execute a specific action
agentic execute "Write a Python function to sort a list"

# Start the API server
agentic serve --port 8000
```
## API Endpoints

| Endpoint | Description |
|---|---|
| `GET /health` | Health check |
| `POST /v1/threads` | Create thread |
| `POST /v1/threads/{id}/run` | Run thread |
| `POST /v1/threads/{id}/pause` | Pause thread |
| `GET /v1/threads` | List threads |
| `POST /v1/orchestrate` | Run orchestrator agent |
| `POST /v1/plan` | Run planner agent |
| `POST /v1/execute` | Run executor agent |
| `POST /v1/evaluate` | Run evaluator agent |
## Configuration

Set these in `.env`:

| Variable | Description | Default |
|---|---|---|
| `LLM_PROVIDER` | `switchai`, `openai`, `anthropic`, `groq`, `gemini` | `switchai` |
| `MODEL_FAST` | Fast model for quick tasks | `openai/openai/gpt-oss-20b` |
| `MODEL_BALANCED` | Balanced model for general tasks | `openai/openai/gpt-oss-120b` |
| `MODEL_POWERFUL` | Powerful model for complex tasks | `openai/llama-3.3-70b-versatile` |
| `LLM_MODEL` | Override model (optional) | - |
| `LLM_API_KEY` | Your API key | Required |
| `LLM_BASE_URL` | Custom endpoint URL | - |
| `TRAYLINX_CLIENT_ID` | Optional: enable A2A auth | - |
| `TRAYLINX_CLIENT_SECRET` | Optional: A2A secret | - |
## Authentication

This API supports three authentication methods.

### Option 1: API Key Authentication

For simple service-to-service authentication:

```bash
curl -X POST "http://localhost:8000/v1/execute" \
  -H "Authorization: Bearer <api_key>" \
  -H "Content-Type: application/json" \
  -d '{"input": "Write a Python function"}'
```

API keys are configured via the `API_KEYS` environment variable (a comma-separated list).
### Option 2: User Authentication (Human API)

For human users accessing the API via a UI or app:

```bash
curl -X POST "http://localhost:8000/v1/execute" \
  -H "Authorization: Bearer <user_token>" \
  -H "Content-Type: application/json" \
  -d '{"input": "Write a Python function"}'
```

User tokens are validated against the Auth Service.
### Option 3: Agent Authentication (A2A / Machine-to-Machine)

For other agents or services calling this API via Traylinx Sentinel:

```bash
curl -X POST "http://localhost:8000/v1/execute" \
  -H "X-Agent-Secret-Token: <agent_token>" \
  -H "X-Agent-User-Id: <agent_id>" \
  -H "Content-Type: application/json" \
  -d '{"input": "Write a Python function"}'
```

| Header | Value | Description |
|---|---|---|
| `X-Agent-Secret-Token` | Agent's secret token | Obtained from Traylinx Sentinel |
| `X-Agent-User-Id` | Agent's UUID | Agent identifier registered with Sentinel |
### Authentication Flow

```
Request arrives
│
├─ Has X-Agent-Secret-Token? ──→ AGENT MODE (validate via Sentinel)
│
├─ Has Authorization: Bearer?
│   │
│   ├─ Token in API_KEYS? ─────→ API_KEY MODE (direct access)
│   │
│   └─ Not in API_KEYS? ───────→ USER MODE (validate via Auth Service)
│
└─ No auth headers? ───────────→ 401 Unauthorized
```
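The dispatch logic can be sketched as a plain function. The header names match the tables above, but `resolve_auth_mode` and its return values are illustrative, not the template's actual middleware:

```python
def resolve_auth_mode(headers: dict[str, str], api_keys: set[str]) -> str:
    """Decide the authentication mode for a request, mirroring the flow above."""
    if "X-Agent-Secret-Token" in headers:
        return "agent"          # validate the token via Sentinel
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        token = auth.removeprefix("Bearer ")
        # Known API key => direct access; anything else is treated as a user token
        return "api_key" if token in api_keys else "user"
    return "unauthorized"       # no recognized auth headers => 401

print(resolve_auth_mode({"Authorization": "Bearer key-1"}, {"key-1"}))  # api_key
```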
## Adding Custom Agents

```python
from app.agents import BaseAgent, AgentResult

class MyAgent(BaseAgent):
    def __init__(self):
        super().__init__(
            name="my-agent",
            system_prompt="You are a helpful assistant.",
        )

    async def run(self, input_text: str, **kwargs) -> AgentResult:
        content, tokens, latency = await self._call_llm(input_text)
        return AgentResult(content=content, tokens_used=tokens)
```
## Adding Custom Tools

```python
from app.tools.base import BaseTool, ToolResult, register_tool

class WebSearchTool(BaseTool):
    name = "web_search"
    description = "Search the web"
    parameters = {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
        },
        "required": ["query"],
    }

    async def execute(self, query: str) -> ToolResult:
        # Your implementation
        return ToolResult(success=True, data={"results": [...]})

register_tool(WebSearchTool())
```
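To show how name-based registration and lookup can work, here is a minimal, self-contained sketch; `MiniTool`, `register`, and `get_tool` are illustrative names, not the template's actual `app.tools.base` implementation:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MiniTool:
    name: str
    description: str
    func: Callable[..., Any]

# Global registry mapping tool names to tool instances
_REGISTRY: dict[str, MiniTool] = {}

def register(tool: MiniTool) -> None:
    if tool.name in _REGISTRY:
        raise ValueError(f"Tool {tool.name!r} already registered")
    _REGISTRY[tool.name] = tool

def get_tool(name: str) -> MiniTool:
    return _REGISTRY[name]

register(MiniTool("echo", "Echo the input", lambda text: text))
print(get_tool("echo").func("hi"))  # hi
```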
## Stargate Integration

This template supports integration with the Traylinx Stargate network for agent-to-agent (A2A) communication.
### The @tool() Decorator

Register tools with the decorator-based approach for maximum developer ergonomics:

```python
import httpx

from app.tools.base import tool

@tool(name="calculate_revenue", category="finance")
def get_revenue(company: str, year: int) -> float:
    """Calculates yearly revenue for a given company."""
    return 100.50

# Async tools are also supported
@tool(name="fetch_data", category="data")
async def fetch_data(url: str) -> dict:
    """Fetches data from a URL."""
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
```
The decorator automatically:

- Extracts the description from the function's docstring
- Generates JSON Schema from type hints
- Validates that all parameters have type annotations
- Registers the tool in the global Tool Registry
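The type-hint-to-schema step can be sketched roughly as follows; this is a simplified illustration (`schema_from_hints` is a hypothetical name, and the real decorator handles many more annotation types):

```python
import inspect

# Maps simple Python annotations to JSON Schema types
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def schema_from_hints(func) -> dict:
    """Build a minimal JSON Schema for a function's parameters from its type hints."""
    sig = inspect.signature(func)
    props, required = {}, []
    for name, param in sig.parameters.items():
        if param.annotation is inspect.Parameter.empty:
            raise TypeError(f"Parameter {name!r} is missing a type annotation")
        props[name] = {"type": _TYPE_MAP[param.annotation]}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def get_revenue(company: str, year: int) -> float:
    """Calculates yearly revenue for a given company."""
    return 100.50

print(schema_from_hints(get_revenue))
# {'type': 'object', 'properties': {'company': {'type': 'string'},
#  'year': {'type': 'integer'}}, 'required': ['company', 'year']}
```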
### CLI Commands for Stargate

```bash
# Register the agent with the Stargate Registry
agentic register [-v]

# Remove the agent from the Stargate Registry
agentic unpublish [-v]

# Check agent connectivity and registration status
agentic status [-v]
```
#### agentic register

Publishes your agent to the Stargate Registry:

1. Discovers all `@tool()` decorated functions
2. Generates an AgentCard with capabilities and endpoints
3. Authenticates with Sentinel
4. POSTs the AgentCard to the registry
#### agentic unpublish

Removes your agent from the Stargate Registry:

1. Authenticates with Sentinel
2. Sends a DELETE request to remove the agent
3. The agent is no longer discoverable in the network
#### agentic status

Checks agent health and connectivity:

- **Server**: Local server health check
- **Auth**: Sentinel token validity
- **Registry**: Agent listing status
### A2A Tools

Two built-in tools enable agent-to-agent communication.

#### SearchAgentsTool

Discover agents in the Stargate network:

```python
from app.tools.a2a import SearchAgentsTool

tool = SearchAgentsTool()
result = await tool.execute(
    query="data processing",
    capabilities=["transform", "validate"],
)
# Returns a list of AgentInfo with base_url and agent_key
```
#### RemoteAgentCallTool

Call remote agents via the A2A protocol:

```python
from app.tools.a2a import RemoteAgentCallTool

tool = RemoteAgentCallTool()
result = await tool.execute(
    target_url="https://other-agent.example.com",
    message="Process this data",
    context={"key": "value"},
)
# Returns an A2AResponse with result, trace_id, and agent_id
```
Both tools automatically:

- Attach Sentinel authentication headers
- Propagate `X-Correlation-Id` for distributed tracing
- Include `trace_id` in responses for observability
### Stargate Environment Variables

| Variable | Description | Required |
|---|---|---|
| `TRAYLINX_REGISTRY_URL` | Stargate Registry endpoint | Yes (for A2A) |
| `AGENT_ID` | Unique agent identifier | Yes (for A2A) |
| `AGENT_SECRET` | Agent authentication secret | Yes (for A2A) |
| `TRAYLINX_CLIENT_ID` | Sentinel client ID | Yes (for auth) |
| `TRAYLINX_CLIENT_SECRET` | Sentinel client secret | Yes (for auth) |
| `LANGFUSE_PUBLIC_KEY` | Langfuse public key for tracing | No |
| `LANGFUSE_SECRET_KEY` | Langfuse secret key | No |
| `LANGFUSE_HOST` | Langfuse host URL | No |
## LLM Resilience

The LLM client includes automatic retry with exponential backoff:

- Retries transient failures up to 3 times
- Uses exponential backoff with jitter to prevent a thundering herd
- Falls back gracefully when Redis is unavailable (STM uses a local dict)
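A minimal sketch of retry with exponential backoff and jitter; the client's real retry policy, exception types, and delays may differ:

```python
import random
import time

def retry_with_backoff(call, max_retries: int = 3, base_delay: float = 0.5):
    """Retry a callable on transient failure, doubling the delay each attempt
    and adding jitter so concurrent clients don't retry in lockstep."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_retries:
                raise
            # Exponential backoff (0.5s, 1s, 2s, ...) plus random jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

attempts = []
def flaky():
    """Fails twice, then succeeds -- stands in for a transient LLM outage."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # ok
```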
## Observability with Langfuse

When `LANGFUSE_PUBLIC_KEY` is set, the LLM client automatically enables Langfuse tracing:

- Logs all prompts and completions
- Tracks tool calls and their results
- Propagates correlation IDs across A2A calls
## License

MIT License - see LICENSE