Level 4: Agent Teams that can reason and collaborate
Agents are the atomic unit of work, and they work best with a narrow scope and a small number of tools. When the number of tools grows beyond what the model can handle, or you need to cover multiple concepts, use a team of agents to spread the load. Agno provides an industry-leading multi-agent architecture that lets you build Agent Teams that can reason, collaborate and coordinate. In this example, we'll build a team of 2 agents to research and compare recent developments in renewable energy, reasoning step by step.
level_4_team.py
```python
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.reasoning import ReasoningTools

web_agent = Agent(
    name="Web Search Agent",
    role="Handle web search requests and general research",
    model=OpenAIChat(id="gpt-4.1"),
    tools=[DuckDuckGoTools()],
    instructions="Always include sources",
    add_datetime_to_instructions=True,
)

news_agent = Agent(
    name="News Agent",
    role="Handle news requests and current events analysis",
    model=OpenAIChat(id="gpt-4.1"),
    tools=[DuckDuckGoTools(search=True, news=True)],
    instructions=[
        "Use tables to display news information and findings.",
        "Clearly state the source and publication date.",
        "Focus on delivering current and relevant news insights.",
    ],
    add_datetime_to_instructions=True,
)

reasoning_research_team = Team(
    name="Reasoning Research Team",
    mode="coordinate",
    model=Claude(id="claude-sonnet-4-20250514"),
    members=[web_agent, news_agent],
    tools=[ReasoningTools(add_instructions=True)],
    instructions=[
        "Collaborate to provide comprehensive research and news insights",
        "Consider both current events and trending topics",
        "Use tables and charts to display data clearly and professionally",
        "Present findings in a structured, easy-to-follow format",
        "Only output the final consolidated analysis, not individual agent responses",
    ],
    markdown=True,
    show_members_responses=True,
    enable_agentic_context=True,
    add_datetime_to_instructions=True,
    success_criteria="The team has provided a complete research analysis with data, visualizations, trend assessment, and actionable insights supported by current information and reliable sources.",
)

if __name__ == "__main__":
    reasoning_research_team.print_response(
        """Research and compare recent developments in renewable energy:
        1. Get latest news about renewable energy innovations
        2. Analyze recent developments in the renewable sector
        3. Compare different renewable energy technologies
        4. Recommend future trends to watch""",
        stream=True,
        show_full_reasoning=True,
        stream_intermediate_steps=True,
    )
```
Level 5: Agentic Workflows with state and determinism
Workflows are deterministic, stateful, multi-agent programs built for production applications. We write the workflow in pure Python, giving us complete control over the execution flow. Having built hundreds of agentic systems, we've found that no framework or step-based approach gives you the flexibility and reliability of pure Python: want loops, use while/for; want conditionals, use if/else; want exception handling, use try/except.
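To see what "plain control flow" buys you, here is a minimal, framework-free sketch of the idea: ordinary Python drives retries with a for loop and try/except. `flaky_step` is a hypothetical stand-in for any agent or tool call that can fail transiently; in a real workflow it would be an `agent.run(...)` call.

```python
class StepFailed(Exception):
    """Raised by a step that hit a transient error."""


# Module-level counter so the hypothetical step can simulate
# failing twice before succeeding.
attempts = {"count": 0}


def flaky_step(query: str) -> str:
    # Hypothetical stand-in for an agent or tool call.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise StepFailed("transient error")
    return f"result for: {query}"


def run_workflow(query: str, max_retries: int = 3) -> str:
    # Retry logic is just a for loop and try/except --
    # no framework-specific retry configuration needed.
    for attempt in range(1, max_retries + 1):
        try:
            return flaky_step(query)
        except StepFailed:
            if attempt == max_retries:
                raise
    raise RuntimeError("unreachable")


print(run_workflow("Tell me a joke."))  # prints: result for: Tell me a joke.
```

Because the retry policy is ordinary code, you can change it (backoff, fallbacks to a cheaper model, partial results) without learning any DSL.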
Because the workflow logic is a Python function, AI code editors can vibe code workflows for you. Add https://docs-v1.agno.com as a document source and vibe away.
Here's a simple workflow that caches previous outputs; you control every step: what gets cached, what gets streamed, what gets logged and what gets returned.
level_5_workflow.py
```python
from typing import Iterator

from agno.agent import Agent, RunResponse
from agno.models.openai import OpenAIChat
from agno.utils.log import logger
from agno.utils.pprint import pprint_run_response
from agno.workflow import Workflow


class CacheWorkflow(Workflow):
    # Add agents or teams as attributes on the workflow
    agent = Agent(model=OpenAIChat(id="gpt-4o-mini"))

    # Write the logic in the `run()` method
    def run(self, message: str) -> Iterator[RunResponse]:
        logger.info(f"Checking cache for '{message}'")
        # Check if the output is already cached
        if self.session_state.get(message):
            logger.info(f"Cache hit for '{message}'")
            yield RunResponse(
                run_id=self.run_id, content=self.session_state.get(message)
            )
            return

        logger.info(f"Cache miss for '{message}'")
        # Run the agent and yield the response
        yield from self.agent.run(message, stream=True)

        # Cache the output after the response is yielded
        self.session_state[message] = self.agent.run_response.content


if __name__ == "__main__":
    workflow = CacheWorkflow()
    # Run the workflow (this takes ~1s)
    response: Iterator[RunResponse] = workflow.run(message="Tell me a joke.")
    # Print the response
    pprint_run_response(response, markdown=True, show_time=True)

    # Run the workflow again (this is immediate because of caching)
    response = workflow.run(message="Tell me a joke.")
    # Print the response
    pprint_run_response(response, markdown=True, show_time=True)
```