Introduction
An AI agent is more than just an LLM. It's a system of interconnected components, each with a specific responsibility. Understanding these components and how they interact is essential for building effective agents.
Think of an Agent as an Organization: The LLM is the brain that makes decisions. Tools are the hands that do work. Memory is the mind that retains knowledge. The planner sets strategy. The orchestrator coordinates everything.
Component Map
Here's how the core components relate to each other:
📝component_architecture.txt
```text
┌────────────────────────────────────────────────────────────┐
│                        ORCHESTRATOR                        │
│                (Coordinates all components)                │
├────────────────────────────────────────────────────────────┤
│                                                            │
│  ┌─────────────────────────────────────────────────────┐   │
│  │                       PLANNER                       │   │
│  │             (Sets goals and strategies)             │   │
│  └───────────────────────┬─────────────────────────────┘   │
│                          │                                 │
│  ┌───────────────────────▼─────────────────────────────┐   │
│  │                         LLM                         │   │
│  │              (Reasoning and decisions)              │   │
│  └───────┬───────────────┬─────────────────┬───────────┘   │
│          │               │                 │               │
│  ┌───────▼───────┐ ┌─────▼─────┐   ┌───────▼───────┐       │
│  │     TOOLS     │ │  MEMORY   │   │    PROMPTS    │       │
│  │   (Actions)   │ │ (Context) │   │  (Templates)  │       │
│  └───────────────┘ └───────────┘   └───────────────┘       │
│                                                            │
└────────────────────────────────────────────────────────────┘
```

| Component | Responsibility | Analogy |
|---|---|---|
| LLM | Reasoning and decision-making | Brain |
| Tools | Interacting with the world | Hands |
| Memory | Storing and retrieving information | Mind |
| Planner | Breaking down goals into steps | Strategist |
| Orchestrator | Coordinating all components | Manager |
| Prompts | Guiding LLM behavior | Instructions |
The LLM (Brain)
The LLM is the reasoning engine at the heart of every agent. It:
- Interprets the current context and goal
- Decides which action to take next
- Generates parameters for tool calls
- Synthesizes information into responses
- Reflects on errors and adjusts strategy
🐍llm_component.py
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

import anthropic  # needed by the ClaudeLLM example below


@dataclass
class LLMResponse:
    """Structured response from an LLM."""
    text: str
    tool_calls: list[dict] | None = None
    usage: dict | None = None
    model: str | None = None


class LLM(ABC):
    """Abstract base class for LLM providers."""

    @abstractmethod
    def generate(
        self,
        prompt: str,
        system_prompt: str | None = None,
        max_tokens: int = 4096,
        temperature: float = 0.7,
    ) -> LLMResponse:
        """Generate a response to a prompt."""
        pass

    @abstractmethod
    def generate_with_tools(
        self,
        prompt: str,
        tools: list[dict],
        system_prompt: str | None = None,
    ) -> LLMResponse:
        """Generate a response that may include tool calls."""
        pass


# Example usage
class ClaudeLLM(LLM):
    def __init__(self, model: str = "claude-sonnet-4-20250514"):
        self.client = anthropic.Anthropic()
        self.model = model

    def generate(self, prompt: str, **kwargs) -> LLMResponse:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=kwargs.get("max_tokens", 4096),
            system=kwargs.get("system_prompt") or "",
            messages=[{"role": "user", "content": prompt}],
        )
        return LLMResponse(
            text=response.content[0].text,
            usage={
                "input": response.usage.input_tokens,
                "output": response.usage.output_tokens,
            },
            model=self.model,
        )

    def generate_with_tools(
        self,
        prompt: str,
        tools: list[dict],
        system_prompt: str | None = None,
    ) -> LLMResponse:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=4096,
            system=system_prompt or "",
            tools=tools,
            messages=[{"role": "user", "content": prompt}],
        )
        # Separate text blocks from tool_use blocks in the response
        text = "".join(b.text for b in response.content if b.type == "text")
        tool_calls = [
            {"name": b.name, "input": b.input}
            for b in response.content
            if b.type == "tool_use"
        ]
        return LLMResponse(
            text=text,
            tool_calls=tool_calls or None,
            model=self.model,
        )
```
LLM Selection Matters
Different LLMs have different strengths. Claude excels at long-context reasoning. GPT-4o is strong on multimodal tasks. o3 shines at complex problem-solving. Choose based on your agent's needs.
Tools (Hands)
Tools are the agent's interface to the world. They generally fall into four categories:
| Category | Examples | Purpose |
|---|---|---|
| Read | read_file, search_web, query_database | Gather information |
| Write | write_file, send_email, create_record | Create or modify |
| Execute | run_command, execute_code, call_api | Perform actions |
| Communicate | ask_user, send_notification | Interact with humans |
🐍tool_component.py
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ToolResult:
    """Result of a tool execution."""
    success: bool
    output: str
    error: str | None = None


class Tool(ABC):
    """Base class for all tools."""

    name: str
    description: str

    @abstractmethod
    def get_schema(self) -> dict:
        """Return the JSON schema for this tool."""
        pass

    @abstractmethod
    def execute(self, **kwargs) -> ToolResult:
        """Execute the tool with given parameters."""
        pass


class ToolRegistry:
    """Registry for managing available tools."""

    def __init__(self):
        self.tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        """Register a tool."""
        self.tools[tool.name] = tool

    def get(self, name: str) -> Tool | None:
        """Get a tool by name."""
        return self.tools.get(name)

    def get_schemas(self) -> list[dict]:
        """Get schemas for all tools (for LLM function calling)."""
        return [tool.get_schema() for tool in self.tools.values()]

    def execute(self, name: str, **kwargs) -> ToolResult:
        """Execute a tool by name."""
        tool = self.get(name)
        if not tool:
            return ToolResult(
                success=False,
                output="",
                error=f"Unknown tool: {name}",
            )
        return tool.execute(**kwargs)
```
Tool Design is Critical
Well-designed tools are the difference between an agent that works and one that struggles. Clear descriptions, precise schemas, and robust error handling are essential.
Memory (Mind)
Memory allows agents to maintain context beyond the immediate conversation:
| Type | Duration | Use Case |
|---|---|---|
| Working Memory | Single action | Current tool call context |
| Short-term | Current session | Recent actions and results |
| Long-term | Persistent | Learned patterns, user preferences |
| Episodic | Per task | Task-specific experiences |
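At recall time these tiers are usually blended by scoring each candidate on both relevance and recency. A sketch of one such scoring rule (the exponential half-life decay and the `alpha` blend weight are illustrative choices, not a standard):

```python
import math
from datetime import timedelta


def recall_score(similarity: float, age: timedelta,
                 half_life_hours: float = 24.0, alpha: float = 0.5) -> float:
    """Blend semantic similarity with exponential recency decay."""
    half_life_seconds = half_life_hours * 3600
    # recency halves every `half_life_hours`; 1.0 for a brand-new entry
    recency = math.exp(-age.total_seconds() / half_life_seconds * math.log(2))
    return alpha * similarity + (1 - alpha) * recency


# An entry seen 24 hours ago scores lower than a fresh one at equal similarity
fresh = recall_score(similarity=0.8, age=timedelta(hours=0))
old = recall_score(similarity=0.8, age=timedelta(hours=24))
print(round(fresh, 3), round(old, 3))  # 0.9 0.65
```

Tuning `alpha` toward 1.0 approximates pure long-term (relevance-only) recall; toward 0.0, pure short-term (recency-only) recall.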
🐍memory_component.py
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime


@dataclass
class MemoryEntry:
    """A single memory entry."""
    content: str
    timestamp: datetime
    metadata: dict | None = None
    embedding: list[float] | None = None


class Memory(ABC):
    """Abstract base class for memory systems."""

    @abstractmethod
    def add(self, content: str, metadata: dict | None = None) -> None:
        """Add a new memory."""
        pass

    @abstractmethod
    def recall(self, query: str, k: int = 5) -> list[MemoryEntry]:
        """Retrieve relevant memories."""
        pass

    @abstractmethod
    def clear(self) -> None:
        """Clear all memories."""
        pass


class HybridMemory(Memory):
    """Memory system with both recency and relevance."""

    def __init__(self, vector_store, max_short_term: int = 100):
        self.vector_store = vector_store
        self.short_term: list[MemoryEntry] = []
        self.max_short_term = max_short_term

    def add(self, content: str, metadata: dict | None = None) -> None:
        entry = MemoryEntry(
            content=content,
            timestamp=datetime.now(),
            metadata=metadata,
        )

        # Add to short-term
        self.short_term.append(entry)
        if len(self.short_term) > self.max_short_term:
            # Move oldest to long-term
            oldest = self.short_term.pop(0)
            self.vector_store.add(oldest)

    def recall(self, query: str, k: int = 5) -> list[MemoryEntry]:
        # Get recent entries
        recent = self.short_term[-5:]

        # Get semantically similar from long-term
        relevant = self.vector_store.search(query, k=k)

        # Combine and deduplicate
        return self._merge_memories(recent, relevant, k)

    def clear(self) -> None:
        self.short_term = []
        self.vector_store.clear()  # assumes the injected store exposes clear()

    def _merge_memories(
        self,
        recent: list[MemoryEntry],
        relevant: list[MemoryEntry],
        k: int,
    ) -> list[MemoryEntry]:
        # Deduplicate by content, preferring the most recent entries
        seen: set[str] = set()
        merged: list[MemoryEntry] = []
        for entry in recent + relevant:
            if entry.content not in seen:
                seen.add(entry.content)
                merged.append(entry)
        return merged[:k]
```
Planner (Strategy)
The planner breaks down complex goals into achievable steps:
🐍planner_component.py
```python
from dataclasses import dataclass

# Assumes the LLM interface from llm_component.py is importable.


@dataclass
class Step:
    """A single step in a plan."""
    description: str
    tool_hint: str | None = None
    dependencies: list[int] | None = None
    status: str = "pending"  # pending, in_progress, completed, failed


@dataclass
class Plan:
    """A plan for achieving a goal."""
    goal: str
    steps: list[Step]
    current_step: int = 0


class Planner:
    """Creates and manages execution plans."""

    def __init__(self, llm: LLM):
        self.llm = llm

    def create_plan(self, goal: str, context: str = "") -> Plan:
        """Create a plan for achieving a goal."""
        prompt = f"""
Create a step-by-step plan to achieve this goal:

Goal: {goal}

Context: {context}

For each step, specify:
1. A clear description of what needs to be done
2. What tool might be needed (optional)
3. Dependencies on other steps (optional)

Return the plan as a numbered list.
"""
        response = self.llm.generate(prompt)
        steps = self._parse_steps(response.text)
        return Plan(goal=goal, steps=steps)

    def replan(self, plan: Plan, feedback: str) -> Plan:
        """Create a new plan based on feedback."""
        prompt = f"""
The original plan failed. Create a new plan.

Original goal: {plan.goal}
Completed steps: {[s.description for s in plan.steps if s.status == 'completed']}
Failure feedback: {feedback}

Create a revised plan that accounts for what we learned.
"""
        response = self.llm.generate(prompt)
        steps = self._parse_steps(response.text)
        return Plan(goal=plan.goal, steps=steps)

    def get_next_step(self, plan: Plan) -> Step | None:
        """Get the next step to execute."""
        for step in plan.steps:
            if step.status == "pending":
                # Check dependencies
                if step.dependencies:
                    deps_complete = all(
                        plan.steps[d].status == "completed"
                        for d in step.dependencies
                    )
                    if not deps_complete:
                        continue
                return step
        return None

    def _parse_steps(self, text: str) -> list[Step]:
        """Minimal parser: each numbered line becomes a step."""
        steps = []
        for line in text.splitlines():
            line = line.strip()
            if line and line[0].isdigit():
                steps.append(Step(description=line.lstrip("0123456789.) ")))
        return steps
```
Orchestrator (Coordinator)
The orchestrator ties everything together, managing the agent loop and coordinating components:
🐍orchestrator_component.py
```python
# Assumes AgentConfig, AgentState, AgentResult, and Action are defined
# elsewhere, along with the LLM, ToolRegistry, Memory, and Planner
# classes from the previous sections.


class Orchestrator:
    """Coordinates all agent components."""

    def __init__(
        self,
        llm: LLM,
        tools: ToolRegistry,
        memory: Memory,
        planner: Planner,
        config: AgentConfig,
    ):
        self.llm = llm
        self.tools = tools
        self.memory = memory
        self.planner = planner
        self.config = config

    def run(self, goal: str) -> AgentResult:
        """Execute the agent to achieve a goal."""

        # Create initial plan
        plan = self.planner.create_plan(goal)

        # Initialize state
        state = AgentState(goal=goal, plan=plan)

        for _ in range(self.config.max_iterations):
            # Get current step
            step = self.planner.get_next_step(plan)
            if not step:
                return self._finish_success(state)

            # Build context
            context = self._build_context(state, step)

            # Reason about action
            action = self._decide_action(context)

            # Execute action
            result = self.tools.execute(action.name, **action.params)

            # Update memory
            self.memory.add(
                f"Action: {action.name}, Result: {result.output}"
            )

            # Update state
            state = self._update_state(state, step, result)

            # Check for replanning
            if not result.success:
                plan = self.planner.replan(plan, result.error)
                state.plan = plan

        return self._finish_timeout(state)

    def _build_context(self, state: AgentState, step: Step) -> dict:
        """Build context for decision-making."""
        return {
            "goal": state.goal,
            "current_step": step.description,
            "progress": state.progress_summary(),
            "memories": self.memory.recall(step.description),
            "tools": self.tools.get_schemas(),
        }

    def _decide_action(self, context: dict) -> Action:
        """Use LLM to decide the next action."""
        prompt = self._format_decision_prompt(context)
        response = self.llm.generate_with_tools(
            prompt=prompt,
            tools=context["tools"],
        )
        return self._parse_action(response)
```
Summary
The core components of an agent system are:
- LLM: The reasoning engine that makes decisions
- Tools: The interface to the world (read, write, execute)
- Memory: Context storage and retrieval (short and long-term)
- Planner: Strategy for breaking down complex goals
- Orchestrator: Coordinator that runs the agent loop
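To see the division of labor end to end, here is a toy wiring in which every component is stubbed down to one line of logic (all names are hypothetical; a real agent would implement the interfaces from the sections above):

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    description: str
    status: str = "pending"


class EchoTool:
    """Stand-in tool: 'acts' by upper-casing its input."""
    name = "echo"

    def execute(self, text: str) -> str:
        return text.upper()


@dataclass
class MiniAgent:
    """Every component collapsed to one line of logic."""
    tools: dict = field(default_factory=lambda: {"echo": EchoTool()})
    memory: list = field(default_factory=list)

    def run(self, goal: str) -> str:
        plan = [Step(description=goal)]    # planner: one trivial step
        for step in plan:                  # orchestrator: the agent loop
            # "LLM" decision collapsed to a fixed tool choice
            result = self.tools["echo"].execute(text=step.description)
            self.memory.append(result)     # memory: record the outcome
            step.status = "completed"
        return self.memory[-1]


agent = MiniAgent()
print(agent.run("summarize the report"))  # SUMMARIZE THE REPORT
```

The structure is the point: plan, loop, decide, act, remember. Swapping each stub for the real component turns this toy into the orchestrator shown above.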
Building Block by Block: In the following sections, we'll dive deep into each component, starting with the LLM as reasoning engine.