Introduction
A well-organized project structure is crucial for building maintainable AI agents. This section presents a project layout that scales from simple single-agent experiments to complex multi-agent systems.
Design Principle: Separate concerns early. Keep your agent logic, tools, memory systems, and configuration in distinct modules. This makes testing, debugging, and extending your agents much easier.
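For example, keeping configuration in its own module means API keys and model names live in one place instead of being scattered through agent logic. The sketch below is illustrative only — a guess at what a settings module in this kind of layout might start as (the names, defaults, and environment variables are assumptions, not part of the framework):

```python
"""Hypothetical sketch of a config/settings.py module -- illustrative only."""

import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Central place for environment-driven configuration."""
    anthropic_api_key: str = ""
    default_model: str = "claude-sonnet-4-20250514"
    max_iterations: int = 50


def load_settings() -> Settings:
    """Read settings from the environment (populated via .env in practice)."""
    return Settings(
        anthropic_api_key=os.environ.get("ANTHROPIC_API_KEY", ""),
        default_model=os.environ.get("AGENT_MODEL", "claude-sonnet-4-20250514"),
        max_iterations=int(os.environ.get("AGENT_MAX_ITERATIONS", "50")),
    )
```

Because every module reads from one settings object, swapping models or rotating keys never requires touching agent, tool, or memory code.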
Project Layout
Recommended Structure
📝project_structure.txt
agentic-ai-project/
├── .env                    # Environment variables (API keys)
├── .env.example            # Template for .env
├── .gitignore
├── README.md
├── pyproject.toml          # Project metadata and dependencies
├── requirements.txt        # Pip dependencies (alternative)
│
├── src/
│   └── agent/
│       ├── __init__.py
│       ├── core/           # Core agent components
│       │   ├── __init__.py
│       │   ├── agent.py    # Main agent class
│       │   ├── loop.py     # Agent loop implementation
│       │   └── state.py    # Agent state management
│       │
│       ├── llm/            # LLM providers
│       │   ├── __init__.py
│       │   ├── base.py     # Abstract LLM interface
│       │   ├── claude.py   # Claude implementation
│       │   ├── openai.py   # OpenAI implementation
│       │   └── gemini.py   # Gemini implementation
│       │
│       ├── tools/          # Agent tools
│       │   ├── __init__.py
│       │   ├── base.py     # Tool base class
│       │   ├── file_ops.py # File operations
│       │   ├── web.py      # Web search/fetch
│       │   ├── code.py     # Code execution
│       │   └── registry.py # Tool registry
│       │
│       ├── memory/         # Memory systems
│       │   ├── __init__.py
│       │   ├── base.py     # Memory interface
│       │   ├── short_term.py
│       │   ├── long_term.py
│       │   └── vector.py   # Vector store integration
│       │
│       ├── planning/       # Planning systems
│       │   ├── __init__.py
│       │   ├── planner.py
│       │   └── strategies.py
│       │
│       └── config/         # Configuration
│           ├── __init__.py
│           ├── settings.py
│           └── prompts.py  # System prompts
│
├── tests/                  # Test suite
│   ├── __init__.py
│   ├── conftest.py         # Pytest fixtures
│   ├── test_agent.py
│   ├── test_tools.py
│   └── test_memory.py
│
├── examples/               # Example scripts
│   ├── simple_agent.py
│   ├── coding_agent.py
│   └── research_agent.py
│
├── scripts/                # Utility scripts
│   ├── setup.sh
│   └── run_agent.py
│
└── docs/                   # Documentation
    ├── architecture.md
    └── tools.md

Start Simple, Grow Organically
You don't need all these directories from day one. Start with src/agent/core/, tools/, and llm/. Add memory, planning, and other modules as your agent evolves.

Core Modules Explained
1. Core Module (agent/core/)
The heart of your agent system. Contains the main agent class and loop logic.
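Neither loop.py nor state.py is listed in this chapter, so here is a hedged sketch of what they might contain — a guess at the eventual interface, collapsed into one snippet and simplified to a single LLM call per iteration (a real loop would also parse and dispatch tool calls):

```python
"""Hypothetical sketch of core/state.py and core/loop.py combined."""

from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Mutable state carried through one run of the agent loop."""
    goal: str
    history: list = field(default_factory=list)  # (role, content) pairs
    iterations: int = 0
    done: bool = False
    result: str = ""


class AgentLoop:
    """Drives the agent until the task is done or the iteration budget runs out."""

    def __init__(self, agent):
        self.agent = agent  # back-reference to the owning Agent

    def execute(self, state: AgentState) -> str:
        while not state.done and state.iterations < self.agent.config.max_iterations:
            state.iterations += 1
            response = self.agent.llm.generate(state.goal)
            state.history.append(("assistant", response))
            # A real loop would check for tool calls here and feed their
            # results back to the LLM; this sketch stops after one response.
            state.done = True
            state.result = response
        return state.result
```

The key design choice the sketch illustrates: state lives in a plain dataclass, separate from the loop that mutates it, which keeps runs easy to inspect and test.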
🐍src/agent/core/agent.py
"""Main Agent class - the entry point for all agent operations."""

from dataclasses import dataclass
from typing import Optional

from agent.llm.base import LLM
from agent.tools.registry import ToolRegistry
from agent.memory.base import Memory
from agent.core.loop import AgentLoop
from agent.core.state import AgentState


@dataclass
class AgentConfig:
    """Configuration for an agent instance."""
    name: str = "Agent"
    max_iterations: int = 50
    verbose: bool = True


class Agent:
    """
    The main Agent class.

    Orchestrates the interaction between LLM, tools, and memory
    to accomplish user-specified goals.
    """

    def __init__(
        self,
        llm: LLM,
        tools: ToolRegistry,
        memory: Optional[Memory] = None,
        config: Optional[AgentConfig] = None,
    ):
        self.llm = llm
        self.tools = tools
        self.memory = memory
        self.config = config or AgentConfig()
        self.loop = AgentLoop(self)

    def run(self, goal: str) -> str:
        """Execute the agent loop to accomplish a goal."""
        state = AgentState(goal=goal)
        return self.loop.execute(state)

    def chat(self, message: str) -> str:
        """Single-turn chat interaction."""
        return self.llm.generate(message)

2. LLM Module (agent/llm/)
Abstracts away the differences between LLM providers:
🐍src/agent/llm/base.py
"""Base LLM interface that all providers implement."""

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMConfig:
    """Configuration for LLM instances."""
    model: str
    max_tokens: int = 4096
    temperature: float = 0.7
    system_prompt: Optional[str] = None


class LLM(ABC):
    """Abstract base class for LLM providers."""

    def __init__(self, config: LLMConfig):
        self.config = config

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Generate a response for the given prompt."""
        pass

    @abstractmethod
    def generate_with_tools(
        self,
        prompt: str,
        tools: list[dict],
    ) -> dict:
        """Generate a response that may include tool calls."""
        pass

🐍src/agent/llm/claude.py
"""Claude (Anthropic) LLM implementation."""

import anthropic

from agent.llm.base import LLM, LLMConfig


class ClaudeLLM(LLM):
    """Claude API wrapper."""

    def __init__(self, config: LLMConfig):
        super().__init__(config)
        self.client = anthropic.Anthropic()

    def generate(self, prompt: str) -> str:
        response = self.client.messages.create(
            model=self.config.model,
            max_tokens=self.config.max_tokens,
            system=self.config.system_prompt or "",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    def generate_with_tools(
        self,
        prompt: str,
        tools: list[dict],
    ) -> dict:
        response = self.client.messages.create(
            model=self.config.model,
            max_tokens=self.config.max_tokens,
            system=self.config.system_prompt or "",
            messages=[{"role": "user", "content": prompt}],
            tools=tools,
        )
        return self._parse_response(response)

    def _parse_response(self, response) -> dict:
        """Parse response into structured format."""
        result = {"text": "", "tool_calls": []}

        for block in response.content:
            if block.type == "text":
                result["text"] += block.text  # concatenate any text blocks
            elif block.type == "tool_use":
                result["tool_calls"].append({
                    "id": block.id,
                    "name": block.name,
                    "input": block.input,
                })

        return result

3. Tools Module (agent/tools/)
Tools are the agent's interface to the world:
🐍src/agent/tools/base.py
"""Base tool interface."""

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any


@dataclass
class ToolResult:
    """Result of a tool execution."""
    success: bool
    output: str
    error: str | None = None


class Tool(ABC):
    """Base class for all tools."""

    name: str
    description: str

    @abstractmethod
    def get_schema(self) -> dict:
        """Return the tool schema for LLM function calling."""
        pass

    @abstractmethod
    def execute(self, **kwargs: Any) -> ToolResult:
        """Execute the tool with given parameters."""
        pass

🐍src/agent/tools/file_ops.py
"""File operation tools."""

from pathlib import Path

from agent.tools.base import Tool, ToolResult


class ReadFileTool(Tool):
    """Read contents of a file."""

    name = "read_file"
    description = "Read the contents of a file at the given path"

    def get_schema(self) -> dict:
        return {
            "name": self.name,
            "description": self.description,
            "input_schema": {
                "type": "object",
                "properties": {
                    "file_path": {
                        "type": "string",
                        "description": "Path to the file to read",
                    },
                },
                "required": ["file_path"],
            },
        }

    def execute(self, file_path: str) -> ToolResult:
        try:
            path = Path(file_path)
            if not path.exists():
                return ToolResult(
                    success=False,
                    output="",
                    error=f"File not found: {file_path}",
                )

            content = path.read_text()
            return ToolResult(success=True, output=content)

        except Exception as e:
            return ToolResult(
                success=False,
                output="",
                error=str(e),
            )

4. Memory Module (agent/memory/)
🐍src/agent/memory/base.py
"""Memory interface for agents."""

from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime


@dataclass
class MemoryEntry:
    """A single memory entry."""
    content: str
    timestamp: datetime
    metadata: dict | None = None


class Memory(ABC):
    """Abstract base class for memory systems."""

    @abstractmethod
    def add(self, content: str, metadata: dict | None = None) -> None:
        """Add a new memory entry."""
        pass

    @abstractmethod
    def recall(self, query: str, k: int = 5) -> list[MemoryEntry]:
        """Retrieve relevant memories for a query."""
        pass

    @abstractmethod
    def clear(self) -> None:
        """Clear all memories."""
        pass

Configuration Files
pyproject.toml
📄pyproject.toml
[project]
name = "agentic-ai"
version = "0.1.0"
description = "AI Agent Framework"
requires-python = ">=3.11"
dependencies = [
    "anthropic>=0.40.0",
    "openai>=1.50.0",
    "google-generativeai>=0.8.0",
    "httpx>=0.27.0",
    "pydantic>=2.0.0",
    "python-dotenv>=1.0.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
    "pytest-asyncio>=0.23.0",
    "black>=24.0.0",
    "mypy>=1.0.0",
    "ruff>=0.4.0",
]

[tool.black]
line-length = 88
target-version = ["py311"]

[tool.ruff]
line-length = 88

[tool.ruff.lint]
select = ["E", "F", "I", "N", "W"]

[tool.mypy]
python_version = "3.11"
strict = true

.gitignore
📝.gitignore
# Environment
.env
.venv/
venv/

# Python
__pycache__/
*.py[cod]
*.egg-info/
dist/
build/

# IDE
.idea/
.vscode/
*.swp

# Testing
.pytest_cache/
.coverage
htmlcov/

# Misc
.DS_Store
*.log

Starter Template
Here's a minimal working example that ties everything together:
🐍examples/simple_agent.py
"""A minimal agent example to verify your setup."""

from dotenv import load_dotenv

from agent.core.agent import Agent, AgentConfig
from agent.llm.base import LLMConfig
from agent.llm.claude import ClaudeLLM
from agent.tools.registry import ToolRegistry
from agent.tools.file_ops import ReadFileTool

# Load environment variables
load_dotenv()


def main():
    # 1. Configure the LLM
    llm_config = LLMConfig(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        system_prompt="You are a helpful coding assistant.",
    )
    llm = ClaudeLLM(llm_config)

    # 2. Set up tools
    tools = ToolRegistry()
    tools.register(ReadFileTool())

    # 3. Create the agent
    agent = Agent(
        llm=llm,
        tools=tools,
        config=AgentConfig(
            name="SimpleAgent",
            max_iterations=10,
            verbose=True,
        ),
    )

    # 4. Run a simple task
    result = agent.chat("What can you help me with today?")
    print(result)


if __name__ == "__main__":
    main()

⚡run_example.sh
# Make sure you're in your virtual environment
source venv/bin/activate

# Run the example
python examples/simple_agent.py

Building Block by Block
We'll implement each module step by step throughout this book. By the end, you'll have a complete, production-ready agent framework.
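One module the examples above already rely on but this chapter doesn't list is tools/registry.py. As a preview, here is a minimal sketch of what a ToolRegistry might look like — an assumption about the eventual interface (register, get_schemas, execute), not the final implementation:

```python
"""Hypothetical sketch of tools/registry.py -- interface is assumed."""


class ToolRegistry:
    """Keeps tools indexed by name and exposes their schemas to the LLM."""

    def __init__(self) -> None:
        self._tools: dict = {}

    def register(self, tool) -> None:
        """Store a tool under its declared name (last registration wins)."""
        self._tools[tool.name] = tool

    def get_schemas(self) -> list[dict]:
        """Collect every tool's schema to pass to the LLM provider."""
        return [tool.get_schema() for tool in self._tools.values()]

    def execute(self, name: str, **kwargs):
        """Dispatch a call to the named tool; raises KeyError if unknown."""
        return self._tools[name].execute(**kwargs)
```

A dict keyed by tool name keeps dispatch O(1) and makes "which tools does this agent have?" a one-line question.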
Summary
You now have a blueprint for organizing your agent projects:
- Core - Agent class, loop, and state management
- LLM - Provider abstractions for Claude, OpenAI, Gemini
- Tools - Extensible tool system with registry
- Memory - Short and long-term memory interfaces
- Config - Centralized settings and prompts
Ready to Build: With your environment set up and project structure in place, you're ready to dive into the agentic AI revolution. In the next chapter, we'll explore what makes AI agents different from chatbots and why this distinction matters.