Chapter 7

Why Agents Need Tools

Tool Use and Function Calling

Introduction

LLMs are remarkably capable at reasoning, but they exist in a closed world of text. Tools are what connect agents to reality, transforming them from sophisticated chatbots into systems that can actually do things.

The Core Insight: An LLM without tools is like a brilliant mind trapped in a room with no windows or doors. It can think deeply, but it can't see or touch the world. Tools are the windows and doors.

Limitations Without Tools

Pure LLMs, no matter how advanced, face fundamental limitations:

Knowledge Limitations

📝 knowledge_limitations.txt

```text
USER: What is the current stock price of Apple?

LLM WITHOUT TOOLS:
"Based on my training data from [cutoff date], Apple's stock
was around $X. However, I cannot provide the current price
as I don't have access to real-time market data."

→ The LLM can only reference its training data
→ No way to access current information
→ Answer is always potentially outdated
```

Action Limitations

📝 action_limitations.txt

```text
USER: Send an email to john@example.com about the meeting.

LLM WITHOUT TOOLS:
"I can help you draft the email, but I cannot actually send it.
Here's a draft you could use:

Subject: Meeting Reminder
Dear John,
..."

→ Can write the email content
→ Cannot actually send it
→ User must copy and send manually
```

Computation Limitations

📝 computation_limitations.txt

```text
USER: Calculate the sum of all prime numbers under 10,000.

LLM WITHOUT TOOLS:
"Let me try to calculate this...
2 + 3 + 5 + 7 + 11 + 13...

[May make errors in mental arithmetic]
[Takes many tokens to reason through]
[Could hallucinate the answer]

I believe the sum is approximately 5,736,396."
→ (Actual answer: 5,736,396 - but confidence is low)

LLM WITH CALCULATOR TOOL:
Action: calculate(sum_primes_below=10000)
Result: 5736396

→ Guaranteed correct
→ Fast execution
→ Reliable result
```
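The calculator tool's answer is easy to verify: a short Sieve of Eratosthenes reproduces the result deterministically, which is exactly the kind of computation an LLM should delegate rather than attempt token by token.

```python
def sum_primes_below(n: int) -> int:
    """Sum all primes below n using a Sieve of Eratosthenes."""
    is_prime = [True] * n
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p as composite
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return sum(i for i, prime in enumerate(is_prime) if prime)

print(sum_primes_below(10_000))  # → 5736396
```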
| Limitation | Example | Impact |
| --- | --- | --- |
| Knowledge cutoff | Current news, prices, events | Outdated or incorrect info |
| No persistence | Can't save or remember | Stateless conversations |
| No external access | Can't fetch web pages | Limited to training data |
| No execution | Can't run code | Theory only, no verification |
| No side effects | Can't send emails | Purely advisory role |

What Tools Enable

Tools bridge the gap between reasoning and reality:

Real-Time Information

🐍 realtime_example.py

```python
# Without tools: Guess based on training data
# With tools: Get actual current information

async def answer_current_question(question: str) -> str:
    """Answer questions requiring current information."""

    # Agent reasons about what information it needs
    thought = "This question requires current data. I'll search."

    # Agent takes action
    search_results = await tools.web_search(
        query=extract_search_query(question)
    )

    # Agent synthesizes with real information
    return llm.generate(f"""
Based on these current search results:
{search_results}

Answer the question: {question}
""")
```

Verified Computation

🐍 verified_computation.py

```python
# Without tools: Error-prone mental math
# With tools: Guaranteed correct results

async def solve_math_problem(problem: str) -> str:
    """Solve math problems with verified computation."""

    # Agent reasons about the approach
    thought = "I need to solve this step by step."

    # Generate solution code
    code = llm.generate(f"Write Python to solve: {problem}")

    # Execute for guaranteed correctness
    result = await tools.execute_python(code)

    return f"Solution: {result}"
```

Real-World Actions

🐍 real_actions.py

```python
# Without tools: Provide instructions
# With tools: Actually perform the task

async def handle_email_request(request: str) -> str:
    """Actually send emails, not just draft them."""

    # Agent understands intent
    email_details = extract_email_intent(request)

    # Actually send the email
    result = await tools.send_email(
        to=email_details.recipient,
        subject=email_details.subject,
        body=email_details.body
    )

    return f"Email sent successfully to {email_details.recipient}"
```

Persistent Memory

🐍 persistent_memory.py

```python
# Without tools: Forget after context window
# With tools: Remember across sessions

async def remember_user_preferences(user_id: str, info: dict):
    """Store information for later retrieval."""

    # Save to persistent storage
    await tools.database.upsert(
        collection="user_preferences",
        key=user_id,
        data=info
    )

async def recall_user_context(user_id: str) -> dict:
    """Retrieve saved information."""

    return await tools.database.get(
        collection="user_preferences",
        key=user_id
    )
```

Categories of Tools

Tools generally fall into several categories:

Information Retrieval Tools

| Tool | Purpose | Example Use |
| --- | --- | --- |
| web_search | Search the internet | Find current events, facts |
| read_file | Read local files | Access code, documents |
| query_database | Query databases | Get structured data |
| fetch_url | Fetch web pages | Read documentation, articles |
| api_call | Call external APIs | Get weather, stocks, etc. |

Action Tools

| Tool | Purpose | Example Use |
| --- | --- | --- |
| write_file | Write to files | Save code, documents |
| send_email | Send emails | Notify users |
| execute_code | Run code | Verify solutions |
| create_issue | Create tickets | Track bugs, tasks |
| deploy | Deploy code | Ship features |

Computation Tools

| Tool | Purpose | Example Use |
| --- | --- | --- |
| calculator | Math operations | Complex calculations |
| code_interpreter | Run Python | Data analysis |
| regex_engine | Pattern matching | Text extraction |
| json_parser | Parse JSON | API response handling |

Communication Tools

| Tool | Purpose | Example Use |
| --- | --- | --- |
| ask_user | Get user input | Clarify requests |
| notify | Send notifications | Alert users |
| delegate | Call other agents | Specialized tasks |
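Whatever the category, agents typically keep their tools in a registry keyed by name so that a tool call can be dispatched to the right function. A minimal sketch, using two of the tool names from the tables above (the registry and decorator here are illustrative, not any particular framework's API):

```python
import json
from typing import Any, Callable

# Registry mapping tool names to plain Python callables; real tools
# would wrap APIs, files, and other I/O.
TOOL_REGISTRY: dict[str, Callable] = {}

def register(name: str):
    """Decorator that adds a function to the registry under a name."""
    def decorator(fn: Callable) -> Callable:
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@register("calculator")
def calculator(expression: str) -> Any:
    # Evaluate arithmetic with builtins disabled (demo only; a real
    # calculator tool would use a proper expression parser)
    return eval(expression, {"__builtins__": {}})

@register("json_parser")
def json_parser(text: str) -> Any:
    return json.loads(text)

def call_tool(name: str, **kwargs) -> Any:
    """Dispatch a named tool call to its registered function."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"Unknown tool: {name}")
    return TOOL_REGISTRY[name](**kwargs)

print(call_tool("calculator", expression="6 * 7"))    # → 42
print(call_tool("json_parser", text='{"ok": true}'))  # → {'ok': True}
```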

How LLMs Use Tools

Modern LLMs have been specifically trained to use tools through function calling:

🐍 tool_flow.py

```python
# The basic flow of LLM tool usage

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A tool the LLM can use."""
    name: str
    description: str
    parameters: dict
    function: Callable

@dataclass
class ToolCall:
    """LLM's request to use a tool."""
    name: str
    arguments: dict

def find_tool(name: str, tools: list[Tool]) -> Tool:
    """Look up a tool by name."""
    return next(t for t in tools if t.name == name)

def agent_with_tools(prompt: str, tools: list[Tool]) -> str:
    """Show how agents use tools."""

    # Step 1: LLM receives prompt and tool descriptions
    response = llm.generate(
        messages=[{"role": "user", "content": prompt}],
        tools=[{
            "name": t.name,
            "description": t.description,
            "parameters": t.parameters
        } for t in tools]
    )

    # Step 2: LLM decides whether to use a tool
    if response.tool_calls:
        results = []
        for call in response.tool_calls:
            # Step 3: Execute the tool
            tool = find_tool(call.name, tools)
            result = tool.function(**call.arguments)
            results.append({
                "tool": call.name,
                "result": result
            })

        # Step 4: LLM processes tool results
        final_response = llm.generate(
            messages=[
                {"role": "user", "content": prompt},
                {"role": "assistant", "tool_calls": response.tool_calls},
                {"role": "tool", "content": str(results)}
            ],
            tools=[...]
        )
        return final_response.content

    return response.content

```
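Step 3 of the flow, executing a tool call, can be exercised without any LLM at all by simulating the call the model would emit. A self-contained sketch (the `calculate` tool and its `eval`-based implementation are purely illustrative):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    """A tool the LLM can use."""
    name: str
    description: str
    parameters: dict
    function: Callable

@dataclass
class ToolCall:
    """LLM's request to use a tool."""
    name: str
    arguments: dict

def execute_tool_call(call: ToolCall, tools: list[Tool]) -> Any:
    """Find the named tool and invoke it with the LLM's arguments."""
    tool = next(t for t in tools if t.name == call.name)
    return tool.function(**call.arguments)

calculator = Tool(
    name="calculate",
    description="Evaluate a basic arithmetic expression.",
    parameters={"expression": {"type": "string"}},
    # Demo implementation with builtins disabled; not production-safe
    function=lambda expression: eval(expression, {"__builtins__": {}}),
)

# Simulate the tool call an LLM might emit for "What is 2 + 3 * 4?"
call = ToolCall(name="calculate", arguments={"expression": "2 + 3 * 4"})
print(execute_tool_call(call, [calculator]))  # → 14
```

The key design point: the LLM only ever produces a *name* and *arguments*; the surrounding runtime owns the actual execution, which is what makes tool use controllable.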

The Tool Selection Process

📝 tool_selection.txt

```text
USER: "What's the weather in Tokyo?"

LLM REASONING PROCESS:
1. Parse request: User wants current weather for Tokyo
2. Check capabilities: Do I have a weather tool?
3. Select tool: Yes → get_weather(location: str)
4. Generate call: get_weather(location="Tokyo, Japan")
5. Wait for result: {"temp": 22, "condition": "Partly cloudy"}
6. Formulate response: "It's 22°C and partly cloudy in Tokyo."

The LLM learns through training:
- When tools are needed vs. internal knowledge
- Which tool to select for each task
- How to format arguments correctly
- How to interpret results
```
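For selection to work, the model needs a precise description of each tool. Providers generally describe parameters in a JSON Schema style; a sketch of what the `get_weather` tool from the trace above might look like, with a minimal check of the arguments the model generates (the spec shape is typical, but field names vary by provider):

```python
# Hypothetical get_weather description in the JSON Schema style used
# by most function-calling APIs (exact wire format varies by provider)
get_weather_spec = {
    "name": "get_weather",
    "description": "Get the current weather for a location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and country, e.g. 'Tokyo, Japan'",
            },
        },
        "required": ["location"],
    },
}

def validate_arguments(spec: dict, arguments: dict) -> list[str]:
    """Minimal check that all required parameters are present."""
    required = spec["parameters"].get("required", [])
    return [
        f"missing required parameter: {p}"
        for p in required
        if p not in arguments
    ]

print(validate_arguments(get_weather_spec, {"location": "Tokyo, Japan"}))  # → []
print(validate_arguments(get_weather_spec, {}))  # → ['missing required parameter: location']
```

Validating arguments before execution matters because the model's call is just generated text: a malformed call should produce an error message the model can correct, not a crashed tool.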

Tool Selection is Learned

Modern LLMs learn tool selection during training. They see millions of examples of tools being used correctly and learn to match requests to appropriate tools with proper arguments.

Summary

Why agents need tools:

  1. Knowledge gaps: LLMs have knowledge cutoffs and can't access real-time data
  2. Action limitations: Without tools, LLMs can only advise, not act
  3. Computation reliability: Tools provide guaranteed correct calculations
  4. Persistence: Tools enable memory beyond the conversation
  5. Real impact: Tools let agents actually change the world
Next: Let's dive into how function calling works at a technical level across different LLM providers.