Introduction
Function calling (also called tool use) is the mechanism by which LLMs can request the execution of external functions. Understanding how this works across providers is essential for building robust agents.
Key Concept: Function calling is not the LLM running code. The LLM outputs a structured request, your application executes it, and returns the result to the LLM.
How Function Calling Works
The function calling flow involves three parties: the user, the LLM, and your application:
📝function_calling_flow.txt
┌─────────────────────────────────────────────────────────────┐
│                    FUNCTION CALLING FLOW                    │
└─────────────────────────────────────────────────────────────┘

1. USER → APP: "What's the weather in Paris?"

2. APP → LLM:
   {
     "messages": [{"role": "user", "content": "..."}],
     "tools": [
       {"name": "get_weather", "parameters": {...}}
     ]
   }

3. LLM → APP:
   {
     "tool_calls": [{
       "name": "get_weather",
       "arguments": {"location": "Paris, France"}
     }]
   }

4. APP executes: get_weather("Paris, France")
   Result: {"temp": 18, "condition": "Sunny"}

5. APP → LLM:
   {
     "messages": [
       ...previous messages...,
       {"role": "tool", "content": '{"temp": 18, ...}'}
     ]
   }

6. LLM → APP:
   {
     "content": "It's currently 18°C and sunny in Paris."
   }

7. APP → USER: "It's currently 18°C and sunny in Paris."

Key Insights
- The LLM never executes functions directly
- Your application is responsible for execution
- You control security by deciding what to execute
- Results are fed back as context for the LLM
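The seven steps above can be sketched as a minimal, provider-agnostic loop. Everything here is a stand-in, not a real SDK: `call_llm` is a stub that mimics the LLM's two kinds of replies (a tool request, then a final answer), and `get_weather` is a stub tool.

```python
import json

def get_weather(location: str) -> dict:
    """Stub tool: a real implementation would call a weather API."""
    return {"temp": 18, "condition": "Sunny", "location": location}

TOOLS = {"get_weather": get_weather}

def call_llm(messages: list) -> dict:
    """Stand-in for a provider API call. Returns a tool request the
    first time (step 3) and a final answer once it sees a tool result
    (step 6)."""
    if messages[-1]["role"] == "user":
        return {"tool_calls": [{"name": "get_weather",
                                "arguments": {"location": "Paris, France"}}]}
    result = json.loads(messages[-1]["content"])
    return {"content": f"It's currently {result['temp']}°C and "
                       f"{result['condition'].lower()} in {result['location']}."}

def run(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = call_llm(messages)
        if "tool_calls" not in reply:
            return reply["content"]  # step 7: final answer to the user
        for call in reply["tool_calls"]:
            # Step 4: the APP, not the LLM, executes the function
            result = TOOLS[call["name"]](**call["arguments"])
            # Step 5: feed the result back as a tool message
            messages.append({"role": "tool", "content": json.dumps(result)})
```

The loop structure is the part that carries over to real providers: call the model, execute any requested tools, append the results, and repeat until the model returns plain content.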
Anthropic Tool Use
Anthropic's Claude uses a tools array with JSON Schema for parameters:
🐍anthropic_tools.py
import anthropic

client = anthropic.Anthropic()

# Define tools with JSON Schema
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country, e.g., 'Paris, France'"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["location"]
        }
    },
    {
        "name": "search_web",
        "description": "Search the web for information",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query"
                },
                "max_results": {
                    "type": "integer",
                    "description": "Maximum results to return"
                }
            },
            "required": ["query"]
        }
    }
]

# Make API call with tools
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ]
)

Handling Tool Calls
🐍anthropic_tool_handling.py
import json

def handle_anthropic_response(response):
    """Process Claude's response for tool calls."""

    # Check if response contains tool use
    for block in response.content:
        if block.type == "tool_use":
            tool_name = block.name
            tool_input = block.input
            tool_use_id = block.id

            # Execute the tool
            result = execute_tool(tool_name, tool_input)

            # Return result to Claude (in real code, reuse the actual
            # conversation history rather than a hardcoded user turn)
            return client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=1024,
                tools=tools,
                messages=[
                    {"role": "user", "content": "What's the weather?"},
                    {"role": "assistant", "content": response.content},
                    {
                        "role": "user",
                        "content": [
                            {
                                "type": "tool_result",
                                "tool_use_id": tool_use_id,
                                "content": json.dumps(result)
                            }
                        ]
                    }
                ]
            )

    # No tool call, return text response
    return response.content[0].text


def execute_tool(name: str, inputs: dict) -> dict:
    """Execute a tool and return results."""
    if name == "get_weather":
        return fetch_weather(inputs["location"], inputs.get("unit", "celsius"))
    elif name == "search_web":
        return search(inputs["query"], inputs.get("max_results", 5))
    else:
        raise ValueError(f"Unknown tool: {name}")

Claude Tool Use Response Structure
{}claude_tool_response.json
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_01A09q90qw90lq917835lgs0",
      "name": "get_weather",
      "input": {
        "location": "Tokyo, Japan",
        "unit": "celsius"
      }
    }
  ],
  "stop_reason": "tool_use",
  "usage": {
    "input_tokens": 250,
    "output_tokens": 45
  }
}

OpenAI Function Calling
OpenAI uses a similar but slightly different structure:
🐍openai_functions.py
from openai import OpenAI

client = OpenAI()

# Define tools (OpenAI calls them "functions")
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and country"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    }
]

# Make API call
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ],
    tools=tools,
    tool_choice="auto"  # Can also be "none" or a specific tool
)

Handling OpenAI Tool Calls
🐍openai_tool_handling.py
import json

def handle_openai_response(response, messages: list) -> str:
    """Process OpenAI response for tool calls."""

    message = response.choices[0].message

    # Check for tool calls
    if message.tool_calls:
        # Add assistant's response to messages
        messages.append(message)

        # Process each tool call
        for tool_call in message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)

            # Execute the function
            result = execute_tool(function_name, function_args)

            # Add tool result to messages
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })

        # Get final response with tool results
        final_response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools
        )
        return final_response.choices[0].message.content

    return message.content


# OpenAI response structure
# response.choices[0].message.tool_calls = [
#     {
#         "id": "call_abc123",
#         "type": "function",
#         "function": {
#             "name": "get_weather",
#             "arguments": '{"location": "Tokyo, Japan"}'
#         }
#     }
# ]

Parallel Tool Calls
OpenAI supports calling multiple tools in parallel:
🐍parallel_tools.py
# OpenAI can return multiple tool calls at once
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Compare weather in Tokyo and Paris"}
    ],
    tools=tools,
    parallel_tool_calls=True  # Default is True
)

# Response may contain multiple tool calls
# [
#     {"id": "call_1", "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'}},
#     {"id": "call_2", "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}}
# ]

# Execute in parallel for efficiency
import asyncio
import json

async def execute_parallel_tools(tool_calls):
    tasks = []
    for call in tool_calls:
        # execute_tool_async is an async variant of execute_tool
        task = asyncio.create_task(
            execute_tool_async(
                call.function.name,
                json.loads(call.function.arguments)
            )
        )
        tasks.append((call.id, task))

    results = []
    for call_id, task in tasks:
        result = await task
        results.append({
            "role": "tool",
            "tool_call_id": call_id,
            "content": json.dumps(result)
        })

    return results

Google Gemini Functions
Google Gemini uses a similar approach with its own syntax:
🐍gemini_functions.py
from google import genai
from google.genai import types

client = genai.Client()

# Define functions as Python functions with docstrings
def get_weather(location: str, unit: str = "celsius") -> dict:
    """Get current weather for a location.

    Args:
        location: City and country, e.g., 'Tokyo, Japan'
        unit: Temperature unit ('celsius' or 'fahrenheit')

    Returns:
        Weather information including temperature and conditions
    """
    # Implementation here
    return {"temp": 22, "condition": "Sunny", "location": location}


def search_web(query: str, max_results: int = 5) -> list:
    """Search the web for information.

    Args:
        query: Search query string
        max_results: Maximum number of results to return

    Returns:
        List of search results with titles and URLs
    """
    # Implementation here
    return [{"title": "Result", "url": "https://..."}]


# Create with tools
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather in Tokyo?",
    config=types.GenerateContentConfig(
        tools=[get_weather, search_web],
    ),
)

Handling Gemini Function Calls
🐍gemini_tool_handling.py
def handle_gemini_response(response, tools_dict: dict):
    """Process Gemini response for function calls."""

    for part in response.candidates[0].content.parts:
        # Parts always expose a function_call attribute; it is None for
        # plain text parts, so test its value rather than using hasattr
        if part.function_call:
            func_call = part.function_call

            # Get the function
            func = tools_dict[func_call.name]

            # Execute with arguments
            result = func(**func_call.args)

            # Continue conversation with result
            response = client.models.generate_content(
                model="gemini-2.0-flash",
                contents=[
                    types.Content(
                        role="user",
                        parts=[types.Part(text="What's the weather?")]
                    ),
                    response.candidates[0].content,
                    types.Content(
                        role="function",
                        parts=[types.Part(
                            function_response=types.FunctionResponse(
                                name=func_call.name,
                                response=result
                            )
                        )]
                    )
                ],
                config=types.GenerateContentConfig(
                    tools=list(tools_dict.values()),
                ),
            )

            return response

    return response.text

Gemini Automatic Function Execution
🐍gemini_auto_execute.py
# Gemini's SDK can automatically execute functions
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather in Tokyo and Paris?",
    config=types.GenerateContentConfig(
        tools=[get_weather, search_web],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            disable=False  # Enable automatic execution
        ),
    ),
)

# With automatic_function_calling enabled, the SDK will:
# 1. Let the model determine it needs to call get_weather twice
# 2. Execute the functions automatically
# 3. Return the final synthesized response

print(response.text)
# "Tokyo is currently 22°C and sunny, while Paris is 18°C and cloudy."

Provider Comparison
| Feature | Anthropic (Claude) | OpenAI | Google (Gemini) |
|---|---|---|---|
| Schema format | input_schema | parameters | Python docstrings |
| Tool wrapper | name at top level | type: function | Native Python |
| Result format | tool_result content block | role: tool message | function_response |
| Parallel calls | Supported | Explicit support | Supported |
| Auto execution | No | No | Yes (optional) |
| Stop reason | stop_reason: tool_use | finish_reason: tool_calls | Part has function_call |
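The stop-reason row of the table can be turned into small predicates that answer "did the model request a tool?" for each provider. These are duck-typed against each SDK's response object; the attribute names follow the table and the examples above rather than an exhaustively verified SDK surface:

```python
def anthropic_wants_tool(response) -> bool:
    # Claude signals a tool request via stop_reason
    return response.stop_reason == "tool_use"

def openai_wants_tool(response) -> bool:
    # OpenAI signals it via finish_reason on the chosen completion
    return response.choices[0].finish_reason == "tool_calls"

def gemini_wants_tool(response) -> bool:
    # Gemini has no dedicated stop reason; look for a function_call part
    return any(getattr(part, "function_call", None)
               for part in response.candidates[0].content.parts)
```

Branching on these predicates is usually the first thing a unified interface has to do, which is why the next section factors it behind an abstract class.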
Unified Interface
🐍unified_interface.py
import json
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any

@dataclass
class ToolCall:
    """Unified tool call representation."""
    id: str
    name: str
    arguments: dict

@dataclass
class ToolResult:
    """Unified tool result."""
    tool_call_id: str
    content: str

class LLMProvider(ABC):
    """Abstract base for LLM providers."""

    @abstractmethod
    def format_tools(self, tools: list) -> list:
        """Format tools for this provider."""
        pass

    @abstractmethod
    def extract_tool_calls(self, response) -> list[ToolCall]:
        """Extract tool calls from response."""
        pass

    @abstractmethod
    def format_tool_results(self, results: list[ToolResult]) -> Any:
        """Format tool results for this provider."""
        pass


class AnthropicProvider(LLMProvider):
    def format_tools(self, tools):
        return [{
            "name": t.name,
            "description": t.description,
            "input_schema": t.parameters
        } for t in tools]

    def extract_tool_calls(self, response):
        calls = []
        for block in response.content:
            if block.type == "tool_use":
                calls.append(ToolCall(
                    id=block.id,
                    name=block.name,
                    arguments=block.input
                ))
        return calls

    def format_tool_results(self, results):
        # Anthropic expects tool_result blocks inside a user message
        return {
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": r.tool_call_id,
                "content": r.content
            } for r in results]
        }


class OpenAIProvider(LLMProvider):
    def format_tools(self, tools):
        return [{
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description,
                "parameters": t.parameters
            }
        } for t in tools]

    def extract_tool_calls(self, response):
        message = response.choices[0].message
        if not message.tool_calls:
            return []
        return [ToolCall(
            id=tc.id,
            name=tc.function.name,
            arguments=json.loads(tc.function.arguments)
        ) for tc in message.tool_calls]

    def format_tool_results(self, results):
        # OpenAI expects one role: tool message per call
        return [{
            "role": "tool",
            "tool_call_id": r.tool_call_id,
            "content": r.content
        } for r in results]

Consider Using an Abstraction Layer
For multi-provider agents, consider using an abstraction layer like LangChain, LiteLLM, or your own wrapper to handle the differences between providers.
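As a sketch of what such a wrapper buys you, here is a provider-agnostic agent step written against the unified `ToolCall` shape from the previous section. `FakeProvider`, its `send` method, and `agent_step` are illustrative names invented for this example; a real adapter would wrap an actual SDK client:

```python
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict

def agent_step(provider, messages: list, tools: dict):
    """One turn: send messages, execute any requested tools, and
    return the tool-result messages in the provider's own format."""
    response = provider.send(messages)
    results = []
    for call in provider.extract_tool_calls(response):
        output = tools[call.name](**call.arguments)  # app executes, not the LLM
        results.append((call.id, json.dumps(output)))
    return provider.format_tool_results(results)

class FakeProvider:
    """Stand-in adapter; a real one would wrap the OpenAI or Anthropic SDK."""
    def send(self, messages):
        # Pretend the model asked for one tool call
        return {"tool_calls": [{"id": "call_1", "name": "add",
                                "arguments": {"a": 2, "b": 3}}]}

    def extract_tool_calls(self, response):
        return [ToolCall(tc["id"], tc["name"], tc["arguments"])
                for tc in response["tool_calls"]]

    def format_tool_results(self, results):
        # OpenAI-style: one role: tool message per call
        return [{"role": "tool", "tool_call_id": cid, "content": content}
                for cid, content in results]

msgs = agent_step(FakeProvider(), [], {"add": lambda a, b: {"sum": a + b}})
```

Because `agent_step` only talks to the adapter interface, swapping providers means swapping one class, not rewriting the agent loop.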
Summary
Function calling fundamentals:
- LLMs don't execute: They request, you execute
- JSON Schema: All providers use schemas for parameters
- Anthropic: Uses input_schema with tool_result responses
- OpenAI: Uses parameters with role: tool messages
- Gemini: Uses Python docstrings, supports auto-execution
- Unify: Build abstraction layers for multi-provider support
Next: Let's learn how to design effective tools that LLMs can use reliably.