Chapter 0

API Keys and Configuration

Prerequisites and Setup

Introduction

To build AI agents, you need access to large language model APIs. This section walks you through obtaining and configuring API keys for the major providers: Anthropic (Claude), OpenAI, and Google (Gemini).

Security First: API keys are like passwords to your AI infrastructure. Never commit them to version control, share them publicly, or expose them in client-side code.

Anthropic API (Claude)

Getting Your API Key

  1. Go to console.anthropic.com
  2. Create an account or sign in
  3. Navigate to API Keys in the left sidebar
  4. Click Create Key
  5. Name your key (e.g., "agentic-ai-book") and copy it immediately

Copy Your Key Now

Anthropic only shows your API key once. If you lose it, you'll need to create a new one.
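Once copied, the quickest way to make the key available to the SDK for the current session is an environment variable. A minimal sketch for macOS/Linux (the value below is a placeholder, not a real key); a .env file, covered later in this section, is the more durable option:

```shell
# Make the key visible to this shell and its child processes (macOS/Linux)
export ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"

# Sanity check: print only the key's prefix, never the full secret
echo "${ANTHROPIC_API_KEY:0:6}"
```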

Testing Your Key

🐍test_anthropic.py
import anthropic

# Initialize the client (reads ANTHROPIC_API_KEY from environment)
client = anthropic.Anthropic()

# Test with a simple message
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=100,
    messages=[
        {"role": "user", "content": "Say 'Hello, Agent Builder!' in exactly 3 words."}
    ]
)

print(response.content[0].text)
# Expected output: "Hello, Agent Builder!"

Available Models

Model            | Identifier               | Best For
-----------------|--------------------------|------------------------------------
Claude Opus 4    | claude-opus-4-20250514   | Complex reasoning, research, coding
Claude Sonnet 4  | claude-sonnet-4-20250514 | Balanced performance and cost
Claude Haiku 3.5 | claude-3-5-haiku-20241022 | Fast, lightweight tasks

Start with Sonnet

Claude Sonnet 4 offers the best balance of capability and cost for agent development. Use Opus for complex tasks, Haiku for high-volume simple operations.
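That guidance can be captured in a tiny helper. A sketch only; the tier names here are illustrative, not an official API concept:

```python
# Map task tiers to Claude model identifiers (tier names are illustrative)
CLAUDE_MODELS = {
    "complex": "claude-opus-4-20250514",     # deep reasoning, research, coding
    "balanced": "claude-sonnet-4-20250514",  # default for agent development
    "simple": "claude-3-5-haiku-20241022",   # high-volume, lightweight tasks
}

def pick_claude_model(tier: str = "balanced") -> str:
    """Return the identifier for a tier, falling back to Sonnet."""
    return CLAUDE_MODELS.get(tier, CLAUDE_MODELS["balanced"])

print(pick_claude_model("simple"))  # claude-3-5-haiku-20241022
```

Defaulting to Sonnet on unknown tiers keeps a typo from silently routing traffic to the most expensive model.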

OpenAI API

Getting Your API Key

  1. Go to platform.openai.com
  2. Sign in or create an account
  3. Navigate to API keys in the left sidebar
  4. Click Create new secret key
  5. Name it and copy immediately

Testing Your Key

🐍test_openai.py
from openai import OpenAI

# Initialize the client (reads OPENAI_API_KEY from environment)
client = OpenAI()

# Test with a simple completion
response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=100,
    messages=[
        {"role": "user", "content": "Say 'Hello, Agent Builder!' in exactly 3 words."}
    ]
)

print(response.choices[0].message.content)
# Expected output: "Hello, Agent Builder!"

Available Models

Model       | Identifier  | Best For
------------|-------------|---------------------------------
GPT-4o      | gpt-4o      | General-purpose, multimodal
GPT-4o Mini | gpt-4o-mini | Faster, more affordable
o3          | o3          | Complex reasoning, agentic tasks
o4-mini     | o4-mini     | Efficient reasoning

Google AI (Gemini)

Getting Your API Key

  1. Go to aistudio.google.com
  2. Sign in with your Google account
  3. Click Get API Key
  4. Create a key in a new or existing Google Cloud project
  5. Copy the generated key

Testing Your Key

🐍test_gemini.py
import os

import google.generativeai as genai

# Configure with your API key
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Initialize the model
model = genai.GenerativeModel("gemini-2.0-flash")

# Test with a simple prompt
response = model.generate_content("Say 'Hello, Agent Builder!' in exactly 3 words.")

print(response.text)
# Expected output: "Hello, Agent Builder!"

Available Models

Model            | Identifier       | Best For
-----------------|------------------|--------------------------
Gemini 2.0 Flash | gemini-2.0-flash | Fast, multimodal, agentic
Gemini 2.0 Pro   | gemini-2.0-pro   | Advanced reasoning
Gemini 1.5 Pro   | gemini-1.5-pro   | Long context, code

Environment Variables

The .env File

Store your API keys in a .env file at the root of your project:

.env
# LLM API Keys
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
OPENAI_API_KEY=sk-proj-your-key-here
GOOGLE_API_KEY=AIzaSy-your-key-here

# Optional: Default model preferences
DEFAULT_MODEL=claude-sonnet-4-20250514
DEFAULT_MAX_TOKENS=4096

# Optional: Logging level
LOG_LEVEL=INFO

Loading Environment Variables

🐍load_env.py
import os
from dotenv import load_dotenv

# Load .env file at the start of your application
load_dotenv()

# Access keys (with validation)
def get_api_key(provider: str) -> str:
    key_map = {
        "anthropic": "ANTHROPIC_API_KEY",
        "openai": "OPENAI_API_KEY",
        "google": "GOOGLE_API_KEY",
    }

    env_var = key_map.get(provider.lower())
    if not env_var:
        raise ValueError(f"Unknown provider: {provider}")

    key = os.environ.get(env_var)
    if not key:
        raise ValueError(f"{env_var} not set in environment")

    return key

# Usage
anthropic_key = get_api_key("anthropic")
openai_key = get_api_key("openai")

Add .env to .gitignore

Never commit your .env file to version control. Make sure it's listed in your .gitignore.
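One way to make this hard to forget is an idempotent one-liner; this sketch assumes you run it from the project root, and creates .gitignore if it doesn't exist yet:

```shell
# Add .env to .gitignore only if it is not already listed (idempotent)
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Verify the entry is present
grep -x '.env' .gitignore
```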

Creating a .env.example

Create a template file that others can copy:

.env.example
# Copy this file to .env and fill in your API keys
# Do NOT commit .env to version control

# Required: At least one LLM API key
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_API_KEY=

# Optional settings
DEFAULT_MODEL=claude-sonnet-4-20250514
LOG_LEVEL=INFO

API Best Practices

Key Management Rules

  • Rotate regularly - Generate new keys periodically
  • Use separate keys - Different keys for dev, staging, production
  • Set spending limits - Configure billing alerts and caps
  • Monitor usage - Track API calls for anomalies
  • Revoke if compromised - Immediately regenerate exposed keys
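A related habit: never write full keys to logs or error messages. A small masking helper, hypothetical and not part of any SDK, makes that cheap to do everywhere:

```python
def mask_key(key: str) -> str:
    """Show only the first and last 4 characters of a secret for logging."""
    if len(key) <= 8:
        return "*" * len(key)
    return f"{key[:4]}...{key[-4:]}"

print(mask_key("sk-ant-api03-abcdef123456"))  # sk-a...3456
```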

Rate Limiting and Retries

🐍api_wrapper.py
import time
from typing import Callable, TypeVar
import anthropic

T = TypeVar("T")

def with_retry(
    func: Callable[[], T],
    max_retries: int = 3,
    base_delay: float = 1.0,
) -> T:
    """Execute a function with exponential backoff retry."""
    for attempt in range(max_retries):
        try:
            return func()
        except anthropic.RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Rate limited. Retrying in {delay}s...")
            time.sleep(delay)
        except anthropic.APIError as e:
            if attempt == max_retries - 1:
                raise
            print(f"API error: {e}. Retrying...")
            time.sleep(base_delay)
    raise RuntimeError("Max retries exceeded")

# Usage
def call_claude():
    client = anthropic.Anthropic()
    return client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=100,
        messages=[{"role": "user", "content": "Hello"}]
    )

response = with_retry(call_claude)

Cost Management

Strategy                 | Description
-------------------------|-------------------------------------------------
Use smaller models first | Start with Haiku/GPT-4o-mini, escalate if needed
Cache responses          | Store and reuse results for identical queries
Truncate context         | Only include relevant context to reduce tokens
Set max_tokens           | Always limit response length appropriately
Batch requests           | Combine multiple queries when possible

Development vs Production

During development, use a lower max_tokens limit and lower-cost models. Switch to higher-capability models for production or complex tasks.
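One way to implement the switch is a single settings function keyed off an environment flag. APP_ENV here is an assumed convention of this sketch, not something the SDKs read:

```python
import os

def model_settings() -> dict:
    """Pick model and token budget based on an assumed APP_ENV flag."""
    if os.environ.get("APP_ENV") == "production":
        return {"model": "claude-sonnet-4-20250514", "max_tokens": 4096}
    # Default to a cheap, fast model with a tight budget for development
    return {"model": "claude-3-5-haiku-20241022", "max_tokens": 512}

print(model_settings())
```

Centralizing the choice in one function means switching environments never requires hunting down hard-coded model names.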

Summary

You should now have:

  1. API keys for Anthropic, OpenAI, and/or Google
  2. .env file configured with your keys
  3. .env.example template for sharing
  4. Understanding of rate limiting and cost management

Next Step: With your API keys configured, let's look at how to structure your agent projects for maintainability and scalability.