Chapter 15

LangGraph Fundamentals

LangGraph Deep Dive

Introduction

LangGraph is a library for building stateful, multi-agent applications with LLMs. Unlike simple chain-based architectures, LangGraph enables you to define complex workflows as graphs where nodes represent computation steps and edges define the flow of execution. This makes it ideal for building sophisticated agentic systems.

Section Overview: We'll explore what LangGraph is, understand its core concepts, set up our development environment, and build our first simple graph-based agent.

What is LangGraph?

LangGraph is built on top of LangChain and provides a graph-based approach to orchestrating LLM applications. It excels at scenarios requiring:

  • Cycles and loops - Execute steps repeatedly until a condition is met
  • Conditional branching - Route execution based on LLM decisions or state
  • Persistent state - Maintain state across multiple interactions
  • Human-in-the-loop - Pause execution for human intervention
  • Multi-agent coordination - Orchestrate multiple specialized agents
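
Before touching the LangGraph API, the core idea — nodes as functions over shared state, with a router that can loop back — can be sketched in plain Python. This toy runner is an illustrative sketch of the execution model, not LangGraph code:

```python
# A toy graph runner: nodes are functions from state to state updates,
# and a router picks the next node (or "END") after each step.
def increment(state: dict) -> dict:
    return {"count": state["count"] + 1}

def route(state: dict) -> str:
    # Loop back to "increment" until the counter reaches 3
    return "increment" if state["count"] < 3 else "END"

def run(nodes: dict, router, state: dict, start: str) -> dict:
    current = start
    while current != "END":
        state = {**state, **nodes[current](state)}  # merge node output into state
        current = router(state)
    return state

final = run({"increment": increment}, route, {"count": 0}, "increment")
print(final)  # {'count': 3}
```

LangGraph provides this loop (plus persistence, streaming, and concurrency) for you; the rest of this section maps each piece of the sketch onto the real API.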

LangGraph vs Traditional Chains

| Aspect | Traditional Chains | LangGraph |
| --- | --- | --- |
| Execution flow | Linear, sequential | Graph-based, flexible |
| Loops | Not supported | Native support |
| Branching | Limited | Full conditional routing |
| State management | Manual | Built-in state |
| Human intervention | Difficult | First-class support |
| Multi-agent | Complex to implement | Native patterns |

Core Concepts

LangGraph is built around several fundamental concepts:

StateGraph

🐍python
from langgraph.graph import StateGraph
from typing import TypedDict


# Define the state schema
class AgentState(TypedDict):
    """State that flows through the graph."""
    messages: list[str]
    current_step: str
    results: dict


# Create a StateGraph with the state schema
graph = StateGraph(AgentState)

# The state is automatically passed to each node
# and updated based on node outputs

Nodes

Nodes are the computational units in a graph. Each node receives the current state and returns updates to apply:

🐍python
from typing import Any


def analyze_node(state: AgentState) -> dict[str, Any]:
    """A node that analyzes the input."""
    messages = state["messages"]
    latest_message = messages[-1] if messages else ""

    # Perform analysis
    analysis_result = perform_analysis(latest_message)

    # Return state updates (merged with existing state)
    return {
        "current_step": "analysis_complete",
        "results": {"analysis": analysis_result}
    }


def perform_analysis(message: str) -> dict:
    """Placeholder for actual analysis logic."""
    return {"summary": f"Analyzed: {message[:50]}..."}


# Add node to graph
graph.add_node("analyze", analyze_node)

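Conceptually, the dict a node returns is merged into the running state, with returned keys replacing the existing values (unless a reducer is declared, covered below). A simplified illustration of this default overwrite behavior, not LangGraph internals:

```python
# Simplified view of how a node's return value updates state:
# returned keys replace existing values, untouched keys are kept.
state = {
    "messages": ["What is LangGraph?"],
    "current_step": "start",
    "results": {},
}

node_update = {
    "current_step": "analysis_complete",
    "results": {"analysis": {"summary": "Analyzed: What is LangGraph?..."}},
}

state = {**state, **node_update}  # shallow merge: updated keys overwrite
print(state["current_step"])  # analysis_complete
print(state["messages"])      # unchanged: ['What is LangGraph?']
```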
Edges

Edges define how execution flows between nodes:

🐍python
from langgraph.graph import END

# Unconditional edge: always go from A to B
graph.add_edge("node_a", "node_b")

# Conditional edge: choose next node based on state
def route_decision(state: AgentState) -> str:
    """Decide which node to execute next."""
    if state["results"].get("needs_review"):
        return "review"
    elif state["results"].get("is_complete"):
        return END
    else:
        return "continue_processing"


graph.add_conditional_edges(
    "analyze",  # Source node
    route_decision,  # Routing function
    {
        "review": "review_node",
        "continue_processing": "process_node",
        END: END
    }
)

Entry and Exit Points

🐍python
from langgraph.graph import START, END

# Set the entry point
graph.set_entry_point("analyze")
# Or equivalently:
# graph.add_edge(START, "analyze")

# Set exit points (where the graph can end)
graph.add_edge("final_node", END)

Installation and Setup

Let's set up a development environment for LangGraph:

bash
# Install LangGraph and dependencies
pip install langgraph langchain langchain-openai

# Optional: Install for visualization
pip install grandalf

# Set up environment variables
export OPENAI_API_KEY="your-api-key"

Basic Project Structure

📝text
my_langgraph_project/
+-- src/
|   +-- graphs/
|   |   +-- __init__.py
|   |   +-- research_agent.py
|   |   +-- coding_agent.py
|   +-- nodes/
|   |   +-- __init__.py
|   |   +-- analysis.py
|   |   +-- generation.py
|   +-- state/
|   |   +-- __init__.py
|   |   +-- schemas.py
|   +-- utils/
|       +-- __init__.py
|       +-- llm.py
+-- tests/
|   +-- test_graphs.py
+-- requirements.txt
+-- main.py

LLM Configuration

🐍python
# src/utils/llm.py
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic


def get_openai_llm(model: str = "gpt-4o", temperature: float = 0.0):
    """Get configured OpenAI LLM."""
    return ChatOpenAI(
        model=model,
        temperature=temperature
    )


def get_anthropic_llm(model: str = "claude-3-sonnet-20240229"):
    """Get configured Anthropic LLM."""
    return ChatAnthropic(
        model=model,
        temperature=0.0
    )

Your First Graph

Let's build a simple question-answering agent with research capabilities:

🐍python
from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
import operator


# Define state schema
class ResearchState(TypedDict):
    """State for the research agent."""
    question: str
    research_notes: Annotated[list[str], operator.add]
    answer: str
    iteration: int


# Initialize LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)


# Define nodes
def research_node(state: ResearchState) -> dict:
    """Perform research on the question."""
    question = state["question"]
    iteration = state.get("iteration", 0)

    # Simulate research (in practice, this might call search APIs)
    messages = [
        SystemMessage(content="You are a research assistant. Provide key facts about the topic."),
        HumanMessage(content=f"Research iteration {iteration + 1}: {question}")
    ]

    response = llm.invoke(messages)
    research_note = response.content

    return {
        "research_notes": [f"Iteration {iteration + 1}: {research_note}"],
        "iteration": iteration + 1
    }


def synthesize_node(state: ResearchState) -> dict:
    """Synthesize research into final answer."""
    question = state["question"]
    notes = state["research_notes"]

    messages = [
        SystemMessage(content="Synthesize the research notes into a comprehensive answer."),
        HumanMessage(content=f"Question: {question}\n\nResearch Notes:\n" + "\n".join(notes))
    ]

    response = llm.invoke(messages)

    return {"answer": response.content}


def should_continue(state: ResearchState) -> Literal["research", "synthesize"]:
    """Decide whether to continue research or synthesize."""
    iteration = state.get("iteration", 0)

    # Do 2 iterations of research before synthesizing
    if iteration < 2:
        return "research"
    return "synthesize"


# Build the graph
def build_research_graph():
    """Build and compile the research graph."""

    # Create graph
    graph = StateGraph(ResearchState)

    # Add nodes
    graph.add_node("research", research_node)
    graph.add_node("synthesize", synthesize_node)

    # Add edges
    graph.add_edge(START, "research")

    graph.add_conditional_edges(
        "research",
        should_continue,
        {
            "research": "research",  # Loop back for more research
            "synthesize": "synthesize"
        }
    )

    graph.add_edge("synthesize", END)

    # Compile and return
    return graph.compile()


# Run the graph
def main():
    agent = build_research_graph()

    # Initialize state
    initial_state = {
        "question": "What are the key benefits of renewable energy?",
        "research_notes": [],
        "answer": "",
        "iteration": 0
    }

    # Execute
    result = agent.invoke(initial_state)

    print("Question:", result["question"])
    print("\nResearch Notes:")
    for note in result["research_notes"]:
        print(f"  - {note[:100]}...")
    print("\nFinal Answer:", result["answer"])


if __name__ == "__main__":
    main()
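Note the `Annotated[list[str], operator.add]` on `research_notes`: the second argument is a reducer that tells LangGraph how to combine a node's update with the existing value, so each research pass appends its notes instead of replacing them, while plain keys like `iteration` are simply overwritten. A plain-Python illustration of the two merge behaviors (a sketch of the semantics, not LangGraph internals):

```python
import operator

# Without a reducer, an update overwrites the key;
# with operator.add as the reducer, list updates are concatenated.
def merge(state: dict, update: dict, reducers: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](merged[key], value)  # combine old + new
        else:
            merged[key] = value  # default: overwrite
    return merged

reducers = {"research_notes": operator.add}
state = {"research_notes": ["Iteration 1: ..."], "iteration": 1}
update = {"research_notes": ["Iteration 2: ..."], "iteration": 2}

state = merge(state, update, reducers)
print(state["research_notes"])  # ['Iteration 1: ...', 'Iteration 2: ...']
print(state["iteration"])       # 2
```

This is why `research_node` can return a one-element list each pass and still accumulate a full research log across the loop.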

Visualizing the Graph

🐍python
from IPython.display import Image, display


def visualize_graph(graph):
    """Visualize the compiled graph."""
    try:
        # Get the graph as a Mermaid diagram
        mermaid = graph.get_graph().draw_mermaid()
        print("Mermaid Diagram:")
        print(mermaid)

        # Or render a PNG (draw_mermaid_png uses the mermaid.ink web service by default)
        png_data = graph.get_graph().draw_mermaid_png()
        display(Image(png_data))
    except Exception as e:
        print(f"Visualization error: {e}")


# Usage
agent = build_research_graph()
visualize_graph(agent)

Understanding Execution Flow

🐍python
def trace_execution():
    """Trace execution to understand flow."""
    agent = build_research_graph()

    initial_state = {
        "question": "Explain quantum computing",
        "research_notes": [],
        "answer": "",
        "iteration": 0
    }

    # Stream execution to see each step
    print("Execution trace:")
    for step in agent.stream(initial_state):
        for node_name, node_output in step.items():
            print(f"\n--- {node_name} ---")
            print(f"Output: {node_output}")


trace_execution()

Key Takeaways

  • LangGraph enables graph-based workflows where nodes are computation steps and edges define execution flow.
  • StateGraph manages state automatically, passing it to nodes and merging their outputs.
  • Conditional edges enable dynamic routing based on state or LLM decisions.
  • Cycles are native, allowing iterative refinement until conditions are met.
  • Streaming provides visibility into execution, useful for debugging and user feedback.

Next Section Preview: We'll dive deeper into nodes, edges, and state management, exploring advanced patterns for building sophisticated graph-based agents.