LangGraph provides a flexible framework for building stateful, multi-actor applications with LLMs. This guide walks through the core concepts of building graphs.

Core Concepts

A LangGraph workflow consists of:
  • State: The data structure that flows through your graph
  • Nodes: Functions that process the state
  • Edges: Connections between nodes that define the flow
  • Graph: The compiled workflow that orchestrates execution
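To see how these four pieces fit together, here is a minimal pure-Python sketch of graph execution (illustrative only, not the LangGraph API): nodes return partial state updates, edges define order, and a runner merges each update into the shared state. The node names `upper` and `exclaim` are invented for this example.

```python
# Minimal sketch of graph execution: nodes return partial state
# updates, edges define the flow, and the runner merges updates
# into the shared state. (Pure-Python illustration, not LangGraph.)

def upper(state: dict) -> dict:
    return {"text": state["text"].upper()}

def exclaim(state: dict) -> dict:
    return {"text": state["text"] + "!"}

nodes = {"upper": upper, "exclaim": exclaim}
edges = {"START": "upper", "upper": "exclaim", "exclaim": "END"}

def run(state: dict) -> dict:
    current = edges["START"]
    while current != "END":
        state = {**state, **nodes[current](state)}  # merge the node's update
        current = edges[current]
    return state

print(run({"text": "hello"}))  # {'text': 'HELLO!'}
```

LangGraph's StateGraph, shown below, provides the production version of this loop, plus persistence, streaming, and branching.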

Creating a Simple Graph

1. Define Your State Schema
Start by defining a TypedDict that represents your graph’s state:
from typing_extensions import TypedDict

class State(TypedDict):
    text: str
For message-based applications, use the built-in MessagesState:
from langgraph.graph import MessagesState

class AgentState(MessagesState):
    # Add additional fields as needed
    context: str
2. Create the StateGraph
Initialize a StateGraph with your state schema:
from langgraph.graph import StateGraph

graph = StateGraph(State)
3. Add Nodes
Nodes are functions that take the current state and return a dictionary of state updates:
def node_a(state: State) -> dict:
    return {"text": state["text"] + "a"}

def node_b(state: State) -> dict:
    return {"text": state["text"] + "b"}

graph.add_node("node_a", node_a)
graph.add_node("node_b", node_b)
4. Add Edges
Connect nodes with edges to define the flow:
from langgraph.graph import START, END

# Direct edge from START to first node
graph.add_edge(START, "node_a")

# Sequential flow
graph.add_edge("node_a", "node_b")

# End the graph
graph.add_edge("node_b", END)
5. Compile the Graph
Compile the graph to make it executable:
app = graph.compile()

# Invoke the graph
result = app.invoke({"text": ""})
print(result)  # {'text': 'ab'}

Conditional Edges

Use conditional edges to create dynamic routing based on state:
def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    
    # If there are tool calls, continue to tools
    if last_message.tool_calls:
        return "continue"
    # Otherwise, end
    return "end"

graph.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "tools",
        "end": END,
    },
)
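The routing logic can be exercised in isolation with plain Python: the router's return value is looked up in the path mapping to choose the next node. `FakeMessage` below is an invented stand-in for an LLM chat message, used only so this sketch runs without a model.

```python
# Pure-Python sketch of conditional routing: the router inspects the
# last message, and its return value is looked up in the path mapping.
from dataclasses import dataclass, field

@dataclass
class FakeMessage:  # stand-in for an LLM chat message
    content: str
    tool_calls: list = field(default_factory=list)

def should_continue(state: dict) -> str:
    last_message = state["messages"][-1]
    return "continue" if last_message.tool_calls else "end"

path_map = {"continue": "tools", "end": "END"}

with_tools = {"messages": [FakeMessage("", tool_calls=[{"name": "search"}])]}
without_tools = {"messages": [FakeMessage("done")]}

print(path_map[should_continue(with_tools)])    # tools
print(path_map[should_continue(without_tools)])  # END
```

In the real graph, LangGraph performs this lookup for you each time the "agent" node finishes.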

Working with Tools

Integrate tools using the prebuilt ToolNode:
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.prebuilt import ToolNode

tools = [TavilySearchResults(max_results=1)]
tool_node = ToolNode(tools)

graph.add_node("tools", tool_node)
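Conceptually, a tool node reads the tool calls on the last message, executes each matching tool, and returns the results as new messages. The simplified sketch below (plain dicts, an invented `search` tool) illustrates the idea; it is not LangGraph's actual ToolNode implementation.

```python
# Simplified sketch of what a tool node does: look up each requested
# tool by name, call it with the provided arguments, and return the
# results as new messages. (Illustrative only, not LangGraph's ToolNode.)

def search(query: str) -> str:
    return f"results for {query!r}"

tools_by_name = {"search": search}

def tool_node(state: dict) -> dict:
    last_message = state["messages"][-1]
    results = []
    for call in last_message["tool_calls"]:
        tool = tools_by_name[call["name"]]
        results.append({"role": "tool", "content": tool(**call["args"])})
    return {"messages": results}

state = {"messages": [{"role": "ai",
                       "tool_calls": [{"name": "search",
                                       "args": {"query": "langgraph"}}]}]}
print(tool_node(state))
# {'messages': [{'role': 'tool', 'content': "results for 'langgraph'"}]}
```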

State Reducers

Use Annotated types to define how state updates are merged:
from typing import Annotated
from collections.abc import Sequence
from langchain_core.messages import BaseMessage
from langgraph.graph import add_messages

class AgentState(TypedDict):
    # Messages are appended, not replaced
    messages: Annotated[Sequence[BaseMessage], add_messages]
The add_messages reducer intelligently merges message lists:
  • Appends new messages by default
  • Updates existing messages by ID
  • Handles message deletion with RemoveMessage
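The append-and-update-by-ID behavior can be illustrated with a simplified reducer. This is a sketch of the merge semantics, not LangGraph's actual add_messages implementation (which also handles message deletion and format coercion).

```python
# Simplified message reducer: append new messages, but replace an
# existing message when an incoming one carries the same id.

def add_messages_sketch(existing: list[dict], updates: list[dict]) -> list[dict]:
    merged = list(existing)
    index_by_id = {m["id"]: i for i, m in enumerate(merged)}
    for msg in updates:
        if msg["id"] in index_by_id:
            merged[index_by_id[msg["id"]]] = msg  # update in place by id
        else:
            merged.append(msg)                    # append new message
    return merged

history = [{"id": "1", "content": "hi"}]
merged = add_messages_sketch(history, [{"id": "1", "content": "hi!"},
                                       {"id": "2", "content": "hello"}])
print(merged)
# [{'id': '1', 'content': 'hi!'}, {'id': '2', 'content': 'hello'}]
```

Because the reducer is attached via Annotated, every node that returns a "messages" key gets this merge behavior automatically.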

Multi-Agent Patterns

Create graphs with multiple agents by adding nodes for each agent:
def call_model(state, runtime):
    # model_anth and model_oai are chat model instances initialized elsewhere
    if runtime.context.get("model") == "anthropic":
        model = model_anth
    else:
        model = model_oai
    
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

graph.add_node("agent", call_model)
graph.add_node("tools", tool_node)

Best Practices

  • Keep nodes focused: Each node should handle a single responsibility
  • Use type hints: Define clear state schemas for better IDE support
  • Test incrementally: Build and test your graph one node at a time
  • Visualize your graph: Use graph.compile().get_graph().print_ascii() to debug
  • Handle errors gracefully: Add error handling in your node functions

Next Steps