Handles tool execution patterns including function calls, state injection, persistent storage, and control flow. Manages parallel execution and error handling. Defined in: langgraph/prebuilt/tool_node.py:616

Overview

Use ToolNode when building custom workflows that require fine-grained control over tool execution—for example, custom routing logic, specialized error handling, or non-standard agent architectures. For standard ReAct-style agents, use create_agent instead. It uses ToolNode internally with sensible defaults for the agent loop, conditional routing, and error handling.

Class Definition

class ToolNode(RunnableCallable):
    def __init__(
        self,
        tools: Sequence[BaseTool | Callable],
        *,
        name: str = "tools",
        tags: list[str] | None = None,
        handle_tool_errors: bool
        | str
        | Callable[..., str]
        | type[Exception]
        | tuple[type[Exception], ...] = _default_handle_tool_errors,
        messages_key: str = "messages",
        wrap_tool_call: ToolCallWrapper | None = None,
        awrap_tool_call: AsyncToolCallWrapper | None = None,
    ) -> None

Input Formats

ToolNode accepts multiple input formats:
  1. Graph State: a dict whose messages key holds a list of messages
    • The common representation for agentic workflows
    • Supports a custom key via the messages_key parameter
  2. Message List: [AIMessage(..., tool_calls=[...])]
    • List of messages with tool calls in the last AIMessage
  3. Direct Tool Calls: [{"name": "tool", "args": {...}, "id": "1", "type": "tool_call"}]
    • Bypasses message parsing for direct tool execution
    • For programmatic tool invocation and testing

Output Formats

Output format depends on input type and tool behavior.
For regular tools:
  • Dict input → {"messages": [ToolMessage(...)]}
  • List input → [ToolMessage(...)]
For Command tools:
  • Returns [Command(...)] or mixed list with regular tool outputs
  • Command can update state, trigger navigation, or send messages
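The input/output symmetry above can be sketched in plain Python. This is a simplified illustration of the format-mirroring rule, not the actual ToolNode implementation; `wrap_outputs` is a hypothetical name:

```python
def wrap_outputs(input_value, tool_messages, messages_key="messages"):
    """Mirror the input format: dict (state) input yields a state-update
    dict keyed by messages_key; bare list input yields a bare list."""
    if isinstance(input_value, dict):
        return {messages_key: tool_messages}
    return tool_messages

# Dict (state) input produces a state-update dict:
assert wrap_outputs({"messages": []}, ["tool_msg"]) == {"messages": ["tool_msg"]}
# List input produces a bare list:
assert wrap_outputs([], ["tool_msg"]) == ["tool_msg"]
```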

Parameters

tools
Sequence[BaseTool | Callable]
required
A sequence of tools that can be invoked by this node. Supports:
  • BaseTool instances: Tools with schemas and metadata
  • Plain functions: Automatically converted to tools with inferred schemas
name
str
default:"'tools'"
The name identifier for this node in the graph. Used for debugging and visualization.
tags
list[str] | None
default:"None"
Optional metadata tags to associate with the node for filtering and organization.
handle_tool_errors
bool | str | Callable | type[Exception] | tuple[type[Exception], ...]
default:"_default_handle_tool_errors"
Configuration for error handling during tool execution. Supports multiple strategies:
  • True: Catch all errors and return a ToolMessage with the default error template containing the exception details.
  • str: Catch all errors and return a ToolMessage with this custom error message string.
  • type[Exception]: Only catch exceptions of the specified type and return the default error message for them.
  • tuple[type[Exception], ...]: Only catch exceptions of the specified types and return default error messages for them.
  • Callable[..., str]: Catch exceptions matching the callable’s signature and return the string result of calling it with the exception.
  • False: Disable error handling entirely, allowing exceptions to propagate.
Defaults to a callable that:
  • Catches tool invocation errors (due to invalid arguments provided by the model) and returns a descriptive error message
  • Does not catch tool execution errors (they are re-raised)
messages_key
str
default:"'messages'"
The key in the state dictionary that contains the message list. This same key is used for the output ToolMessage objects. Allows custom state schemas with different message field names.
wrap_tool_call
ToolCallWrapper | None
default:"None"
Sync wrapper function to intercept tool execution. Receives ToolCallRequest and execute callable, returns ToolMessage or Command. Enables retries, caching, request modification, and control flow.
awrap_tool_call
AsyncToolCallWrapper | None
default:"None"
Async wrapper function to intercept tool execution. If not provided, falls back to wrap_tool_call for async execution.

Properties

tools_by_name
dict[str, BaseTool]
Mapping from tool name to BaseTool instance.

Usage Examples

Basic Usage

from langgraph.prebuilt import ToolNode
from langchain_core.tools import tool

@tool
def calculator(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

tool_node = ToolNode([calculator])
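Conceptually, invoking the node executes each tool call from the last AI message by looking the tool up by name and calling it with the call's args. A plain-Python sketch of that loop (simplified; the real ToolNode also handles parallel execution, argument injection, and error handling, and `run_tool_calls` is a hypothetical name):

```python
def run_tool_calls(tools_by_name, tool_calls):
    """Execute each tool call and pair the result with its call id,
    mimicking how ToolNode produces one ToolMessage per call."""
    results = []
    for call in tool_calls:
        tool = tools_by_name[call["name"]]          # look up by name
        output = tool(**call["args"])               # invoke with the call's args
        results.append({"tool_call_id": call["id"], "content": str(output)})
    return results

def calculator(a, b):
    return a + b

msgs = run_tool_calls(
    {"calculator": calculator},
    [{"name": "calculator", "args": {"a": 2, "b": 3}, "id": "1", "type": "tool_call"}],
)
assert msgs == [{"tool_call_id": "1", "content": "5"}]
```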

State Injection

from typing_extensions import Annotated
from langgraph.prebuilt import InjectedState, ToolNode
from langchain_core.tools import tool

@tool
def context_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
    """Some tool that uses state."""
    return f"Query: {query}, Messages: {len(state['messages'])}"

tool_node = ToolNode([context_tool])

Custom Error Handling

from langgraph.prebuilt import ToolNode
from langchain_core.tools import tool

def handle_errors(e: ValueError) -> str:
    return "Invalid input provided"

@tool
def my_tool(value: int) -> str:
    """Process a value."""
    if value < 0:
        raise ValueError("Value must be positive")
    return f"Processed: {value}"

tool_node = ToolNode([my_tool], handle_tool_errors=handle_errors)
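The handle_tool_errors strategies described under Parameters amount to a small dispatch on the configuration value. A simplified plain-Python sketch of that dispatch (`resolve_error` is a hypothetical name and the default message template is approximate; the real logic lives inside ToolNode):

```python
def resolve_error(exc, handle_tool_errors):
    """Return an error-message string for the ToolMessage, or re-raise
    the exception if the configuration says it should not be caught."""
    if handle_tool_errors is True:
        # Catch everything, default template
        return f"Error: {exc!r}\n Please fix your mistakes."
    if isinstance(handle_tool_errors, str):
        # Catch everything, custom message
        return handle_tool_errors
    if isinstance(handle_tool_errors, type) and issubclass(handle_tool_errors, Exception):
        handle_tool_errors = (handle_tool_errors,)  # normalize to a tuple
    if isinstance(handle_tool_errors, tuple):
        # Catch only the listed exception types
        if isinstance(exc, handle_tool_errors):
            return f"Error: {exc!r}\n Please fix your mistakes."
        raise exc
    if callable(handle_tool_errors):
        # Delegate to the custom handler
        return handle_tool_errors(exc)
    raise exc  # handle_tool_errors is False: propagate

assert resolve_error(ValueError("bad"), "Oops") == "Oops"
assert resolve_error(ValueError("bad"), (ValueError,)).startswith("Error:")
```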

Store Injection

from typing_extensions import Annotated
from langgraph.store.base import BaseStore
from langgraph.prebuilt import InjectedStore, ToolNode
from langchain_core.tools import tool

@tool
def save_data(
    key: str,
    value: str,
    store: Annotated[BaseStore, InjectedStore()]
) -> str:
    """Save data to persistent storage."""
    store.put(("data",), key, value)
    return f"Saved {key}"

tool_node = ToolNode([save_data])

Runtime Context Access

from langgraph.prebuilt import ToolNode, ToolRuntime
from langchain_core.tools import tool

@tool
def context_aware_tool(x: int, runtime: ToolRuntime) -> str:
    """Tool that accesses runtime context."""
    # Access state
    messages = runtime.state["messages"]
    
    # Access tool_call_id
    print(f"Tool call ID: {runtime.tool_call_id}")
    
    # Access config
    print(f"Run ID: {runtime.config.get('run_id')}")
    
    # Access runtime context
    user_id = runtime.context.get("user_id")
    
    # Access store (BaseStore.put takes a dict value)
    runtime.store.put(("metrics",), "count", {"value": 1})
    
    # Stream custom output (StreamWriter is called directly)
    runtime.stream_writer("Processing...")
    
    return f"Processed {x}"

tool_node = ToolNode([context_aware_tool])

Tool Call Wrapper for Retries

from langgraph.prebuilt import ToolNode
from langchain_core.tools import tool
from langchain_core.messages import ToolMessage

def retry_wrapper(request, execute):
    """Retry tool execution up to 3 times."""
    for attempt in range(3):
        try:
            result = execute(request)
            if isinstance(result, ToolMessage) and result.status != "error":
                return result
        except Exception as e:
            if attempt == 2:
                raise
            continue
    return result

@tool
def unreliable_tool(x: int) -> str:
    """A tool that might fail."""
    import random
    if random.random() < 0.5:
        raise ValueError("Random failure")
    return f"Success: {x}"

tool_node = ToolNode([unreliable_tool], wrap_tool_call=retry_wrapper)

In a StateGraph

from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class State(TypedDict):
    messages: list

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

def call_model(state: State):
    # Simulated model response with tool call
    return {
        "messages": [
            AIMessage(
                content="",
                tool_calls=[{
                    "name": "search",
                    "args": {"query": "LangGraph"},
                    "id": "1",
                    "type": "tool_call"
                }]
            )
        ]
    }

tool_node = ToolNode([search])

graph = StateGraph(State)
graph.add_node("model", call_model)
graph.add_node("tools", tool_node)
graph.add_edge(START, "model")
graph.add_edge("model", "tools")
graph.add_edge("tools", END)

compiled = graph.compile()
result = compiled.invoke({"messages": []})
print(result["messages"])

ToolRuntime

ToolRuntime
dataclass
Runtime context automatically injected into tools. When a tool function has a parameter named runtime with type hint ToolRuntime, the tool execution system automatically injects an instance containing:
  • state: The current graph state
  • tool_call_id: The ID of the current tool call
  • config: RunnableConfig for the current execution
  • context: Runtime context (shared with Runtime)
  • store: BaseStore instance for persistent storage (shared with Runtime)
  • stream_writer: StreamWriter for streaming output (shared with Runtime)
No Annotated wrapper is needed - just use runtime: ToolRuntime as a parameter. Defined in: langgraph/prebuilt/tool_node.py:1537

Attributes

state
StateT
The current graph state.
context
ContextT
Runtime context.
config
RunnableConfig
Runnable configuration for the current execution.
stream_writer
StreamWriter
Stream writer for streaming output.
tool_call_id
str | None
The ID of the current tool call.
store
BaseStore | None
Persistent store instance.

InjectedState

InjectedState
class
Annotation for injecting graph state into tool arguments. This annotation enables tools to access graph state without exposing state management details to the language model. Tools annotated with InjectedState receive state data automatically during execution while remaining invisible to the model's tool-calling interface. Defined in: langgraph/prebuilt/tool_node.py:1603

Parameters

field
str | None
default:"None"
Optional key to extract from the state dictionary. If None, the entire state is injected. If specified, only that field’s value is injected.

Example

from typing_extensions import Annotated, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.prebuilt import InjectedState, ToolNode
from langchain_core.tools import tool

class AgentState(TypedDict):
    messages: list[BaseMessage]
    foo: str

@tool
def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
    '''Do something with state.'''
    if len(state["messages"]) > 2:
        return state["foo"] + str(x)
    else:
        return "not enough messages"

@tool
def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
    '''Do something else with state.'''
    return foo + str(x + 1)

node = ToolNode([state_tool, foo_tool])

InjectedStore

InjectedStore
class
Annotation for injecting the persistent store into tool arguments. This annotation enables tools to access LangGraph's persistent storage system without exposing storage details to the language model. Tools annotated with InjectedStore receive the store instance automatically during execution while remaining invisible to the model's tool-calling interface. The store provides persistent, cross-session data storage that tools can use for maintaining context, user preferences, or any other data that needs to persist beyond individual workflow executions.
InjectedStore annotation requires langchain-core >= 0.3.8
Defined in: langgraph/prebuilt/tool_node.py:1679

Example

from typing_extensions import Annotated
from typing import Any
from langgraph.store.memory import InMemoryStore
from langgraph.prebuilt import InjectedStore, ToolNode
from langchain_core.tools import tool

@tool
def save_preference(
    key: str,
    value: str,
    store: Annotated[Any, InjectedStore()]
) -> str:
    """Save user preference to persistent storage."""
    store.put(("preferences",), key, value)
    return f"Saved {key} = {value}"

@tool
def get_preference(
    key: str,
    store: Annotated[Any, InjectedStore()]
) -> str:
    """Retrieve user preference from persistent storage."""
    result = store.get(("preferences",), key)
    return result.value if result else "Not found"

store = InMemoryStore()
tool_node = ToolNode([save_preference, get_preference])

# Use with graph
from langgraph.graph import StateGraph
from typing_extensions import TypedDict

class State(TypedDict):
    messages: list

graph = StateGraph(State)
graph.add_node("tools", tool_node)
compiled_graph = graph.compile(store=store)  # Store is injected automatically

tools_condition

tools_condition(state, messages_key='messages')
function
Conditional routing function for tool-calling workflows. This utility function implements the standard conditional logic for ReAct-style agents: if the last AIMessage contains tool calls, route to the tool execution node; otherwise, end the workflow. This pattern is fundamental to most tool-calling agent architectures. Defined in: langgraph/prebuilt/tool_node.py:1456

Parameters

state
list[AnyMessage] | dict[str, Any] | BaseModel
required
The current graph state to examine for tool calls. Supported formats:
  • A plain list of messages
  • Dictionary containing a messages key (for StateGraph)
  • BaseModel instance with a messages attribute
messages_key
str
default:"'messages'"
The key or attribute name containing the message list in the state. This allows customization for graphs using different state schemas.

Returns

return
Literal['tools', '__end__']
Either 'tools' if tool calls are present in the last AIMessage, or '__end__' to terminate the workflow.
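The routing rule reduces to a check on the last message. A simplified plain-Python equivalent (the real tools_condition also handles BaseModel states; `route` is a hypothetical name):

```python
def route(state, messages_key="messages"):
    """Return 'tools' if the last message carries tool calls, else '__end__'."""
    messages = state[messages_key] if isinstance(state, dict) else state
    if not messages:
        return "__end__"
    last = messages[-1]
    # Cover both object-style messages (AIMessage.tool_calls) and dicts
    has_calls = getattr(last, "tool_calls", None) or (
        isinstance(last, dict) and last.get("tool_calls")
    )
    return "tools" if has_calls else "__end__"

assert route({"messages": [{"tool_calls": [{"name": "search"}]}]}) == "tools"
assert route({"messages": [{"content": "done"}]}) == "__end__"
```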

Example

from langgraph.graph import StateGraph
from langgraph.prebuilt import ToolNode, tools_condition
from typing_extensions import TypedDict

class State(TypedDict):
    messages: list

graph = StateGraph(State)
graph.add_node("llm", call_model)  # call_model and my_tool defined elsewhere
graph.add_node("tools", ToolNode([my_tool]))
graph.add_conditional_edges(
    "llm",
    tools_condition,  # Routes to "tools" or "__end__"
    {"tools": "tools", "__end__": "__end__"},
)

See Also

  • create_react_agent - Factory function that uses ToolNode internally
  • ValidationNode - Node for validating tool calls without executing them
  • Command - Type for returning control flow commands from tools