langgraph/prebuilt/chat_agent_executor.py:278
Function Signature
Parameters
model

The language model for the agent. Supports static and dynamic model selection.

- Static model: a chat model instance (e.g., ChatOpenAI) or a string identifier (e.g., "openai:gpt-4").
- Dynamic model: a callable with signature (state, runtime) -> BaseChatModel that returns different models based on runtime context. Coroutines are also supported, allowing for asynchronous model selection. If the returned model has tools bound via bind_tools or other configuration, the return type should be Runnable[LanguageModelInput, BaseMessage].

Dynamic functions receive the graph state and runtime, enabling context-dependent model selection, and must return a BaseChatModel instance. For tool calling, ensure the returned model has the appropriate tools bound via .bind_tools() and supports the required functionality. Bound tools must be a subset of those specified in the tools parameter.

tools

A list of tools or a ToolNode instance. If an empty list is provided, the agent will consist of a single LLM node without tool calling.

prompt

An optional prompt for the LLM. Can take several forms:

- str: converted to a SystemMessage and added to the beginning of the message list in state["messages"].
- SystemMessage: added to the beginning of the message list in state["messages"].
- Callable: a function that takes the full graph state; its output is then passed to the language model.
- Runnable: a runnable that takes the full graph state; its output is then passed to the language model.
response_format
StructuredResponseSchema | tuple[str, StructuredResponseSchema] | None
default: None

An optional schema for the final agent output.

If provided, the output will be formatted to match the given schema and returned in the structured_response state key. If not provided, structured_response will not be present in the output state.

Can be passed in as:

- an OpenAI function/tool schema
- a JSON Schema
- a TypedDict class
- a Pydantic class
- a tuple (prompt, schema), where schema is one of the above. The prompt will be used together with the model to generate the structured response.

The graph makes a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses; see more options in this guide.
pre_model_hook

An optional node to add before the agent node (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.). The pre-model hook must be a callable or a runnable that takes the current graph state and returns a state update.

post_model_hook

An optional node to add after the agent node (i.e., the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing. The post-model hook must be a callable or a runnable that takes the current graph state and returns a state update. Only available with version="v2".

state_schema

An optional state schema that defines the graph state. Must have messages and remaining_steps keys. Defaults to AgentState, which defines those two keys. remaining_steps limits the number of steps the ReAct agent can take; it is calculated roughly as recursion_limit - total_steps_taken. If remaining_steps is less than 2 and the response contains tool calls, the agent returns a final AI message with the content "Sorry, need more steps to process this request." instead of raising a GraphRecursionError.

context_schema

An optional schema for runtime context.
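A minimal sketch of a custom state schema with the two required keys plus one app-specific addition. This uses a plain TypedDict for illustration; in real use, schemas typically extend langgraph's AgentState, whose messages key carries an add_messages reducer and whose remaining_steps value is managed by the graph. The user_name key is a hypothetical custom field.

```python
from typing import TypedDict

# Illustrative custom state schema; real schemas usually extend AgentState,
# which wires up the add_messages reducer and the managed remaining_steps value.
class CustomAgentState(TypedDict):
    messages: list        # required key
    remaining_steps: int  # required key
    user_name: str        # hypothetical app-specific addition

# Would be passed as:
# create_react_agent(model, tools, state_schema=CustomAgentState)
```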
checkpointer

An optional checkpoint saver object. Used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).

store

An optional store object. Used for persisting data across multiple threads (e.g., multiple conversations / users).
interrupt_before

An optional list of node names to interrupt before. Each entry must be one of "agent" or "tools". Useful if you want to add a user confirmation or another interrupt before an action is taken.

interrupt_after

An optional list of node names to interrupt after. Each entry must be one of "agent" or "tools". Useful if you want to return directly or run additional processing on an output.

debug

A flag indicating whether to enable debug mode.
version

Determines the version of the graph to create. Can be one of:

- "v1": the tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
- "v2": the tool node processes a single tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.
name

An optional name for the CompiledStateGraph. This name is automatically used when adding the ReAct agent graph to another graph as a subgraph node, which is particularly useful for building multi-agent systems.

Returns

A compiled LangChain Runnable that can be used for chat interactions.

How It Works

The "agent" node calls the language model with the messages list (after applying the prompt). If the resulting AIMessage contains tool_calls, the graph then calls the "tools" node. The "tools" node executes the tools (one tool per tool_call) and adds the responses to the messages list as ToolMessage objects. The agent node then calls the language model again. The process repeats until no more tool_calls are present in the response. The agent finally returns the full list of messages as a dictionary containing the key "messages".
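The loop described above can be sketched without the framework at all. The stub model and dict-shaped messages below are stand-ins for real chat models and message objects; the control flow is what matters: call the model, execute any tool calls, append results, and stop once a response has no tool calls.

```python
# Framework-free sketch of the agent loop. The "model" is a stub that first
# requests a tool call, then answers plainly once a tool result is present.
def fake_model(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "ai", "content": "",
                "tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}
    return {"role": "ai", "content": "The sum is 5.", "tool_calls": []}

def run_agent(user_input, tools):
    messages = [{"role": "user", "content": user_input}]
    while True:
        ai = fake_model(messages)
        messages.append(ai)
        if not ai["tool_calls"]:          # no tool calls -> loop terminates
            return {"messages": messages}
        for call in ai["tool_calls"]:     # one tool execution per tool_call
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})

result = run_agent("What is 2 + 3?", {"add": lambda a, b: a + b})
```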
Usage Example
Basic Usage
With Dynamic Model Selection
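A sketch of the (state, runtime) -> model selector described in the model parameter. Stand-in objects replace real chat models so the selection logic is visible on its own; in real use these would be BaseChatModel instances (e.g., ChatOpenAI) with tools already bound via .bind_tools(), and the model names are purely illustrative.

```python
# Stand-in for a chat model; real code would use e.g. ChatOpenAI(...).bind_tools(tools).
class StubModel:
    def __init__(self, name):
        self.name = name

fast_model = StubModel("gpt-4o-mini")  # hypothetical model names
strong_model = StubModel("gpt-4o")

def select_model(state, runtime):
    # Context-dependent choice: long conversations get the stronger model.
    if len(state["messages"]) > 10:
        return strong_model
    return fast_model

# Would be passed as:
# create_react_agent(model=select_model, tools=tools)
# so the selector runs on every model invocation.
```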
With Structured Output
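A sketch using a TypedDict class, one of the supported schema forms listed under response_format. The WeatherResponse schema and its fields are illustrative, not part of any API.

```python
from typing import TypedDict

# Illustrative schema for the final agent output.
class WeatherResponse(TypedDict):
    city: str
    conditions: str
    temperature_c: float

# Passing it as response_format makes the graph issue one extra LLM call after
# the agent loop and place the result under state["structured_response"]:
# agent = create_react_agent(model, tools, response_format=WeatherResponse)
# agent.invoke(...)["structured_response"]
```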
With Pre-Model Hook (Message Trimming)
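A sketch of a pre-model hook that keeps only the most recent messages, per the "callable that takes state and returns a state update" contract above. The llm_input_messages key is assumed here as the update key that trims what the LLM sees without rewriting the stored history; MAX_MESSAGES is an arbitrary illustrative limit.

```python
MAX_MESSAGES = 6  # illustrative window size

def trim_messages_hook(state):
    # Return a state update; "llm_input_messages" (assumed key) limits what the
    # LLM receives while leaving state["messages"] untouched.
    return {"llm_input_messages": state["messages"][-MAX_MESSAGES:]}

# Would be passed as:
# create_react_agent(model, tools, pre_model_hook=trim_messages_hook)
```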
With Post-Model Hook (Validation)
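A sketch of a post-model hook (only available with version="v2") that inspects the latest model response, as a guardrail might. BLOCKED_PHRASE and the "flagged" state key are illustrative inventions, not part of the langgraph API; a real hook would likely raise an interrupt or rewrite the message.

```python
BLOCKED_PHRASE = "confidential"  # illustrative guardrail rule

def validate_response_hook(state):
    # Inspect the most recent (model-produced) message and return a state update.
    last = state["messages"][-1]
    content = last["content"] if isinstance(last, dict) else last.content
    if BLOCKED_PHRASE in content.lower():
        return {"flagged": True}  # hypothetical custom state key
    return {}

# Would be passed as:
# create_react_agent(model, tools, post_model_hook=validate_response_hook, version="v2")
```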
See Also
- ToolNode - Tool execution node used by create_react_agent
- ValidationNode - Node for validating tool calls
- create_agent - Recommended replacement from the langchain package