We will explore advanced LangGraph concepts using this example: “Recipe Creator + Quality Checker”
You tell the system:
“Create a recipe for chocolate cake”
Flow:
Worker Agent
- Generates recipe steps
- May call a Tool to fetch ingredients from an online ingredients store (a fake API for simplicity)
ToolNode
- Executes the tool call
- Returns the results
Evaluator Agent
- Checks the recipe for clarity, step completeness, and missing ingredients
- Uses structured output (RecipeEvalOutput)
Router
- If the recipe is good → END
- If the recipe needs more work → return to WORKER
┌──────────┐
│  WORKER  │◄────────────────────┐
└────┬─────┘                     │
     │ tool call?                │
     ├── yes ──► ┌──────────┐    │
     │           │  TOOLS   ├────┤  (tool results return to WORKER)
     │           └──────────┘    │
     │                           │
     └── no ───► ┌────────────┐  │
                 │ EVALUATOR  ├──┘  (needs more work → back to WORKER)
                 └──────┬─────┘
                        │ is good / needs user input
                        ▼
                  ┌──────────┐
                  │   END    │
                  └──────────┘
🟩 1. Imports + Setup
from typing import Annotated, TypedDict, List, Any, Dict
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from pydantic import BaseModel, Field
import asyncio
🟩 2. Structured Output Model
The evaluator returns three fields: feedback, is_good, and needs_user_input.
BaseModel comes from Pydantic. By inheriting from BaseModel, this class automatically gets:
Data validation: it checks that the data types match.
Automatic documentation: useful if you are using it in APIs like FastAPI.
Serialization/deserialization: you can easily convert it to/from JSON.
Think of it as a template for structured data about a recipe evaluation.
class RecipeEvalOutput(BaseModel):
feedback: str = Field(description="Comments on the recipe quality")
is_good: bool = Field(description="Does the recipe meet quality criteria?")
needs_user_input: bool = Field(description="True if recipe needs clarification from the user")
Sample output we will get:
{ "feedback": "The cake is too sweet. Consider reducing sugar.", "is_good": false, "needs_user_input": true }
RecipeEvalOutput is a structured way to store the evaluation of a recipe with:
Text feedback (feedback),
A boolean quality check (is_good), and
Whether the user needs to clarify something (needs_user_input).
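To see the validation and serialization Pydantic provides, here is a minimal sketch (the values are illustrative; model_dump_json assumes Pydantic v2):
sample = RecipeEvalOutput(
    feedback="The cake is too sweet. Consider reducing sugar.",
    is_good=False,
    needs_user_input=True,
)
print(sample.model_dump_json())  # serializes the evaluation to JSON
# Passing a value of the wrong type (e.g., is_good="maybe") raises a Pydantic ValidationError.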
🟩 3. Graph State Definition
class State(TypedDict):
messages: Annotated[List[Any], add_messages]
user_goal: str
feedback: str
is_good: bool
needs_user_input: bool
class State(TypedDict):
State is a TypedDict, which comes from Python’s typing module. TypedDict allows you to define a dictionary where each key has a fixed type.
Unlike a Pydantic BaseModel, a TypedDict does not perform runtime validation by default. It’s mostly for type checking (e.g., in IDEs or mypy).
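For instance, this illustrative snippet (the Point class is not part of the workflow) runs without error even though a type is wrong; only a type checker would flag it:
from typing import TypedDict

class Point(TypedDict):
    x: int
    y: int

p: Point = {"x": 1, "y": "oops"}  # no runtime error; mypy or an IDE would complain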
messages: Annotated[List[Any], add_messages]
messages is a list of items. The type is List[Any], meaning it can hold any type of object.
Annotated[..., add_messages] adds metadata to the type.
add_messages is LangGraph’s built-in reducer: when a node returns new messages, they are appended to the existing message list instead of replacing it.
Think of messages as a log of the conversation or system events (see the full example usage below).
🔹 Example Usage:
state: State = {
    "messages": [{"role": "user", "text": "I want a sugar-free cake."}],
    "user_goal": "Bake a sugar-free cake",
    "feedback": "The recipe needs sugar substitutes.",
    "is_good": False,
    "needs_user_input": True
}
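As a minimal sketch of what the add_messages reducer does (calling it directly here, outside a graph), new messages are appended rather than overwriting the list:
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage, AIMessage

existing = [HumanMessage(content="I want a sugar-free cake.")]
update = [AIMessage(content="Here is a first draft of the recipe...")]
merged = add_messages(existing, update)
print(len(merged))  # 2: the AI message is appended, not substituted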
🟩 4. Define a Simple Tool
Fake ingredient lookup tool (sync for now).
def ingredient_lookup(dish: str):
ingredients = {
"chocolate cake": ["flour", "sugar", "cocoa", "eggs", "butter"],
"pancakes": ["flour", "milk", "eggs"]
}
return ingredients.get(dish.lower(), ["salt", "water"])
ingredients.get(key, default) → tries to get the value for key in the dictionary:
If the key exists → returns its list of ingredients.
If the key does not exist → returns the default, here ["salt", "water"].
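A quick sanity check of the lookup and its fallback:
print(ingredient_lookup("Chocolate Cake"))  # ['flour', 'sugar', 'cocoa', 'eggs', 'butter']
print(ingredient_lookup("lasagna"))         # ['salt', 'water'] (the default)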
Wrap it as a LangChain Tool so the ToolNode can execute it:
from langchain.agents import Tool
tools = [
Tool(
name="ingredient_lookup",
func=ingredient_lookup,
description="Look up typical ingredients for a dish"
)
]
🟩 5. LLMs
worker_llm = ChatOpenAI(model="gpt-4o-mini")
worker_llm_tools = worker_llm.bind_tools(tools)
eval_llm = ChatOpenAI(model="gpt-4o-mini")
eval_llm_struct = eval_llm.with_structured_output(RecipeEvalOutput)
worker_llm = ChatOpenAI(model="gpt-4o-mini"):
This LLM instance is the “worker” that does the main task, e.g., answering questions, generating content, or performing actions.
It is general-purpose and can handle free-form text.
Output is mostly textual or tool calls; you don’t need strict structure yet.
eval_llm = ChatOpenAI(model="gpt-4o-mini"):
This LLM instance is the “evaluator”. Its job is not to generate arbitrary text, but to analyze and judge something, like the quality of a recipe.
.with_structured_output(RecipeEvalOutput) tells the model:
“Your output must follow this schema:
feedback (str), is_good (bool), needs_user_input (bool).”
This ensures you always get predictable, machine-readable output that your code can use programmatically.
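For example, invoking the structured evaluator directly returns a typed RecipeEvalOutput object instead of free text (a rough sketch; it needs an OpenAI API key, and the exact feedback text will vary):
result = eval_llm_struct.invoke("Evaluate this recipe: mix flour and water, then bake.")
print(type(result).__name__)   # RecipeEvalOutput
print(result.is_good, result.feedback)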
🟦 6. WORKER NODE
Creates recipe OR calls a tool.
def worker(state: State) -> Dict[str, Any]:
system_prompt = f"""
You are a recipe assistant. Your job is to create a recipe that satisfies the goal:
{state['user_goal']}
You can optionally call the ingredient_lookup tool if you need ingredient suggestions.
Write either:
- A refined recipe
- Or ask a clear question to the user
"""
messages = [SystemMessage(content=system_prompt)] + state["messages"]
response = worker_llm_tools.invoke(messages)
return {"messages": [response]}messagesis a list of messages sent to the LLM.[SystemMessage(content=system_prompt)]→ adds the system prompt at the beginning.+ state["messages"]→ appends previous conversation messages from the state.
This ensures the LLM knows the full context: system instructions + prior conversation.
worker_llm_tools → the LLM bound with tools, like ingredient_lookup.
.invoke(messages) → sends the messages to the LLM and gets the response.
The LLM can either:
Generate a refined recipe, or
Ask a question to clarify the goal.
If needed, it can also call tools during generation (like ingredient lookup).
🟦 7. NODE ROUTER
Decides where worker output goes:
def worker_router(state: State) -> str:
last = state["messages"][-1]
if hasattr(last, "tool_calls") and last.tool_calls:
return "tools"
return "evaluator"The worker_router is essentially a decision function that routes the workflow based on the LLM’s last message:
| Condition | Route |
|---|---|
| Last message has tool_calls → the LLM wants to call a tool | "tools" |
| Last message has no tool calls → LLM output is ready for evaluation | "evaluator" |
🔹 Example Flow
Suppose the last LLM response is:
{ "text": "I need ingredient suggestions for chocolate cake.", "tool_calls": [{"tool": "ingredient_lookup", "arguments": {"dish": "chocolate cake"}}] }
hasattr(last, "tool_calls")→ Truelast.tool_calls→ not emptyworker_router(state)→"tools"
If the last response was just a recipe text:
{
"text": "Here’s a refined sugar-free chocolate cake recipe..."
}
No tool_calls → "evaluator"
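You can exercise worker_router without a real LLM by using a minimal stand-in message object (a hedged sketch; _FakeMsg is only for illustration):
class _FakeMsg:
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

print(worker_router({"messages": [_FakeMsg([{"name": "ingredient_lookup"}])]}))  # "tools"
print(worker_router({"messages": [_FakeMsg()]}))                                 # "evaluator"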
🟦 8. EVALUATOR NODE
Checks quality of recipe.
The user prompt provides context for evaluation:
state['user_goal'] → what the user wants.
last_ai → the assistant’s last response to evaluate.
Instructions for output → the LLM should return structured fields: feedback, is_good, needs_user_input.
eval_llm_struct → the evaluator LLM configured for structured output (using RecipeEvalOutput).
.invoke([...]) → sends system + human messages to the LLM.
result → contains the structured output with:
result.feedback → textual feedback
result.is_good → boolean, is the recipe good?
result.needs_user_input → boolean, does the user need to clarify something?
Because we used .with_structured_output(RecipeEvalOutput), we guarantee these fields are present and typed.
def evaluator(state: State) -> State:
last_ai = state["messages"][-1].content
system_prompt = "You evaluate recipe quality using strict criteria."
user_prompt = f"""
User goal: {state['user_goal']}
Assistant response to evaluate:
{last_ai}
Return:
- feedback
- is_good = True/False
- needs_user_input = True/False
"""
result = eval_llm_struct.invoke([
SystemMessage(content=system_prompt),
HumanMessage(content=user_prompt)
])
return {
    "messages": [{"role": "assistant", "content": f"Evaluator says: {result.feedback}"}],
    "feedback": result.feedback,
    "is_good": result.is_good,
    "needs_user_input": result.needs_user_input
}
🟦 9. ROUTER: Should we continue or end?
eval_router decides what happens after evaluation:
| Condition | Next Step |
|---|---|
| is_good is True → the recipe meets quality criteria | "END" |
| needs_user_input is True → the user must clarify something | "END" |
| Otherwise → recipe not good, no clarification needed | "worker" |
def eval_router(state: State) -> str:
if state["is_good"] or state["needs_user_input"]:
return "END"
return "worker"🟩 10. BUILD THE GRAPH
"worker"→ node that runs the worker function to generate/refine recipes."tools"→ node that runs tools, wrapped inToolNode(tools=tools). For example,ingredient_lookup."evaluator"→ node that runs the evaluator function, which assesses recipe quality and produces structured output.
This tells the graph where to go after the worker node:
"worker"node runs → callworker_router(state)to decide next step.worker_routerreturns either"tools"or"evaluator".The mapping dictionary maps router outputs to nodes in the graph:
"tools"→ go to the"tools"node."evaluator"→ go to the"evaluator"node.
Essentially, this is dynamic branching based on the LLM’s last output.
After running a tool, the graph automatically returns to the worker node.
This allows the worker to use tool results and generate a refined output.
After the evaluator node runs:
Call eval_router(state) to decide the next step.
Mapping dictionary:
"worker" → go back to the "worker" node to refine the recipe.
"END" → terminate the workflow.
This ensures the workflow loops or stops based on evaluation results.
START
|
v
worker
|--(worker_router -> "tools")--> tools --> worker
|--(worker_router -> "evaluator")--> evaluator --(eval_router)--> worker or ENDgraph_builder = StateGraph(State)
graph_builder.add_node("worker", worker)
graph_builder.add_node("tools", ToolNode(tools=tools))
graph_builder.add_node("evaluator", evaluator)
# edges
graph_builder.add_conditional_edges("worker", worker_router, {
"tools": "tools",
"evaluator": "evaluator"
})
graph_builder.add_edge("tools", "worker")
graph_builder.add_conditional_edges("evaluator", eval_router, {
"worker": "worker",
"END": END
})
graph_builder.add_edge(START, "worker")
🟩 11. COMPILE WITH MEMORY
MemorySaver() is an object that keeps track of the state of the graph. Its purpose:
Save intermediate state (like messages, feedback, evaluation flags).
Allow resuming the workflow if it gets interrupted.
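Once the graph is compiled with this checkpointer (the next two lines), the saved state for a thread can be read back with get_state. A minimal, hedged sketch, assuming the thread_id "abc" used later in this tutorial:
config = {"configurable": {"thread_id": "abc"}}
snapshot = graph.get_state(config)            # latest checkpoint for this thread
print(snapshot.values.get("feedback", ""))    # e.g., the last evaluator feedback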
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
🟩 12. ASYNC CHAT FUNCTION
graph → the compiled workflow graph you built earlier.
.ainvoke(...) → asynchronous invocation of the graph:
Takes the initial state.
Uses the config (e.g., thread ID).
Runs through worker → tools → evaluator → routers automatically.
Returns the final updated state after all workflow steps are complete.
await ensures that the function pauses until the workflow finishes, without blocking other tasks in an async environment.
async def chat(user_goal, user_message, thread="abc"):
config = {"configurable": {"thread_id": thread}}
state = {
"messages": [HumanMessage(content=user_message)],
"user_goal": user_goal,
"feedback": "",
"is_good": False,
"needs_user_input": False
}
result = await graph.ainvoke(state, config=config)
return result
🟩 13. TEST IT
async def test():
out = await chat(
"Create a chocolate cake recipe",
"Here is my first attempt."
)
print(out["messages"])
asyncio.run(test())
Congratulations! 🎉 You’ve just walked through building a fully functional, state-driven AI agent workflow: from defining the worker, evaluator, and tools, to wiring everything together in a graph and running it asynchronously.
By the end of this tutorial, you now understand:
How to structure LLM outputs for reliable programmatic decisions.
How to route between nodes dynamically based on AI responses.
How to integrate tools and evaluators into a cohesive agent workflow.
How to maintain state and memory for iterative conversations.
How to expose the workflow asynchronously for real-world applications like chatbots or assistants.
This is a foundational pattern for building more advanced AI agents: from recipe assistants to customer support bots, or any workflow where LLMs, structured outputs, and dynamic routing are needed.

