What is LangGraph?
LangGraph is a Python framework for building agent workflows using graphs.
Instead of writing one long function that tries to manage everything — prompts, memory, routing, tools — LangGraph lets you connect small Python functions together like nodes in a flowchart.
Each node decides:
what to do next
how to update the state
whether to pass control to another node
This makes your AI system:
✔️ easier to debug
✔️ modular (change one step without breaking others)
✔️ predictable (no hidden magic)
✔️ perfect for multi-agent workflows
Think of LangGraph as LEGO for AI agents — you snap pieces (nodes) together to form smart behaviors.
Now, let's build a Smoothie Recommendation Graph 🥤
We’ll build a tiny graph that:
Takes a fruit from a user
Generates a random smoothie name
Returns that smoothie as a chatbot reply
And we’ll do it step-by-step.
Step 0: Imports & Setup
from typing import Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from pydantic import BaseModel
import random

Let's also define some ingredients:
fruits = ["Mango", "Strawberry", "Banana", "Blueberry", "Peach"]
styles = ["Blast", "Fusion", "Storm", "Vortex", "Cooler"]

Step 1: Define the State
LangGraph needs a “State object” to track conversation (or any data).
We use Annotated to tell LangGraph how to update messages:
class SmoothieState(BaseModel):
    messages: Annotated[list, add_messages]

messages → stores our conversation
add_messages → LangGraph's reducer that appends new messages automatically
Step 2: Create the Graph Builder
graph_builder = StateGraph(SmoothieState)

This creates a workspace where we'll add nodes and edges.
Step 3: Create a Node
Our node is a Python function that calls the LLM and returns the updated state:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot_node(state: SmoothieState) -> SmoothieState:
    # Send the conversation so far to the LLM and wrap its reply in new state
    response = llm.invoke(state.messages)
    return SmoothieState(messages=[response])

Add it:

graph_builder.add_node("assistant", chatbot_node)
Step 4: Add Edges (the flow of the graph)
graph_builder.add_edge(START, "assistant")
graph_builder.add_edge("assistant", END)Step 4: Compile the Graph
graph = graph_builder.compile()

Run:

result = graph.invoke({"messages": [{"role": "user", "content": "Give me a smoothie idea!"}]})
print(result["messages"][-1].content)

Now, this might feel overwhelming, so let's understand a few basic terminologies:
1. What is a Node?
A node is simply a Python function that:
receives the state
does some work
returns a new state
2. What is an Edge?
An edge tells LangGraph:
👉 Which node should run next?
You can create:
linear flows
branching flows (see the sketch after this list)
loops
multi-agent flows
Edges = arrows connecting nodes.
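For example, a branching flow uses a conditional edge: a function inspects the state and returns the name of the next node. A minimal sketch (the route function and the "tools" node here are hypothetical):

def route(state: SmoothieState) -> str:
    # Inspect the last message and decide where control goes next
    last = state.messages[-1]
    return "tools" if "help" in last.content else "end"

graph_builder.add_conditional_edges(
    "assistant",
    route,
    {"tools": "tools", "end": END},  # map route() return values to nodes
)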
3. What is State?
The state is the shared memory of the entire graph.
It contains:
messages
variables
intermediate results
Each node receives the state, updates it, and returns it.
4. What is a Reducer?
A reducer tells LangGraph:
👉 “When a node returns new messages, how do I merge them with the existing messages?”
We used LangGraph's built-in add_messages reducer:

Annotated[list, add_messages]

This reducer:
✔️ Appends new messages to the history automatically
✔️ Saves you from manually merging message lists
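Conceptually, the difference looks like this (a rough sketch of the merge rule, not the library's actual implementation):

old = [{"role": "user", "content": "Give me a smoothie idea!"}]
new = [{"role": "assistant", "content": "Try a Mango Blast!"}]

# Without a reducer, a node's return value replaces the old list:
messages = new

# With add_messages, the new messages are appended to the history:
messages = old + new  # roughly what add_messages does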
5. What is the Compiled Graph?
LangGraph turns:
nodes
edges
state rules
into a ready-to-run agent workflow.
┌──────────┐
│  START   │
└────┬─────┘
     │
     ▼
┌────────────────┐
│   assistant    │
│  (ChatOpenAI)  │
└──────┬─────────┘
       │
       ▼
┌──────────┐
│   END    │
└──────────┘
Now we have a basic understanding of LangGraph. Let's proceed and build a simple LangGraph agent with tools and memory.
We will make a Smart Assistant that can:
Chat normally
Use a simple search tool
Use a custom math tool
Decide when to use a tool
Store memory across multiple turns
🔧 Tools We Provide to the LLM
We use two simple tools:
1. Fake Search Tool (returns hardcoded info)
def fake_search(query: str):
    """Pretend search engine."""
    return f"Search result for '{query}': Paris is the capital of France."

2. Math Tool
def add_numbers(a: int, b: int):
    return f"The sum is {a + b}"

We wrap them as LangChain tools:
from langchain_core.tools import Tool, StructuredTool

tool_search = Tool(
    name="search",
    func=fake_search,
    description="Use this when the user asks factual questions."
)

# add_numbers takes two arguments, so it needs StructuredTool,
# which exposes a proper {a: int, b: int} schema to the LLM
tool_add = StructuredTool.from_function(
    func=add_numbers,
    name="add_numbers",
    description="Add two numbers a and b."
)
tools = [tool_search, tool_add]
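As an aside, newer LangChain versions also provide the @tool decorator, which derives the tool's schema from the function signature and docstring; an equivalent sketch:

from langchain_core.tools import tool

@tool
def add_numbers(a: int, b: int) -> str:
    """Add two numbers and return the sum as text."""
    return f"The sum is {a + b}"

# The decorated function is itself a tool object:
# tools = [tool_search, add_numbers]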
🔹 Step 1: Define State

from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

🔹 Step 2: Build the Graph
llm.bind_tools(tools) makes the LLM aware of these tools so it can request them in its responses.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolNode, tools_condition
# Bind tools to LLM
llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools(tools)
# Start the builder
graph_builder = StateGraph(State)

🔹 Step 3: Add Nodes
Node 1 — Chatbot Node
LLM decides:
respond normally
OR call a tool
def chatbot_node(state: State):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

Add it:

graph_builder.add_node("chatbot", chatbot_node)

Node 2 — Tool Handling Node
Automatically runs whichever tool the LLM requested. ToolNode(tools=tools) is the actual code that executes a tool when the LLM asks for it.
graph_builder.add_node("tools", ToolNode(tools=tools))🔹 Step 4: Add Edges
Chatbot → maybe Tools
The conditional edge tools_condition tells the graph to go to the tools node only if the LLM actually called a tool.
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,  # routes to "tools" if the LLM made a tool call, else to END
)
Why must tools_condition connect "chatbot" → "tools"?
You already bound tools to the LLM using llm.bind_tools(tools), but binding tools does NOT execute them. It only gives the LLM the ability to request them.
The chatbot runs first: the chatbot node receives the user input, and the LLM decides whether to call a tool or just respond normally.
Then tools_condition checks the model output: if a tool call exists, go to the tools node; if not, end the graph. (A simplified sketch of this routing logic follows below.)
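Conceptually, tools_condition behaves like this hand-rolled router (a simplified sketch, not the library's exact source):

from langgraph.graph import END

def route_tools(state: State):
    # Look at the most recent LLM message
    last_message = state["messages"][-1]
    # If the LLM requested one or more tool calls, go to the tool node
    if getattr(last_message, "tool_calls", None):
        return "tools"
    # Otherwise the chatbot's answer is final
    return END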
After tool runs → go back to chatbot
graph_builder.add_edge("tools", "chatbot")
Start the graph at chatbot:
graph_builder.add_edge(START, "chatbot")

🔹 Step 5: Add Memory (Checkpointing)
This saves state across turns.
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

🔹 Step 6: Chat Function
config = {"configurable": {"thread_id": "demo1"}}
def chat(user_input: str, history):
    result = graph.invoke(
        {"messages": [{"role": "user", "content": user_input}]},
        config=config
    )
    return result["messages"][-1].content
🧪 Testing the Agent (Example Output)

User:
"What is the capital of France?"
What Happens:
LLM → decides it needs the search tool
ToolNode → runs fake_search()
Chatbot → responds:
Paris is the capital of France.

            ┌────────────┐
START ───►  │  chatbot   │
            │ (LLM node) │
            └──────┬─────┘
                   │
                   │ tools_condition = TRUE
                   ▼
            ┌────────────┐
            │   tools    │
            │ (ToolNode) │
            └──────┬─────┘
                   │
                   │ tool_result
                   ▼
            ┌────────────┐
            │  chatbot   │ ←— LLM generates final answer
            └──────┬─────┘
                   │
                   ▼
              (END / RETURN)
✅ Does the Chatbot Only Decide Whether to Use a Tool?
No — the chatbot always produces a normal LLM output.
But the chatbot output may contain either:
1️⃣ A direct answer
Example:
{
    "content": "The capital of France is Paris.",
    "tool_calls": []
}

No tool call → chatbot's answer is final.
2️⃣ A tool_call
Example:
{
    "content": "To answer, I need to call the calculator.",
    "tool_calls": [
        {"name": "calculator", "arguments": {"expression": "2+2"}}
    ]
}
Tool call exists → chatbot's output is not final.
Instead, graph routing sends the flow to:
👉 ToolNode
Now we will look into asynchronous LangGraph: what ainvoke() really does, when to use arun(), and why async matters.
LangGraph can run:
synchronous code → one step at a time
asynchronous code → multiple things at the same time (non-blocking)
Many LLM systems need async because:
Tools might call external APIs (search, weather, DB queries)
Tools can run in parallel
LLM calls can be awaited without blocking the entire program
UI frameworks (FastAPI, Gradio, Streamlit) support async
🟦 1. Asynchronous Tools (ToolNode)
✔️ Sync tool execution
result = tool.run(inputs)

Blocks execution until the tool finishes.
Good for simple scripts.
✔️ Async tool execution
result = await tool.arun(inputs)

Non-blocking.
The graph can process other nodes while waiting.
Required when the tool does async I/O (see the sketch after this list), like:
API calls
Network requests
Database queries
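For instance, a tool defined as an async function (a minimal sketch; the weather lookup is hypothetical and the I/O is simulated with asyncio.sleep):

import asyncio
from langchain_core.tools import tool

@tool
async def fetch_weather(city: str) -> str:
    """Pretend async weather lookup."""
    # Simulate non-blocking I/O such as a network request
    await asyncio.sleep(1)
    return f"The weather in {city} is sunny."

# Later: result = await fetch_weather.arun("Paris")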
🟦 2. Asynchronous Graph Execution
LangGraph provides two ways to run your graph:
✔️ Sync execution
result = graph.invoke(state)

Runs step-by-step.
You must wait until the whole graph completes.
Fine for simple console applications or testing.
✔️ Async execution
result = await graph.ainvoke(state)

Runs nodes asynchronously (usage sketch after the list below).
Allows the graph to:
Use async LLMs (await llm.ainvoke)
Use async tools (await tool.arun)
Run nodes in parallel when possible
Not block your server/UI
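A minimal usage sketch, reusing the compiled graph and config from the agent above:

import asyncio

async def main():
    result = await graph.ainvoke(
        {"messages": [{"role": "user", "content": "What is 2 + 2?"}]},
        config=config,
    )
    print(result["messages"][-1].content)

asyncio.run(main())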
Feature | Sync (invoke) | Async (ainvoke)
---|---|---
Blocking | Yes | No
Parallel execution | ❌ No | ✅ Yes
Async tools | ❌ No | ✅ Yes
Web server friendly | ❌ No | ✅ Yes
Best for | local tests | full apps, production
Async LangGraph allows:
✔️ Parallel Node Execution
If multiple nodes are eligible to run in the same “super-step”, they run in parallel.
Example:

Node A → Node B
Node A → Node C

If B and C do not depend on each other, the async graph runs them concurrently, conceptually:
await asyncio.gather(run_B(), run_C())
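In LangGraph terms, that fan-out is just two edges leaving the same node. A minimal sketch (node_a, node_b, node_c are hypothetical node functions):

from langgraph.graph import StateGraph, START, END

builder = StateGraph(State)
builder.add_node("A", node_a)
builder.add_node("B", node_b)
builder.add_node("C", node_c)

builder.add_edge(START, "A")
# Two edges out of "A": "B" and "C" become eligible in the same super-step
builder.add_edge("A", "B")
builder.add_edge("A", "C")
builder.add_edge("B", END)
builder.add_edge("C", END)

fan_out = builder.compile()
# result = await fan_out.ainvoke({"messages": []})  # B and C run concurrently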
🥳 Tutorial Complete: Part 1 Success!

You've successfully completed the first part of our LangGraph tutorial!
You now have a solid basic understanding of the core concepts that power LangGraph. This foundation is crucial for building robust, stateful, and multi-step AI applications.
What You've Achieved
Grasped the fundamentals of stateful computation.
Learned the importance of managing conversational flow.
Understood the basic structure of a LangGraph application.
Next Steps: The Journey Continues!
In the next tutorial (Part 2), we will dive much deeper into the exciting features of LangGraph.

