In this tutorial, we aim to understand how LangGraph lets us manage conversation flows in a structured way, while also providing the power to "time travel" through checkpoints. By building a chatbot that integrates a free Gemini model and a Wikipedia tool, we can add several steps to a dialogue, record each checkpoint, replay the full state history, and even resume from a prior state. This hands-on approach lets us see, in real time, how LangGraph's design supports tracking and manipulating conversation progress with clarity and control. Check out the FULL CODES here.
!pip -q install -U langgraph langchain langchain-google-genai google-generativeai typing_extensions
!pip -q install "requests==2.32.4"
import os
import json
import textwrap
import getpass
import time
from typing import Annotated, List, Dict, Any, Optional
from typing_extensions import TypedDict
from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import ToolNode, tools_condition
import requests
from requests.adapters import HTTPAdapter, Retry

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("🔑 Enter your Google API Key (Gemini): ")

llm = init_chat_model("google_genai:gemini-2.0-flash")
We start by installing the required libraries, setting up our Gemini API key, and importing all the necessary modules. We then initialize the Gemini model via LangChain so that we can use it as the core LLM in our LangGraph workflow.
WIKI_SEARCH_URL = "https://en.wikipedia.org/w/api.php"

_session = requests.Session()
_session.headers.update({
    "User-Agent": "LangGraph-Colab-Demo/1.0 (contact: [email protected])",
    "Accept": "application/json",
})
retry = Retry(
    total=5, connect=5, read=5, backoff_factor=0.5,
    status_forcelist=(429, 500, 502, 503, 504),
    allowed_methods=("GET", "POST"),
)
_session.mount("https://", HTTPAdapter(max_retries=retry))
_session.mount("http://", HTTPAdapter(max_retries=retry))
def _wiki_search_raw(query: str, limit: int = 3) -> List[Dict[str, str]]:
    """
    Use the MediaWiki search API with:
    - origin='*' (good practice for CORS)
    - Polite UA + retries
    Returns a compact list of {title, snippet_html, url}.
    """
    params = {
        "action": "query",
        "list": "search",
        "format": "json",
        "srsearch": query,
        "srlimit": limit,
        "srprop": "snippet",
        "utf8": 1,
        "origin": "*",
    }
    r = _session.get(WIKI_SEARCH_URL, params=params, timeout=15)
    r.raise_for_status()
    data = r.json()
    out = []
    for item in data.get("query", {}).get("search", []):
        title = item.get("title", "")
        page_url = f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}"
        snippet = item.get("snippet", "")
        out.append({"title": title, "snippet_html": snippet, "url": page_url})
    return out
@tool
def wiki_search(query: str) -> List[Dict[str, str]]:
    """Search Wikipedia and return up to 3 results with title, snippet_html, and url."""
    try:
        results = _wiki_search_raw(query, limit=3)
        return results if results else [{"title": "No results", "snippet_html": "", "url": ""}]
    except Exception as e:
        return [{"title": "Error", "snippet_html": str(e), "url": ""}]

TOOLS = [wiki_search]
We set up a Wikipedia search tool with a custom session, retries, and a polite user agent. We define _wiki_search_raw to query the MediaWiki API and then wrap it as a LangChain tool, allowing us to call it seamlessly inside our LangGraph workflow.
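To see what the post-processing inside _wiki_search_raw produces, here is a small offline sketch that applies the same transformation to a canned MediaWiki-style payload (the sample data below is made up for illustration; no network call is made):

```python
# Canned response in the shape the MediaWiki search API returns.
sample = {
    "query": {
        "search": [
            {"title": "LangGraph", "snippet": "LangGraph is a <span>library</span> ..."},
            {"title": "LangChain", "snippet": "LangChain is a framework ..."},
        ]
    }
}

def compact_results(data):
    """Mirror the post-processing step: keep title, raw snippet HTML, and a page URL."""
    out = []
    for item in data.get("query", {}).get("search", []):
        title = item.get("title", "")
        out.append({
            "title": title,
            "snippet_html": item.get("snippet", ""),
            # Wikipedia page URLs use underscores instead of spaces.
            "url": f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}",
        })
    return out

print(compact_results(sample)[0]["url"])  # https://en.wikipedia.org/wiki/LangGraph
```

Returning the raw snippet HTML (rather than stripping tags) keeps the function cheap; the LLM handles the markup fine when summarizing.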
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
llm_with_tools = llm.bind_tools(TOOLS)

SYSTEM_INSTRUCTIONS = textwrap.dedent("""
You are ResearchBuddy, a careful research assistant.
- If the user asks you to "research", "find information", "latest", "web", or references a library/framework/product,
  you SHOULD call the `wiki_search` tool at least once before finalizing your answer.
- When you call tools, be concise in the text you produce around the call.
- After receiving tool results, cite at least the page titles you used in your summary.
""").strip()

def chatbot(state: State) -> Dict[str, Any]:
    """Single step: call the LLM (with tools bound) on the current messages."""
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(TOOLS))
graph_builder.add_edge(START, "chatbot")
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")

memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)
We define our graph state to store the running message thread and bind our Gemini model to the wiki_search tool so it can call it when needed. We add a chatbot node and a tools node, wire them together with conditional edges, and enable checkpointing with an in-memory saver. We then compile the graph so we can add steps, replay history, and resume from any checkpoint.
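The add_messages annotation is what makes each node's return value append to the message thread instead of overwriting it. The same idea can be sketched with a toy reducer in plain Python (apply_update and add_messages_toy are illustrative stand-ins, not LangGraph internals):

```python
def add_messages_toy(existing, update):
    """Reducer: append the update to the accumulated list instead of replacing it."""
    return list(existing) + list(update)

def apply_update(state, update, reducers):
    """Merge a node's partial update into the state, using a reducer where one is registered."""
    merged = dict(state)
    for key, value in update.items():
        reducer = reducers.get(key)
        merged[key] = reducer(state.get(key, []), value) if reducer else value
    return merged

state = {"messages": [{"role": "system", "content": "You are ResearchBuddy."}]}
# A node returns only its new message; the reducer appends it to the thread.
state = apply_update(
    state,
    {"messages": [{"role": "user", "content": "Hi"}]},
    {"messages": add_messages_toy},
)
print(len(state["messages"]))  # 2
```

Without a reducer, returning `{"messages": [...]}` from a node would clobber the history; with it, every checkpoint sees the full conversation so far.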
def print_last_message(event: Dict[str, Any]):
    """Pretty-print the last message in an event if available."""
    if "messages" in event and event["messages"]:
        msg = event["messages"][-1]
        try:
            if isinstance(msg, BaseMessage):
                msg.pretty_print()
            else:
                role = msg.get("role", "unknown")
                content = msg.get("content", "")
                print(f"\n[{role.upper()}]\n{content}\n")
        except Exception:
            print(str(msg))

def show_state_history(cfg: Dict[str, Any]) -> List[Any]:
    """Print a concise view of checkpoints; return the list as well."""
    history = list(graph.get_state_history(cfg))
    print("\n=== 📜 State history (most recent first) ===")
    for i, st in enumerate(history):
        n = st.next
        n_txt = f"{n}" if n else "()"
        print(f"{i:02d}) NumMessages={len(st.values.get('messages', []))} Next={n_txt}")
    print("=== End history ===\n")
    return history

def pick_checkpoint_by_next(history: List[Any], node_name: str = "tools") -> Optional[Any]:
    """Pick the first checkpoint whose `next` includes a given node (e.g., 'tools')."""
    for st in history:
        nxt = tuple(st.next) if st.next else tuple()
        if node_name in nxt:
            return st
    return None
We add utility functions to make our LangGraph workflow easier to inspect and control. We use print_last_message to neatly display the latest response, show_state_history to list all saved checkpoints, and pick_checkpoint_by_next to locate a checkpoint where the graph is about to run a particular node, such as the tools step.
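The selection logic in pick_checkpoint_by_next is easy to verify offline with stand-in snapshot objects (the Snap namedtuple here is a mock of LangGraph's state snapshot, which exposes `next` and `config` attributes; the configs are made-up placeholders):

```python
from collections import namedtuple
from typing import Any, List, Optional

# Mock snapshot: `next` is the tuple of nodes about to run, `config` points at the checkpoint.
Snap = namedtuple("Snap", ["next", "config"])

def pick_checkpoint_by_next(history: List[Any], node_name: str = "tools") -> Optional[Any]:
    """Pick the first (most recent) checkpoint whose `next` includes a given node."""
    for st in history:
        nxt = tuple(st.next) if st.next else tuple()
        if node_name in nxt:
            return st
    return None

# History is most-recent-first, mirroring graph.get_state_history().
history = [
    Snap(next=(), config={"ckpt": 2}),          # finished turn, nothing left to run
    Snap(next=("tools",), config={"ckpt": 1}),  # about to execute the tools node
    Snap(next=("chatbot",), config={"ckpt": 0}),
]
print(pick_checkpoint_by_next(history).config)  # {'ckpt': 1}
```

Because history is most-recent-first, this returns the latest moment the graph was poised to call a tool, which is exactly the kind of state worth resuming from.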
config = {"configurable": {"thread_id": "demo-thread-1"}}

first_turn = {
    "messages": [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "I'm learning LangGraph. Could you do some research on it for me?"},
    ]
}
print("\n==================== 🟢 STEP 1: First user turn ====================")
events = graph.stream(first_turn, config, stream_mode="values")
for ev in events:
    print_last_message(ev)

second_turn = {
    "messages": [
        {"role": "user", "content": "Ya. Maybe I'll build an agent with it!"}
    ]
}
print("\n==================== 🟢 STEP 2: Second user turn ====================")
events = graph.stream(second_turn, config, stream_mode="values")
for ev in events:
    print_last_message(ev)
We simulate two user interactions in the same thread by streaming events through the graph. We first provide system instructions and ask the assistant to research LangGraph, then follow up with a second user message about building an agent with it. Each step is checkpointed, allowing us to replay or resume from these states later.
print("\n==================== 🔁 REPLAY: Full state history ====================")
history = show_state_history(config)

to_replay = pick_checkpoint_by_next(history, node_name="tools")
if to_replay is None:
    to_replay = history[min(2, len(history) - 1)]

print("Chosen checkpoint to resume from:")
print("  Next:", to_replay.next)
print("  Config:", to_replay.config)

print("\n==================== ⏪ RESUME from chosen checkpoint ====================")
for ev in graph.stream(None, to_replay.config, stream_mode="values"):
    print_last_message(ev)

MANUAL_INDEX = None
if MANUAL_INDEX is not None and 0 <= MANUAL_INDEX < len(history):
    chosen = history[MANUAL_INDEX]
    print(f"\n==================== 🧭 MANUAL RESUME @ index {MANUAL_INDEX} ====================")
    print("Next:", chosen.next)
    print("Config:", chosen.config)
    for ev in graph.stream(None, chosen.config, stream_mode="values"):
        print_last_message(ev)

print("\n✅ Done. You added steps, replayed history, and resumed from a prior checkpoint.")
We replay the full checkpoint history to see how our conversation evolves across steps and to identify a useful point to resume from. We then "time travel" by restarting from a specific checkpoint, and optionally from any manual index, so we can continue the dialogue exactly from that saved state.
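The key semantic of resuming is that it forks the thread: the graph continues from the chosen checkpoint without disturbing what was recorded after it. A toy checkpoint store makes this concrete (purely illustrative; not how LangGraph stores state internally):

```python
# Toy "thread": each checkpoint records the message list at that point in time,
# oldest first for readability.
checkpoints = [
    ["sys"],                                     # after the system prompt
    ["sys", "user1"],                            # after the first user turn
    ["sys", "user1", "ai1"],                     # after the first answer
    ["sys", "user1", "ai1", "user2", "ai2"],     # after the second exchange
]

def resume_from(index, new_message):
    """Fork the thread at an earlier checkpoint and continue with a new message."""
    forked = list(checkpoints[index]) + [new_message]
    return forked

print(resume_from(2, "user2b"))  # ['sys', 'user1', 'ai1', 'user2b']
```

The original later checkpoints remain intact, which is what makes replaying history and branching "what-if" continuations safe and reproducible.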
In conclusion, we have gained a clearer picture of how LangGraph's checkpointing and time-travel capabilities bring flexibility and transparency to conversation management. By stepping through multiple user turns, replaying state history, and resuming from earlier points, we experience firsthand the power of this framework for building reliable research agents and assistants. This workflow is not just a demo but a foundation we can extend into more complex applications, where reproducibility and traceability matter as much as the answers themselves.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.