NextTech News
AI & Machine Learning

A Coding Guide to Exploring nanobot’s Full Agent Pipeline, from Wiring Up Tools and Memory to Skills, Subagents, and Cron Scheduling

By NextTech · March 29, 2026 · 23 Mins Read


In this tutorial, we take a deep dive into nanobot, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than simply installing and running it out of the box, we crack open the hood and manually recreate each of its core subsystems (the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling) so we understand exactly how they work. We wire everything up with OpenAI's gpt-4o-mini as our LLM provider, enter our API key securely via the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. By the end, we don't just know how to use nanobot; we understand how to extend it with custom tools, skills, and our own agent architectures.

import sys
import os
import subprocess


def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'═' * width}")
    print(f"  {emoji}  {title}")
    print(f"{'═' * width}\n")


def info(msg):
    print(f"  ℹ️  {msg}")


def success(msg):
    print(f"  ✅ {msg}")


def code_block(code):
    print("  ┌─────────────────────────────────────────────────")
    for line in code.strip().split("\n"):
        print(f"  │ {line}")
    print("  └─────────────────────────────────────────────────")


section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")


info("Installing nanobot-ai from PyPI (latest stable)...")
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")


import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f"  📌 nanobot-ai version: {nanobot_version}")


section("STEP 2 · Secure OpenAI API Key Input", "🔑")


info("Your API key will NOT be printed or stored in notebook output.")
info("It is held only in memory for this session.\n")


try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in Colab → 🔑 Secrets panel on the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")


os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY


import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated — connection successful!")
except Exception as e:
    print(f"  ❌ API key validation failed: {e}")
    print("     Please restart and enter a valid key.")
    sys.exit(1)


section("STEP 3 · Configuring nanobot for OpenAI", "⚙️")


import json
from pathlib import Path


NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)


WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)


config = {
    "providers": {
        "openai": {
            "apiKey": OPENAI_API_KEY
        }
    },
    "agents": {
        "defaults": {
            "model": "openai/gpt-4o-mini",
            "maxTokens": 4096,
            "workspace": str(WORKSPACE)
        }
    },
    "tools": {
        "restrictToWorkspace": True
    }
}


config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")


agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🐈, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step-by-step.\n"
)


soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)


user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)


memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories saved yet._\n")


success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f"     📄 {f.relative_to(NANOBOT_HOME)}")


section("STEP 4 · nanobot Architecture Deep Dive", "🏗️")


info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:


 ┌──────────────────────────────────────────────────────────┐
 │                    USER INTERFACES                       │
 │         CLI  ·  Telegram  ·  WhatsApp  ·  Discord        │
 └──────────────────┬───────────────────────────────────────┘
                    │  InboundMessage / OutboundMessage
 ┌──────────────────▼───────────────────────────────────────┐
 │                    MESSAGE BUS                           │
 │          publish_inbound() / publish_outbound()          │
 └──────────────────┬───────────────────────────────────────┘
                    │
 ┌──────────────────▼───────────────────────────────────────┐
 │                  AGENT LOOP (loop.py)                    │
 │    ┌─────────┐  ┌──────────┐  ┌────────────────────┐    │
 │    │ Context │→ │   LLM    │→ │  Tool Execution    │    │
 │    │ Builder │  │  Call    │  │  (if tool_calls)   │    │
 │    └─────────┘  └──────────┘  └────────┬───────────┘    │
 │         ▲                              │  loop back     │
 │         │          ◄───────────────────┘  until done    │
 │    ┌────┴────┐  ┌──────────┐  ┌────────────────────┐    │
 │    │ Memory  │  │  Skills  │  │   Subagent Mgr     │    │
 │    │ Store   │  │  Loader  │  │   (spawn tasks)    │    │
 │    └─────────┘  └──────────┘  └────────────────────┘    │
 └──────────────────────────────────────────────────────────┘
                    │
 ┌──────────────────▼───────────────────────────────────────┐
 │               LLM PROVIDER LAYER                         │
 │     OpenAI · Anthropic · OpenRouter · DeepSeek · ...     │
 └──────────────────────────────────────────────────────────┘


 The Agent Loop iterates up to 40 times (configurable):
   1. ContextBuilder assembles system prompt + memory + skills + history
   2. The LLM is called with the tool definitions
   3. If the response has tool_calls → execute tools, append results, loop
   4. If the response is plain text → return it as the final answer
""")
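The MessageBus layer from the diagram is the one subsystem this notebook never recreates. As a rough illustration of the publish/subscribe pattern it implies (hypothetical names and shapes, not nanobot's actual API), a minimal queue-backed sketch could look like this:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class InboundMessage:
    channel: str   # e.g. "cli", "telegram"
    chat_id: str
    text: str


class MessageBus:
    """Tiny pub/sub sketch: interfaces enqueue inbound messages,
    the agent loop consumes them and enqueues outbound replies."""

    def __init__(self):
        self.inbound: asyncio.Queue = asyncio.Queue()
        self.outbound: asyncio.Queue = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage):
        await self.inbound.put(msg)

    async def consume_inbound(self) -> InboundMessage:
        return await self.inbound.get()
```

Decoupling interfaces from the agent loop this way is what lets one loop serve CLI, Telegram, and WhatsApp front-ends at once.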

We set up the full foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace, create the core bootstrap files (AGENTS.md, SOUL.md, USER.md, and MEMORY.md), and study the high-level architecture so we understand how the framework is organized before moving into the implementation.

section("STEP 5 · The Agent Loop — Core Concept in Action", "🔄")


info("We'll manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")


import json as _json
import datetime


TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression to evaluate, e.g. '2**10 + 42'"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative file path within the workspace"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Relative file path"},
                    "content": {"type": "string", "description": "Content to write"}
                },
                "required": ["path", "content"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Save a fact to the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fact": {"type": "string", "description": "The fact to remember"}
                },
                "required": ["fact"]
            }
        }
    }
]


def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call — mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        # Body restored (it was missing): return the current timestamp as a string.
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            # eval with no builtins and a small whitelist of safe functions
            result = eval(expr, {"__builtins__": {}}, {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"

    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found — {arguments.get('path')}"

    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"

    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        current = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(current + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"

    return f"Unknown tool: {name}"




def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.

    The loop:
      1. Build context (system prompt + bootstrap files + memory)
      2. Call the LLM with tools
      3. If tool_calls → execute → append results → loop
      4. If text response → return final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())

    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")

    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]

    if verbose:
        print(f"  📨 User: {user_message}")
        print(f"  🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"  ── Iteration {iteration}/{max_iterations} ──")

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )

        choice = response.choices[0]
        message = choice.message

        if message.tool_calls:
            if verbose:
                print(f"  🔧 LLM requested {len(message.tool_calls)} tool call(s):")

            messages.append(message.model_dump())

            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}

                if verbose:
                    print(f"     → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")

                result = execute_tool(fname, args)

                if verbose:
                    print(f"     ← {result[:100]}{'...' if len(result) > 100 else ''}")

                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })

            if verbose:
                print()

        else:
            final = message.content or ""
            if verbose:
                print(f"  💬 Agent: {final}\n")
            return final

    return "⚠️ Max iterations reached with no final response."




print("─" * 60)
print("  DEMO 1: Time-aware calculation with tool chaining")
print("─" * 60)
result1 = agent_loop(
    "What is the current time? Also, calculate 2^20 + 42 for me."
)


print("─" * 60)
print("  DEMO 2: File creation + memory storage")
print("─" * 60)
result2 = agent_loop(
    "Write a haiku about AI agents to a file called 'haiku.txt'. "
    "Then remember that I enjoy poetry about technology."
)

We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples involving time lookups, calculations, file writing, and memory saving, so we can see the loop operate exactly like the internal nanobot flow.

section("STEP 6 · Memory System — Persistent Agent Memory", "🧠")


info("""nanobot's memory system (memory.py) uses two storage mechanisms:


 1. MEMORY.md  — Long-term facts (always loaded into context)
 2. YYYY-MM-DD.md — Daily journal entries (loaded for recent days)


 Memory consolidation runs periodically to summarize and compress
 old entries, keeping the context window manageable.
""")


mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("  📂 Current MEMORY.md contents:")
print("  ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"  │ {line}")
print("  └─────────────────────────────────────────────\n")


today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log — {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")


print("\n  📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f"     {'📄' if item.suffix == '.md' else '📝'} {rel} ({size} bytes)")
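The consolidation pass mentioned above is not exercised in this walkthrough. As a rough sketch of the idea, assuming the bullet-list MEMORY.md format used here (a hypothetical helper, not nanobot's actual implementation, which summarizes with the LLM rather than by counting), one could compact old entries like this:

```python
def consolidate_memory(text: str, keep_last: int = 5) -> str:
    """Compress a MEMORY.md body: keep the newest bullets verbatim,
    collapse older ones into a single archive line."""
    lines = text.strip().split("\n")
    header = [l for l in lines if not l.startswith("- ")]
    bullets = [l for l in lines if l.startswith("- ")]
    if len(bullets) <= keep_last:
        return text  # nothing to compress
    archived = len(bullets) - keep_last
    summary = f"- [consolidated] {archived} older memories archived"
    return "\n".join(header + [summary] + bullets[-keep_last:]) + "\n"
```

A real consolidation pass would feed the archived bullets back through the model for an actual summary; the structural move (bounded recent detail plus a compressed remainder) is the same.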


section("STEP 7 · Skills System — Extending Agent Capabilities", "🎯")


info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:
 - A name and description (for the LLM to decide when to use it)
 - Instructions the LLM follows when the skill is activated
 - Some skills are 'always loaded'; others are loaded on demand


Let's create a custom skill and see how the agent uses it.
""")


skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)


data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill

## Description
Analyze data, compute statistics, and provide insights from numbers.

## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions

## Always Available
false
""")


review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill

## Description
Review code for bugs, security issues, and best practices.

## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale

## Always Available
true
""")


success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f"     🎯 {f.name}")


print("\n  🧪 Testing skill-aware agent interaction:")
print("  " + "─" * 56)


skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"


result3 = agent_loop(
    skills_context +  # include the skills in the prompt so the model can apply them
    "\n\nReview this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)
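Above we inject the raw skill files wholesale; nanobot's actual SkillsLoader decides per-skill whether to load it. As a rough sketch of parsing the simple skill-file format we just wrote (a hypothetical `parse_skill` helper, not the library's real parser), one could extract the sections and the 'Always Available' flag like this:

```python
def parse_skill(markdown: str) -> dict:
    """Parse the '## Heading' sections of a skill file into a dict,
    turning the 'Always Available' section into a boolean flag."""
    sections, current = {}, None
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    skill = {k: "\n".join(v).strip() for k, v in sections.items()}
    skill["always"] = skill.get("Always Available", "false").lower() == "true"
    return skill
```

With such a parser, only skills flagged `always` would be injected into every prompt, while the rest would be loaded when their description matches the task.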

We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after earlier interactions. We then extend the agent with a skills system by creating markdown-based skill files that describe specialized behaviors such as data analysis and code review. Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be guided through modular capability descriptions.

section("STEP 8 · Custom Tool Creation — Extending the Agent", "🔧")


info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:
 - A name and description
 - A JSON Schema for parameters
 - An execute() method


Let's create custom tools and wire them into our agent loop.
""")
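Before wiring tools in with plain dicts and an if/elif dispatcher as we do below, it helps to see the registry shape the description implies. This is a minimal sketch under stated assumptions (hypothetical `Tool`/`ToolRegistry` names; nanobot's real classes may differ), showing how name, description, schema, and execute() fit together:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str
    parameters: dict            # JSON Schema for the arguments
    execute: Callable[..., str]


class ToolRegistry:
    """Minimal registry sketch: holds tools, exposes OpenAI-style
    function specs, and dispatches calls by name."""

    def __init__(self):
        self._tools: dict = {}

    def register(self, tool: Tool):
        self._tools[tool.name] = tool

    def specs(self) -> list:
        # Shape matches the tool definitions we pass to the chat API.
        return [{"type": "function",
                 "function": {"name": t.name,
                              "description": t.description,
                              "parameters": t.parameters}}
                for t in self._tools.values()]

    def execute(self, name: str, arguments: dict) -> str:
        if name not in self._tools:
            return f"Unknown tool: {name}"
        return self._tools[name].execute(**arguments)
```

The dict-plus-dispatcher approach we use next is the same idea flattened out; a registry just makes registration and lookup uniform as the tool count grows.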


import random


CUSTOM_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll one or more dice with a given number of sides.",
            "parameters": {
                "type": "object",
                "properties": {
                    "num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
                    "sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
                },
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "text_stats",
            "description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to analyze"}
                },
                "required": ["text"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "generate_password",
            "description": "Generate a random secure password.",
            "parameters": {
                "type": "object",
                "properties": {
                    "length": {"type": "integer", "description": "Password length", "default": 16}
                },
                "required": []
            }
        }
    }
]


_original_execute = execute_tool


def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"

    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)  # ~200 words per minute
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })

    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        # Note: random.choice is not cryptographically secure; use secrets.choice for real passwords.
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"

    return _original_execute(name, arguments)


execute_tool = execute_tool_extended


ALL_TOOLS = TOOLS + CUSTOM_TOOLS


def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]

    if verbose:
        print(f"  📨 User: {user_message}")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"  ── Iteration {iteration}/{max_iterations} ──")

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message

        if message.tool_calls:
            if verbose:
                print(f"  🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"     → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"     ← {result[:120]}{'...' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f"  💬 Agent: {final}\n")
            return final

    return "⚠️ Max iterations reached."




print("─" * 60)
print("  DEMO 3: Custom tools in action")
print("─" * 60)
result4 = agent_loop_v2(
    "Roll 3 six-sided dice for me, then generate a 20-character password, "
    "and finally analyze the text stats of this sentence: "
)


We expand the agent's capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.

section("STEP 9 · Multi-Turn Conversation — Session Management", "💬")


info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored
in JSON files and loaded into context for each new message.


Let's simulate a multi-turn conversation with persistent state.
""")

class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())


session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"


def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)

    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)

    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history

    if verbose:
        print(f"  👤 You: {user_message}")
        print(f"     (conversation history: {len(history)} messages)")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""

    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)

    if verbose:
        print(f"  🐈 nanobot: {reply}\n")
    return reply




print("─" * 60)
print("  DEMO 4: Multi-turn conversation with memory")
print("─" * 60)


chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")


success("Session persisted with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f"  📄 Session file: {session_file.name} ({len(session_data)} messages)")
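One thing the minimal session manager skips is history truncation: since every turn is replayed into context, long sessions will eventually overflow the model's window. A rough character-budget sketch (a hypothetical `trim_history` helper; nanobot's real session handling may differ) shows the usual fix of keeping only the most recent turns:

```python
def trim_history(history: list, max_chars: int = 8000) -> list:
    """Keep the most recent turns whose combined content length
    fits within the budget, preserving chronological order."""
    kept, total = [], 0
    for turn in reversed(history):          # walk newest → oldest
        total += len(turn.get("content", ""))
        if total > max_chars:
            break                           # budget exhausted; drop the rest
        kept.append(turn)
    return list(reversed(kept))
```

Calling `trim_history(session_mgr.get_history(SESSION_KEY))` before building `messages` would bound the context size; token-based budgeting would be more precise than characters.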


section("STEP 10 · Subagent Spawning — Background Task Delegation", "🚀")

info("""nanobot's SubagentManager (agent/subagent.py) allows the main agent
to delegate tasks to independent background workers. Each subagent:
 - Gets its own tool registry (no SpawnTool, to prevent recursion)
 - Runs up to 15 iterations independently
 - Reports results back via the MessageBus

Let's simulate this pattern with concurrent tasks.
""")
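The simulation below concentrates on the LLM calls themselves, so the reporting channel stays implicit. That channel can be sketched separately with an `asyncio.Queue` standing in for nanobot's MessageBus. The class and method names here (`TinyMessageBus`, `publish`, `next_message`) are illustrative stand-ins, not nanobot's actual API:

```python
import asyncio


class TinyMessageBus:
    """A toy stand-in for nanobot's MessageBus: subagents publish
    results, and the main loop consumes them as inbound messages."""

    def __init__(self):
        self.queue = asyncio.Queue()

    async def publish(self, message: dict):
        await self.queue.put(message)

    async def next_message(self) -> dict:
        return await self.queue.get()


async def demo_bus():
    bus = TinyMessageBus()
    # A finished subagent reports back through the bus...
    await bus.publish({"from": "subagent:abc123", "result": "done"})
    # ...and the main agent picks the report up as an inbound message.
    return await bus.next_message()


report = asyncio.run(demo_bus())
print(report["from"], "->", report["result"])
```

The point of the queue is decoupling: the subagent never calls back into the main loop directly, it just drops a message, which is what lets the main agent treat subagent results like any other inbound event.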


import asyncio
import uuid




async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f"  🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
             "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )

    result = response.choices[0].message.content or ""
    if verbose:
        print(f"  ✅ Subagent [{task_id[:8]}] finished: {result[:80]}...")
    return {"task_id": task_id, "goal": goal, "result": result}




async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently — mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))

    print(f"\n  🚀 Spawning {len(tasks)} subagents concurrently...\n")
    results = await asyncio.gather(*tasks)
    return results




goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]


try:
    # Inside a notebook an event loop is already running, so we need nest_asyncio
    loop = asyncio.get_running_loop()
    import nest_asyncio
    nest_asyncio.apply()
    subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))
except RuntimeError:
    # No running loop (plain script): asyncio.run works directly
    subagent_results = asyncio.run(spawn_subagents(goals))
except ModuleNotFoundError:
    print("  ℹ️  Running subagents sequentially (install nest_asyncio for async)...\n")
    subagent_results = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
                {"role": "user", "content": goal}
            ],
            max_tokens=256
        )
        r = response.choices[0].message.content or ""
        print(f"  ✅ Subagent [{task_id[:8]}] finished: {r[:80]}...")
        subagent_results.append({"task_id": task_id, "goal": goal, "result": r})


print(f"\n  📋 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
    print(f"\n  ── Result {i} ──")
    print(f"  Goal: {r['goal'][:60]}")
    print(f"  Answer: {r['result'][:200]}")

We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We replay that history on each call so the agent remembers details from earlier in the interaction and responds coherently and statefully. After that, we model subagent spawning by launching concurrent background tasks that each pursue a focused goal, which shows how nanobot delegates parallel work to independent agent workers.
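One practical concern this session pattern glosses over is that the history list grows without bound: every turn is appended and replayed in full. A minimal windowing helper could cap what gets sent to the model (a sketch, not part of nanobot; `windowed_history` and its `keep_last` parameter are our own names):

```python
def windowed_history(history: list[dict], keep_last: int = 20) -> list[dict]:
    """Keep only the most recent turns, trimming so the window never
    opens on an orphaned assistant reply."""
    if len(history) <= keep_last:
        return history
    trimmed = history[-keep_last:]
    # If the window starts mid-pair (on an assistant turn), drop that turn
    # so the model never sees a reply without the message that prompted it.
    while trimmed and trimmed[0]["role"] == "assistant":
        trimmed = trimmed[1:]
    return trimmed


# Example: 6 alternating turns, keep the last 3 → window realigns to a user turn.
turns = [{"role": r, "content": f"t{i}"} for i, r in enumerate(["user", "assistant"] * 3)]
window = windowed_history(turns, keep_last=3)
print(len(window), window[0]["role"])  # 2 user
```

Dropping old turns wholesale is the crudest option; nanobot's memory consolidation (summarizing old turns into MEMORY.md) is the more durable complement to a window like this.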

section("STEP 11 · Scheduled Tasks — The Cron Pattern", "⏰")

info("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an
InboundMessage and publishes it to the MessageBus.

Let's demonstrate the pattern with a simulated scheduler.
""")


from datetime import timedelta




class SimpleCronJob:
    """Mirrors nanobot's cron job structure."""
    def __init__(self, name: str, message: str, interval_seconds: int):
        self.id = str(uuid.uuid4())[:8]
        self.name = name
        self.message = message
        self.interval = interval_seconds
        self.enabled = True
        self.last_run = None
        self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)




jobs = [
   SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
   SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
   SimpleCronJob("health_check", "Run a system health check.", 3600),
]


print("  📋 Registered Cron Jobs:")
print("  ┌────────┬────────────────────┬──────────┬──────────────────────┐")
print("  │ ID     │ Name               │ Interval │ Next Run             │")
print("  ├────────┼────────────────────┼──────────┼──────────────────────┤")
for job in jobs:
    interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
    print(f"  │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print("  └────────┴────────────────────┴──────────┴──────────────────────┘")


print(f"\n  ⏰ Simulating cron trigger for '{jobs[2].name}'...")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)
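Above we fire one job by hand; the dispatch logic that APScheduler automates can be sketched as a simple polling tick that fires every due job and reschedules it. This is a simplification of what CronService does, with an illustrative `fire` callback standing in for publishing an InboundMessage, and a minimal dataclass standing in for the job objects:

```python
import datetime
from dataclasses import dataclass, field
from datetime import timedelta
from typing import Optional


@dataclass
class Job:
    """Minimal stand-in for a cron job record (illustrative)."""
    name: str
    message: str
    interval: int                       # seconds between runs
    enabled: bool = True
    last_run: Optional[datetime.datetime] = None
    next_run: datetime.datetime = field(default_factory=datetime.datetime.now)


def tick(jobs: list[Job], now: datetime.datetime, fire) -> list[str]:
    """Fire every enabled job whose next_run is due, then reschedule it."""
    fired = []
    for job in jobs:
        if job.enabled and job.next_run <= now:
            fire(job.message)           # in nanobot: publish an InboundMessage
            job.last_run = now
            job.next_run = now + timedelta(seconds=job.interval)
            fired.append(job.name)
    return fired


now = datetime.datetime.now()
demo_jobs = [
    Job("health_check", "Run a health check.", 3600, next_run=now - timedelta(seconds=1)),
    Job("briefing", "Morning briefing.", 86400, next_run=now + timedelta(hours=1)),
]
fired = tick(demo_jobs, now, fire=lambda msg: None)
print(fired)  # ['health_check']
```

A real scheduler replaces the polling loop with timer-based callbacks, but the contract is the same: a due job produces a message, and the agent loop handles it like any other inbound input.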


section("STEP 12 · Full Agent Pipeline — End-to-End Demo", "🎬")

info("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")

print("─" * 60)
print("  DEMO 5: Complex multi-step research task")
print("─" * 60)


complex_result = agent_loop_v2(
    "I need you to help me with a small project:\n"
    "1. First, check the current time\n"
    "2. Write a short project plan to 'project_plan.txt' about building "
    "a personal AI assistant (3-4 bullet points)\n"
    "3. Remember that my current project is 'building a personal AI assistant'\n"
    "4. Read back the project plan file to confirm it was saved correctly\n"
    "Then summarize everything you did.",
    max_iterations=15
)


section("STEP 13 · Final Workspace Summary", "📊")

print("  📁 Full workspace state after tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        total_files += 1
        total_bytes += size
        icon = {"md": "📄", "txt": "📝", "json": "📋"}.get(item.suffix.lstrip("."), "📎")
        print(f"     {icon} {rel} ({size:,} bytes)")

print("\n  ── Summary ──")
print(f"  Total files: {total_files}")
print(f"  Total size:  {total_bytes:,} bytes")
print(f"  Config:      {config_path}")
print(f"  Workspace:   {WORKSPACE}")

print("\n  🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("  ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"  │ {line}")
print("  └─────────────────────────────────────────────")


section("COMPLETE · What's Next?", "🎉")

print("""  You've explored the core internals of nanobot! Here's what to try next:

 🔹 Run the real CLI agent:
    nanobot onboard && nanobot agent

 🔹 Connect to Telegram:
    Add a bot token to config.json and run `nanobot gateway`

 🔹 Enable web search:
    Add a Brave Search API key under tools.web.search.apiKey

 🔹 Try MCP integration:
    nanobot supports Model Context Protocol servers for external tools

 🔹 Explore the source (~4K lines):
    https://github.com/HKUDS/nanobot

 🔹 Key files to read:
    • agent/loop.py     — The agent iteration loop
    • agent/context.py  — Prompt assembly pipeline
    • agent/memory.py   — Persistent memory system
    • agent/tools/      — Built-in tool implementations
    • agent/subagent.py — Background task delegation
""")
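The Telegram and web-search suggestions above both touch config.json. A sketch of how those keys might nest follows; the tools.web.search.apiKey path comes from the text above, but the Telegram key path is our guess, and the authoritative schema lives in nanobot's own config loader:

```python
import json

# Illustrative config shape only. The "tools.web.search.apiKey" path is named
# in the tutorial; the "channels.telegram" nesting is a hypothetical layout,
# so verify both against nanobot's config loader before relying on them.
config = {
    "channels": {
        "telegram": {"token": "YOUR_BOT_TOKEN"}   # then run `nanobot gateway`
    },
    "tools": {
        "web": {"search": {"apiKey": "YOUR_BRAVE_SEARCH_KEY"}}
    },
}
print(json.dumps(config, indent=2))
```

Keeping credentials in a single JSON file is convenient for a local agent, but anything you commit or share should read these values from environment variables instead.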

We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the trigger of an automated agent task. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together on a realistic task. At the end, we inspect the final workspace state, review the saved memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.

In conclusion, we walked through every major layer of nanobot's architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn't need hundreds of thousands of lines of code: the patterns we implemented here (context assembly, tool dispatch, memory consolidation, and background task delegation) are the same patterns that power far larger systems, just stripped down to their essence. We now have a working mental model of agentic AI internals and a codebase small enough to read in one sitting, which makes nanobot an ideal starting point for anyone looking to build, customize, or research AI agents from the ground up.


Check out the Full Codes here. Also, feel free to follow us on Twitter and don't forget to join our 120k+ ML SubReddit and subscribe to our Newsletter. Wait! Are you on Telegram? Now you can join us on Telegram as well.


Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.

