In this tutorial, we dive into the essence of Agentic AI by uniting LangChain, AutoGen, and Hugging Face into a single, fully functional framework that runs without paid APIs. We begin by setting up a lightweight open-source pipeline and then progress through structured reasoning, multi-step workflows, and collaborative agent interactions. As we move from LangChain chains to simulated multi-agent systems, we see how reasoning, planning, and execution can blend seamlessly into autonomous, intelligent behavior, fully within our control and environment. Check out the FULL CODES here.
import warnings
warnings.filterwarnings('ignore')
from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json

print("🚀 Loading models...\n")
pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",
    max_length=200,
    temperature=0.7
)
llm = HuggingFacePipeline(pipeline=pipe)
print("✓ Models loaded!\n")
We begin by setting up the environment and bringing in all the necessary libraries. We initialize a Hugging Face FLAN-T5 pipeline as our local language model, ensuring it can generate coherent, contextually rich text. We confirm that everything loads successfully, laying the groundwork for the agentic experiments that follow. Check out the FULL CODES here.
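As an optional sanity check (not part of the original script), a couple of direct calls like the sketch below can confirm that both the raw pipeline and the LangChain wrapper respond. The generated text will vary from run to run, and the .invoke() call assumes a recent LangChain release where LLMs expose the Runnable interface.

# Optional sanity check: query the raw pipeline and the LangChain wrapper directly.
# Outputs are model-dependent; this only verifies that generation works end to end.
raw = pipe("Summarize: Agentic AI combines planning, reasoning, and tool use.")[0]["generated_text"]
print("Pipeline check:", raw)
wrapped = llm.invoke("List two benefits of open-source language models.")
print("LangChain wrapper check:", wrapped)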
def demo_langchain_basics():
    print("="*70)
    print("DEMO 1: LangChain - Intelligent Prompt Chains")
    print("="*70 + "\n")
    prompt = PromptTemplate(
        input_variables=["task"],
        template="Task: {task}\n\nProvide a detailed step-by-step solution:"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    task = "Create a Python function to calculate fibonacci sequence"
    print(f"Task: {task}\n")
    result = chain.run(task=task)
    print(f"LangChain Response:\n{result}\n")
    print("✓ LangChain demo complete\n")

def demo_langchain_multi_step():
    print("="*70)
    print("DEMO 2: LangChain - Multi-Step Reasoning")
    print("="*70 + "\n")
    planner = PromptTemplate(
        input_variables=["goal"],
        template="Break down this goal into 3 steps: {goal}"
    )
    executor = PromptTemplate(
        input_variables=["step"],
        template="Explain how to execute this step: {step}"
    )
    plan_chain = LLMChain(llm=llm, prompt=planner)
    exec_chain = LLMChain(llm=llm, prompt=executor)
    goal = "Build a machine learning model"
    print(f"Goal: {goal}\n")
    plan = plan_chain.run(goal=goal)
    print(f"Plan:\n{plan}\n")
    print("Executing first step...")
    execution = exec_chain.run(step="Collect and prepare data")
    print(f"Execution:\n{execution}\n")
    print("✓ Multi-step reasoning complete\n")
We explore LangChain's capabilities by constructing intelligent prompt templates that allow our model to reason through tasks. We build both a simple one-step chain and a multi-step reasoning flow that breaks complex goals into clear subtasks. We observe how LangChain enables structured thinking and turns plain instructions into detailed, actionable responses. Check out the FULL CODES here.
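If we want the plan and its execution linked automatically rather than hard-coding the first step, a minimal sketch along these lines (reusing the llm defined above; not part of the original demo) chains the two prompts with SimpleSequentialChain, which feeds the planner's single output straight in as the executor's input.

from langchain.chains import SimpleSequentialChain

# Minimal sketch: reuse the planner/executor prompts from demo 2, but let
# SimpleSequentialChain pass the generated plan directly into the execution prompt.
planner_prompt = PromptTemplate(
    input_variables=["goal"],
    template="Break down this goal into 3 steps: {goal}"
)
executor_prompt = PromptTemplate(
    input_variables=["step"],
    template="Explain how to execute this step: {step}"
)
seq_chain = SimpleSequentialChain(
    chains=[LLMChain(llm=llm, prompt=planner_prompt), LLMChain(llm=llm, prompt=executor_prompt)],
    verbose=True
)
print(seq_chain.run("Build a machine learning model"))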
class SimpleAgent:
    def __init__(self, name: str, role: str, llm_pipeline):
        self.name = name
        self.role = role
        self.pipe = llm_pipeline
        self.memory = []

    def process(self, message: str) -> str:
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]['generated_text']
        self.memory.append({"user": message, "agent": response})
        return response

    def __repr__(self):
        return f"Agent({self.name}, role={self.role})"

def demo_simple_agents():
    print("="*70)
    print("DEMO 3: Simple Multi-Agent System")
    print("="*70 + "\n")
    researcher = SimpleAgent("Researcher", "research specialist", pipe)
    coder = SimpleAgent("Coder", "Python developer", pipe)
    reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
    print("Agents created:", researcher, coder, reviewer, "\n")
    task = "Create a function to sort a list"
    print(f"Task: {task}\n")
    print(f"[{researcher.name}] Researching...")
    research = researcher.process(f"What is the best approach to: {task}")
    print(f"Research: {research[:100]}...\n")
    print(f"[{coder.name}] Coding...")
    code = coder.process(f"Write Python code to: {task}")
    print(f"Code: {code[:100]}...\n")
    print(f"[{reviewer.name}] Reviewing...")
    review = reviewer.process(f"Review this approach: {code[:50]}")
    print(f"Review: {review[:100]}...\n")
    print("✓ Multi-agent workflow complete\n")
We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role, such as researcher, coder, or reviewer. We let these agents collaborate on a simple coding task, exchanging information and building upon one another's outputs. We see how a coordinated multi-agent workflow can emulate teamwork, creativity, and self-organization in an automated setting. Check out the FULL CODES here.
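The same idea generalizes to any ordered team of agents. The small helper below is an illustrative sketch (not part of the original demo) that relays each agent's reply into the next agent's prompt, truncating intermediate text to keep the FLAN-T5 context short.

# Illustrative relay helper: pass each agent's output into the next agent's prompt.
def run_relay(agents, task):
    message = task
    for agent in agents:
        message = agent.process(f"As a {agent.role}, handle this: {message[:200]}")
        print(f"[{agent.name}] {message[:80]}...")
    return message

# Example usage with the agents created in demo 3:
# run_relay([researcher, coder, reviewer], "Create a function to sort a list")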
def demo_autogen_conceptual():
    print("="*70)
    print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
    print("="*70 + "\n")
    agent_config = {
        "agents": [
            {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
            {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
            {"name": "Executor", "type": "executor", "role": "Runs code"}
        ],
        "workflow": [
            "1. UserProxy receives task",
            "2. Assistant generates solution",
            "3. Executor tests solution",
            "4. Feedback loop until complete"
        ]
    }
    print(json.dumps(agent_config, indent=2))
    print("\n📝 AutoGen Key Features:")
    print("  • Automated agent chat conversations")
    print("  • Code execution capabilities")
    print("  • Human-in-the-loop support")
    print("  • Multi-agent collaboration")
    print("  • Tool/function calling\n")
    print("✓ AutoGen concepts explained\n")

class MockLLM:
    def __init__(self):
        self.responses = {
            "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
            "explain": "This is a recursive implementation of the Fibonacci sequence.",
            "review": "The code is correct but could be optimized with memoization.",
            "default": "I understand. Let me help with that task."
        }

    def generate(self, prompt: str) -> str:
        prompt_lower = prompt.lower()
        if "code" in prompt_lower or "function" in prompt_lower:
            return self.responses["code"]
        elif "explain" in prompt_lower:
            return self.responses["explain"]
        elif "review" in prompt_lower:
            return self.responses["review"]
        return self.responses["default"]

def demo_autogen_with_mock():
    print("="*70)
    print("DEMO 5: AutoGen with Custom LLM Backend")
    print("="*70 + "\n")
    mock_llm = MockLLM()
    conversation = [
        ("User", "Create a fibonacci function"),
        ("CodeAgent", mock_llm.generate("write code for fibonacci")),
        ("ReviewAgent", mock_llm.generate("review this code")),
    ]
    print("Simulated AutoGen Multi-Agent Conversation:\n")
    for speaker, message in conversation:
        print(f"[{speaker}]")
        print(f"{message}\n")
    print("✓ AutoGen simulation complete\n")
We illustrate AutoGen's core idea by defining a conceptual configuration of agents and their workflow. We then simulate an AutoGen-style conversation using a custom mock LLM that generates realistic yet controllable responses. We note how this framework lets multiple agents reason, test, and refine ideas collaboratively without relying on any external APIs. Check out the FULL CODES here.
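For comparison, here is a hedged sketch of what the same loop looks like with AutoGen's real agent classes. It requires an OpenAI-compatible endpoint, so the model name and key below are placeholders rather than part of this tutorial's no-API setup, and the chat call is left commented out.

# Sketch of the equivalent flow with AutoGen's real agent classes (requires an
# OpenAI-compatible endpoint; the model name and api_key below are placeholders).
llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "PLACEHOLDER"}]}
assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
# user_proxy.initiate_chat(assistant, message="Create a fibonacci function and test it.")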
def demo_hybrid_system():
    print("="*70)
    print("DEMO 6: Hybrid LangChain + Multi-Agent System")
    print("="*70 + "\n")
    reasoning_prompt = PromptTemplate(
        input_variables=["problem"],
        template="Analyze this problem: {problem}\nWhat are the key steps?"
    )
    reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    problem = "Optimize a slow database query"
    print(f"Problem: {problem}\n")
    print("[LangChain] Analyzing problem...")
    analysis = reasoning_chain.run(problem=problem)
    print(f"Analysis: {analysis[:120]}...\n")
    print(f"[{planner.name}] Creating plan...")
    plan = planner.process(f"Plan how to: {problem}")
    print(f"Plan: {plan[:120]}...\n")
    print(f"[{executor.name}] Executing...")
    result = executor.process("Execute: Add database indexes")
    print(f"Result: {result[:120]}...\n")
    print("✓ Hybrid system complete\n")

if __name__ == "__main__":
    print("="*70)
    print("🤖 ADVANCED AGENTIC AI TUTORIAL")
    print("AutoGen + LangChain + HuggingFace")
    print("="*70 + "\n")
    demo_langchain_basics()
    demo_langchain_multi_step()
    demo_simple_agents()
    demo_autogen_conceptual()
    demo_autogen_with_mock()
    demo_hybrid_system()
    print("="*70)
    print("🎉 TUTORIAL COMPLETE!")
    print("="*70)
    print("\n📚 What You Learned:")
    print("  ✓ LangChain prompt engineering and chains")
    print("  ✓ Multi-step reasoning with LangChain")
    print("  ✓ Building custom multi-agent systems")
    print("  ✓ AutoGen architecture and concepts")
    print("  ✓ Combining LangChain + agents")
    print("  ✓ Using HuggingFace models (no API needed!)")
    print("\n💡 Key Takeaway:")
    print("  You can build powerful agentic AI systems without expensive APIs!")
    print("  Combine LangChain's chains with multi-agent architectures for")
    print("  intelligent, autonomous AI systems.")
    print("="*70 + "\n")
We combine LangChain's structured reasoning with our simple agentic system to create a hybrid intelligent framework. We let LangChain analyze problems while the agents plan and execute corresponding actions in sequence. We conclude the demonstration by running all modules together, showcasing how open-source tools can integrate seamlessly to build adaptive, autonomous AI systems.
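As a possible extension (a sketch only, assuming the chain and agents from demo 6 are lifted to module scope), the hand-off between analysis, planning, and execution can be made programmatic instead of relying on hard-coded strings.

# Sketch: feed the LangChain analysis into the planner agent, then the resulting
# plan into the executor agent, truncating intermediate text for the small model.
def run_hybrid(problem):
    analysis = reasoning_chain.run(problem=problem)
    plan = planner.process(f"Plan how to: {problem}. Analysis: {analysis[:150]}")
    return executor.process(f"Execute the first step of this plan: {plan[:150]}")

# print(run_hybrid("Optimize a slow database query"))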
In conclusion, we see how Agentic AI moves from concept to reality through a simple, modular design. We combine the reasoning depth of LangChain with the cooperative power of agents to build adaptable systems that think, plan, and act independently. The result is a clear demonstration that powerful, autonomous AI systems can be built without expensive infrastructure, using open-source tools, creative design, and a bit of experimentation.