AI & Machine Learning

o1 Style Thinking with Chain-of-Thought Reasoning using Mirascope

By NextTech · July 19, 2025 · 7 Mins Read


In this tutorial, we'll explore how to implement Chain-of-Thought (CoT) reasoning using the Mirascope library and Groq's LLaMA 3 model. Rather than having the model jump straight to an answer, CoT reasoning encourages it to break the problem down into logical steps, much like how a human would solve it. This approach improves accuracy and transparency, and helps tackle complex, multi-step tasks more reliably. We'll guide you through setting up the schema, defining step-by-step reasoning calls, generating final answers, and visualizing the thinking process in a structured way.

We'll be asking the LLM a relative-speed question: "If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?"
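Before handing the question to the LLM, it helps to know the expected answer. The meeting time follows from simple relative-speed arithmetic, sketched here as a reference (no model calls involved):

```python
# Closed-form solution to the train problem (no LLM needed).
head_start_km = 60 * 1.0               # Train A travels alone from 9:00 to 10:00
gap_at_10am = 300 - head_start_km      # 240 km separate the trains at 10:00
closing_speed = 60 + 90                # km/h, the trains approach each other
hours_to_meet = gap_at_10am / closing_speed   # 240 / 150 = 1.6 h
minutes_to_meet = hours_to_meet * 60          # 96 minutes after 10:00 AM

print(f"The trains meet {minutes_to_meet:.0f} minutes after 10:00 AM, i.e. at 11:36 AM")
```

A correct CoT trace from the model should arrive at the same 11:36 AM answer.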

Installing the dependencies

!pip install "mirascope[groq]"

(The datetime module used later ships with Python's standard library, so it does not need to be installed separately.)

Groq API Key

For this tutorial, we require a Groq API key to make LLM calls. You can get one at https://console.groq.com/keys
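One common way to make the key available is to set the GROQ_API_KEY environment variable before making any calls, which is where Mirascope's Groq integration looks for it. The value below is a placeholder, not a real key:

```python
import os

# Placeholder for illustration; substitute the real key you generated at
# https://console.groq.com/keys. Mirascope's Groq provider reads it from
# the GROQ_API_KEY environment variable.
os.environ["GROQ_API_KEY"] = "gsk_your_key_here"
```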

Importing the libraries & defining a Pydantic schema

This section imports the required libraries and defines a COTResult Pydantic model. The schema structures each reasoning step with a title, content, and a next_action flag to indicate whether the model should continue reasoning or return the final answer.

from datetime import datetime
from typing import Literal

from mirascope.core import groq
from pydantic import BaseModel, Field


# Shared conversation history, appended to after each run
history: list[dict] = []


class COTResult(BaseModel):
    title: str = Field(..., description="The title of the step")
    content: str = Field(..., description="The output content of the step")
    next_action: Literal["continue", "final_answer"] = Field(
        ..., description="The next action to take"
    )
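With json_mode=True, the model is asked to emit JSON matching this schema, which Mirascope then validates into a COTResult. Here is a stdlib-only sketch of a well-formed payload (the step text is invented for illustration):

```python
import json

# Example of the JSON shape the model is expected to return for one step.
raw = (
    '{"title": "Set up the problem", '
    '"content": "Train A has a 60 km head start by 10:00 AM.", '
    '"next_action": "continue"}'
)
step = json.loads(raw)

# The same three fields COTResult declares, with next_action constrained
# to one of the two allowed literal values.
assert set(step) == {"title", "content", "next_action"}
assert step["next_action"] in ("continue", "final_answer")
print(step["title"])
```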

Defining Step-wise Reasoning and Final Answer Functions

These functions form the core of the Chain-of-Thought (CoT) reasoning workflow. The cot_step function allows the model to think iteratively by reviewing prior steps and deciding whether to continue or conclude. This enables deeper reasoning, especially for multi-step problems. The final_answer function consolidates all reasoning into a single, focused response, making the output clear and ready for end-user consumption. Together, they help the model approach complex tasks more logically and transparently.

@groq.call("llama-3.3-70b-versatile", json_mode=True, response_model=COTResult)
def cot_step(prompt: str, step_number: int, previous_steps: str) -> str:
    return f"""
    You are an expert AI assistant that explains your reasoning step by step.
    For this step, provide a title that describes what you're doing, along with the content.
    Decide if you need another step or if you're ready to give the final answer.

    Guidelines:
    - Use AT MOST 5 steps to derive the answer.
    - Be aware of your limitations as an LLM and what you can and cannot do.
    - In your reasoning, include exploration of alternative answers.
    - Consider that you may be wrong, and if you are wrong in your reasoning, where it would be.
    - Fully test all other possibilities.
    - YOU ARE ALLOWED TO BE WRONG. When you say you are re-examining
        - Actually re-examine, and use another approach to do so.
        - Do not just say you are re-examining.

    IMPORTANT: Do not use code blocks or programming examples in your reasoning. Explain your process in plain language.

    This is step number {step_number}.

    Question: {prompt}

    Previous steps:
    {previous_steps}
    """


@groq.call("llama-3.3-70b-versatile")
def final_answer(prompt: str, reasoning: str) -> str:
    return f"""
    Based on the following chain of reasoning, provide a final answer to the question.
    Only provide the text response without any titles or preambles.
    Retain any formatting as instructed by the original prompt, such as exact formatting for free response or multiple choice.

    Question: {prompt}

    Reasoning:
    {reasoning}

    Final Answer:
    """

Generating and Displaying Chain-of-Thought Responses

This section defines two key functions to manage the full Chain-of-Thought reasoning loop:

  • generate_cot_response handles the iterative reasoning process. It sends the user query to the model step by step, tracks each step's title, content, and response time, and stops when the model signals it has reached the final answer or after a maximum of 5 steps. It then calls final_answer to produce a clear conclusion based on the accumulated reasoning.
  • display_cot_response neatly prints the step-by-step breakdown along with the time taken for each step, followed by the final answer and the total processing time.

Together, these functions help visualize how the model reasons through a complex prompt and allow for better transparency and debugging of multi-step outputs.

def generate_cot_response(
    user_query: str,
) -> tuple[list[tuple[str, str, float]], float]:
    steps: list[tuple[str, str, float]] = []
    total_thinking_time: float = 0.0
    step_count: int = 1
    reasoning: str = ""
    previous_steps: str = ""

    while True:
        start_time: datetime = datetime.now()
        cot_result = cot_step(user_query, step_count, previous_steps)
        end_time: datetime = datetime.now()
        thinking_time: float = (end_time - start_time).total_seconds()

        steps.append(
            (
                f"Step {step_count}: {cot_result.title}",
                cot_result.content,
                thinking_time,
            )
        )
        total_thinking_time += thinking_time

        reasoning += f"\n{cot_result.content}\n"
        previous_steps += f"\n{cot_result.content}\n"

        if cot_result.next_action == "final_answer" or step_count >= 5:
            break

        step_count += 1

    # Generate the final answer
    start_time = datetime.now()
    final_result: str = final_answer(user_query, reasoning).content
    end_time = datetime.now()
    thinking_time = (end_time - start_time).total_seconds()
    total_thinking_time += thinking_time

    steps.append(("Final Answer", final_result, thinking_time))

    return steps, total_thinking_time


def display_cot_response(
    steps: list[tuple[str, str, float]], total_thinking_time: float
) -> None:
    for title, content, thinking_time in steps:
        print(f"{title}:")
        print(content.strip())
        print(f"**Thinking time: {thinking_time:.2f} seconds**\n")

    print(f"**Total thinking time: {total_thinking_time:.2f} seconds**")
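The per-step timing used above follows a simple stopwatch pattern around each call. A standalone sketch, with time.sleep standing in for an LLM call, shows the mechanics without needing an API key:

```python
import time
from datetime import datetime

steps: list[tuple[str, str, float]] = []

# Time a single (mocked) reasoning step, exactly as generate_cot_response does.
start = datetime.now()
time.sleep(0.05)  # stand-in for a cot_step LLM call
elapsed = (datetime.now() - start).total_seconds()
steps.append(("Step 1: Example", "dummy reasoning content", elapsed))

for title, content, thinking_time in steps:
    print(f"{title}:")
    print(content.strip())
    print(f"**Thinking time: {thinking_time:.2f} seconds**\n")
```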

Running the Chain-of-Thought Workflow

The run function initiates the full Chain-of-Thought (CoT) reasoning process by sending a multi-step math word problem to the model. It begins by printing the user's question, then uses generate_cot_response to compute a step-by-step reasoning trace. These steps, along with the total processing time, are displayed using display_cot_response.

Finally, the function logs both the question and the model's final answer into a shared history list, preserving the full interaction for future reference or auditing. This function ties together all earlier components into a complete, user-facing reasoning flow.

def run() -> None:
    question: str = "If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?"
    print("(User):", question)
    # Generate the CoT response
    steps, total_thinking_time = generate_cot_response(question)
    display_cot_response(steps, total_thinking_time)

    # Add the interaction to the history
    history.append({"role": "user", "content": question})
    history.append(
        {"role": "assistant", "content": steps[-1][1]}
    )  # Add only the final answer to the history


# Run the function

run()

Check out the Codes. All credit for this research goes to the researchers of this project.


I'm a Civil Engineering Graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in Data Science, especially Neural Networks and their application in various areas.
