In this tutorial, we explore how to build an autonomous agent that aligns its actions with ethical and organizational values. We use open-source Hugging Face models running locally in Colab to simulate a decision-making process that balances goal achievement with moral reasoning. Through this implementation, we show how to combine a "policy" model that proposes actions with an "ethics judge" model that evaluates and aligns them, letting us see value alignment in practice without relying on any external APIs.
!pip install -q transformers torch accelerate sentencepiece

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoModelForCausalLM

def generate_seq2seq(model, tokenizer, prompt, max_new_tokens=128):
    # Move inputs to the model's device so this works on CPU or GPU.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            top_p=0.9,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id if tokenizer.eos_token_id is not None else tokenizer.pad_token_id,
        )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def generate_causal(model, tokenizer, prompt, max_new_tokens=128):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            top_p=0.9,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id if tokenizer.eos_token_id is not None else tokenizer.pad_token_id,
        )
    # Return only the newly generated continuation, not the prompt itself.
    full_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return full_text[len(prompt):].strip()
We begin by setting up our environment and importing the essential libraries from Hugging Face. We define two helper functions that generate text with sequence-to-sequence and causal models, which lets us easily produce both reasoning-based and generative outputs later in the tutorial.
policy_model_name = "distilgpt2"
judge_model_name = "google/flan-t5-small"

policy_tokenizer = AutoTokenizer.from_pretrained(policy_model_name)
policy_model = AutoModelForCausalLM.from_pretrained(policy_model_name)

judge_tokenizer = AutoTokenizer.from_pretrained(judge_model_name)
judge_model = AutoModelForSeq2SeqLM.from_pretrained(judge_model_name)

device = "cuda" if torch.cuda.is_available() else "cpu"
policy_model = policy_model.to(device)
judge_model = judge_model.to(device)

# distilgpt2 has no pad token by default, so fall back to the EOS token.
if policy_tokenizer.pad_token is None:
    policy_tokenizer.pad_token = policy_tokenizer.eos_token
if judge_tokenizer.pad_token is None:
    judge_tokenizer.pad_token = judge_tokenizer.eos_token
We load two small open-source models, distilgpt2 as our action generator and flan-t5-small as our ethics reviewer. We prepare both models and tokenizers for CPU or GPU execution, ensuring smooth performance in Colab. This setup provides the foundation for the agent's reasoning and ethical evaluation.
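As a quick sanity check (an optional step beyond the original walkthrough), we can run each helper once with the freshly loaded models; the prompt below is just an illustrative placeholder:

# Optional smoke test: confirm both generation helpers run end to end.
test_prompt = "List one step a bank outreach agent could take to build customer trust:"
print(generate_causal(policy_model, policy_tokenizer, test_prompt, max_new_tokens=30))
print(generate_seq2seq(judge_model, judge_tokenizer, test_prompt, max_new_tokens=30))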
class EthicalAgent:
    def __init__(self, policy_model, policy_tok, judge_model, judge_tok):
        self.policy_model = policy_model
        self.policy_tok = policy_tok
        self.judge_model = judge_model
        self.judge_tok = judge_tok

    def propose_actions(self, user_goal, context, n_candidates=3):
        # Ask the policy model for several candidate next actions.
        base_prompt = (
            "You are an autonomous operations agent. "
            "Given the goal and context, list a specific next action you would take:\n\n"
            f"Goal: {user_goal}\nContext: {context}\nAction:"
        )
        candidates = []
        for _ in range(n_candidates):
            action = generate_causal(self.policy_model, self.policy_tok, base_prompt, max_new_tokens=40)
            action = action.split("\n")[0]
            candidates.append(action.strip())
        # Deduplicate while preserving order.
        return list(dict.fromkeys(candidates))

    def judge_action(self, action, org_values):
        # Ask the judge model to assess the action against the organizational values.
        judge_prompt = (
            "You are the Ethics & Compliance Reviewer.\n"
            "Evaluate the proposed agent action.\n"
            "Return fields:\n"
            "RiskLevel (LOW/MED/HIGH),\n"
            "Issues (short bullet-style text),\n"
            "Recommendation (approve / modify / reject).\n\n"
            f"ORG_VALUES:\n{org_values}\n\n"
            f"ACTION:\n{action}\n\n"
            "Answer in this format:\n"
            "RiskLevel: ...\nIssues: ...\nRecommendation: ..."
        )
        verdict = generate_seq2seq(self.judge_model, self.judge_tok, judge_prompt, max_new_tokens=128)
        return verdict.strip()

    def align_action(self, action, verdict, org_values):
        # Ask the judge model to rewrite the action so it complies with the values.
        align_prompt = (
            "You are an Ethics Alignment Assistant.\n"
            "Your job is to FIX the proposed action so it follows ORG_VALUES.\n"
            "Keep it effective but safe, legal, and respectful.\n\n"
            f"ORG_VALUES:\n{org_values}\n\n"
            f"ORIGINAL_ACTION:\n{action}\n\n"
            f"VERDICT_FROM_REVIEWER:\n{verdict}\n\n"
            "Rewrite ONLY IF NEEDED. If the original is fine, return it unchanged. "
            "Return just the final aligned action:"
        )
        aligned = generate_seq2seq(self.judge_model, self.judge_tok, align_prompt, max_new_tokens=128)
        return aligned.strip()
We define the core agent class that generates, evaluates, and refines actions. Here, we design methods for proposing candidate actions, evaluating their ethical compliance, and rewriting them to align with our values. This structure modularizes reasoning, judgment, and correction into clear functional steps.
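As an optional aside (not part of the original flow), we can instantiate the class as defined so far and inspect the intermediate steps on their own; the goal, context, and value strings here are hypothetical placeholders:

# Optional: inspect the intermediate steps before wiring up the full pipeline.
inspect_agent = EthicalAgent(policy_model, policy_tokenizer, judge_model, judge_tokenizer)
sample_values = "- Be honest with customers.\n- Follow the law."
drafts = inspect_agent.propose_actions("Grow newsletter signups", "Small retail bank", n_candidates=2)
for draft in drafts:
    print("Draft action:", draft)
    print("Review:", inspect_agent.judge_action(draft, sample_values))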
    # This method belongs inside the EthicalAgent class defined above.
    def decide(self, user_goal, context, org_values, n_candidates=3):
        proposals = self.propose_actions(user_goal, context, n_candidates=n_candidates)
        scored = []
        for act in proposals:
            verdict = self.judge_action(act, org_values)
            aligned_act = self.align_action(act, verdict, org_values)
            scored.append({"original_action": act, "review": verdict, "aligned_action": aligned_act})

        def extract_risk(vtext):
            # Map the reviewer's RiskLevel field to a sortable score (unknown = worst).
            for line in vtext.splitlines():
                if "RiskLevel" in line:
                    lvl = line.split(":", 1)[-1].strip().upper()
                    if "LOW" in lvl:
                        return 0
                    if "MED" in lvl:
                        return 1
                    if "HIGH" in lvl:
                        return 2
            return 3

        # Pick the candidate with the lowest risk score as the final plan.
        scored_sorted = sorted(scored, key=lambda x: extract_risk(x["review"]))
        final_choice = scored_sorted[0]
        report = {
            "goal": user_goal,
            "context": context,
            "org_values": org_values,
            "candidates_evaluated": scored,
            "final_plan": final_choice["aligned_action"],
            "final_plan_rationale": final_choice["review"],
        }
        return report
We implement the complete decision-making pipeline that links generation, judgment, and alignment. We assign risk scores to each candidate action and automatically choose the most ethically aligned one. This part captures how the agent can self-assess and improve its decisions before finalizing an action.
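To make the ranking logic concrete, here is a small standalone sketch of the same risk parser applied to a made-up reviewer verdict (the verdict text is invented purely for illustration):

# Standalone copy of the risk parser, applied to a fabricated verdict string.
def extract_risk_demo(vtext):
    for line in vtext.splitlines():
        if "RiskLevel" in line:
            lvl = line.split(":", 1)[-1].strip().upper()
            if "LOW" in lvl:
                return 0
            if "MED" in lvl:
                return 1
            if "HIGH" in lvl:
                return 2
    return 3

fake_verdict = "RiskLevel: MED\nIssues: pressure tactics\nRecommendation: modify"
print(extract_risk_demo(fake_verdict))  # -> 1, so this candidate ranks behind any LOW-risk one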
org_values_text = (
    "- Respect privacy; do not access personal data without consent.\n"
    "- Follow all laws and safety policies.\n"
    "- Avoid discrimination, harassment, or harmful manipulation.\n"
    "- Be transparent and truthful with stakeholders.\n"
    "- Prioritize customer well-being and long-term trust over short-term gain."
)

demo_goal = "Increase customer adoption of the new financial product."
demo_context = (
    "The agent works for a bank outreach team. The target customers are small family businesses. "
    "Regulations require honest disclosure of risks and fees. Cold-calling minors or lying about terms is illegal."
)

agent = EthicalAgent(policy_model, policy_tokenizer, judge_model, judge_tokenizer)
report = agent.decide(demo_goal, demo_context, org_values_text, n_candidates=4)
def pretty_report(r):
    print("=== ETHICAL DECISION REPORT ===")
    print(f"Goal: {r['goal']}\n")
    print(f"Context: {r['context']}\n")
    print("Org Values:")
    print(r["org_values"])
    print("\n--- Candidate Evaluations ---")
    for i, cand in enumerate(r["candidates_evaluated"], 1):
        print(f"\nCandidate {i}:")
        print("Original Action:")
        print("  ", cand["original_action"])
        print("Ethics Review:")
        print(cand["review"])
        print("Aligned Action:")
        print("  ", cand["aligned_action"])
    print("\n--- Final Plan Chosen ---")
    print(r["final_plan"])
    print("\nWhy this plan is acceptable (review snippet):")
    print(r["final_plan_rationale"])

pretty_report(report)
We define organizational values, create a realistic scenario, and run the ethical agent to generate its final plan. Finally, we print a detailed report showing the candidate actions, their evaluations, and the chosen ethical decision. Through this, we observe how the agent integrates ethics directly into its reasoning process.
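If we want to keep an audit trail, one simple optional extension (not part of the original walkthrough) is to persist the full report to disk; the filename is arbitrary:

import json

# Optional: save the decision report, including all candidates and reviews, for later auditing.
with open("ethical_decision_report.json", "w") as f:
    json.dump(report, f, indent=2)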
In conclusion, we see clearly how an agent can reason not only about what to do but also about whether to do it. We watch the system identify risks, correct itself, and align its actions with human and organizational principles. This exercise shows that value alignment and ethics are not abstract ideas but practical mechanisms we can embed in agentic systems to make them safer, fairer, and more trustworthy.