Hirundo, the first startup dedicated to machine unlearning, has raised $8 million in seed funding to tackle some of the most pressing challenges in artificial intelligence: hallucinations, bias, and embedded data vulnerabilities. The round was led by Maverick Ventures Israel with participation from SuperSeed, Alpha Intelligence Capital, Tachles VC, AI.FUND, and Plug and Play Tech Center.
Making AI Forget: The Promise of Machine Unlearning
Unlike traditional AI tools that focus on refining or filtering AI outputs, Hirundo's core innovation is machine unlearning: a technique that allows AI models to "forget" specific data or behaviors after they have already been trained. This approach enables enterprises to surgically remove hallucinations, biases, personal or proprietary data, and adversarial vulnerabilities from deployed AI models without retraining them from scratch. Retraining a large-scale model can take weeks and cost millions of dollars; Hirundo offers a far more efficient alternative.
Hirundo likens the process to AI neurosurgery: the company pinpoints exactly where in a model's parameters the undesired outputs originate and removes them precisely, all while preserving performance. This lets organizations remediate models already in production and deploy AI with far greater confidence.
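Hirundo has not published the technical details of its method, but a common baseline from the machine-unlearning literature gives a feel for the idea: apply gradient ascent on the examples to be forgotten while anchoring the model to retained data so overall performance is preserved. The sketch below is purely illustrative, not Hirundo's method; the model, optimizer, and data batches are assumed to be supplied by the caller.

```python
# Minimal sketch of a common machine-unlearning baseline (not Hirundo's
# method): gradient ascent on a "forget set", anchored by ordinary
# gradient descent on a "retain set" to limit collateral damage.
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, forget_batch, retain_batch, alpha=0.5):
    """One combined update: push the model away from the forget data,
    pull it toward the retain data."""
    fx, fy = forget_batch   # inputs/labels the model should forget
    rx, ry = retain_batch   # inputs/labels the model should keep
    # Negating the loss turns descent into ascent on the forget examples.
    forget_loss = -F.cross_entropy(model(fx), fy)
    # Standard loss on retained data preserves overall performance.
    retain_loss = F.cross_entropy(model(rx), ry)
    loss = alpha * forget_loss + (1 - alpha) * retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```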
Why AI Hallucinations Are So Dangerous
AI hallucinations refer to a model's tendency to generate false or misleading information that sounds plausible or even factual. They are especially problematic in enterprise settings, where decisions based on incorrect information can lead to legal exposure, operational errors, and reputational damage. Studies have found that 58 to 82% of "facts" generated by AI for legal queries contained some form of hallucination.
Despite efforts to minimize hallucinations with guardrails or fine-tuning, these methods often mask problems rather than eliminate them. Guardrails act like filters, and fine-tuning typically fails to remove the root cause, especially when the hallucination is baked deep into the model's learned weights. Hirundo goes further by actually removing the behavior or knowledge from the model itself.
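The distinction is easy to illustrate: a guardrail is a post-hoc filter around the model, so the weights that produced the problem are untouched. In the hypothetical sketch below (the generate callable and blocklist are invented for illustration), the bad output is hidden, not removed.

```python
# Illustrative only: a guardrail filters output after generation, while
# the model's weights, and whatever it "knows", stay unchanged.
BLOCKLIST = {"fabricated citation", "leaked credential"}  # hypothetical terms

def guarded_generate(generate, prompt: str) -> str:
    """Wrap a text-generation callable with a simple output filter."""
    text = generate(prompt)
    # Masking the output does not remove the behavior from the model.
    if any(term in text.lower() for term in BLOCKLIST):
        return "[response withheld by guardrail]"
    return text
```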
A Scalable Platform for Any AI Stack
Hirundo's platform is built for flexibility and enterprise-grade deployment. It integrates with both generative and non-generative systems across a wide range of data types: natural language, vision, radar, LiDAR, tabular, speech, and time series. The platform automatically detects mislabeled items, outliers, and ambiguities in training data, then lets users debug specific faulty outputs and trace them back to the problematic training data or learned behaviors, which can be unlearned on the spot.
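Hirundo has not documented how its detection works, but a standard intuition from data debugging gives a rough picture: rank training items by the model's own loss, since persistently high-loss examples are often mislabeled or ambiguous. The function below is a hypothetical sketch along those lines, not Hirundo's API.

```python
# Hypothetical sketch (not Hirundo's API): rank training items by model
# loss; the highest-loss examples are candidates for mislabels or outliers.
import torch
import torch.nn.functional as F

@torch.no_grad()
def flag_suspect_examples(model, dataset, top_k=100):
    """Return indices of the top_k highest-loss examples in `dataset`,
    where each item is an (input_tensor, int_label) pair."""
    model.eval()
    scored = []
    for i, (x, y) in enumerate(dataset):
        logits = model(x.unsqueeze(0))            # add a batch dimension
        loss = F.cross_entropy(logits, torch.tensor([y])).item()
        scored.append((loss, i))
    scored.sort(reverse=True)                     # highest loss first
    return [i for _, i in scored[:top_k]]
```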
All of this happens without changing existing workflows. Hirundo's SOC-2 certified system can run as SaaS, in a private cloud (VPC), or fully air-gapped on premises, making it suitable for sensitive environments such as finance, healthcare, and defense.
Demonstrated Impact Across Models
The company has already demonstrated strong improvements across popular large language models (LLMs). In tests on Llama and DeepSeek, Hirundo achieved a 55% reduction in hallucinations, a 70% decrease in bias, and an 85% reduction in successful prompt injection attacks. These results were verified using independent benchmarks such as HaluEval, PurpleLlama, and Bias Benchmark Q&A.
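For context on reading such figures: the reductions are relative, so a 55% reduction means the measured hallucination rate after unlearning is 45% of the baseline rate. The numbers below are invented for illustration, not Hirundo's measurements.

```python
# Invented numbers, for illustration only: how a relative reduction such
# as "55% fewer hallucinations" is computed from two benchmark runs.
def relative_reduction(rate_before: float, rate_after: float) -> float:
    return (rate_before - rate_after) / rate_before * 100

# e.g. a hallucination rate falling from 30% to 13.5% of answers:
print(f"{relative_reduction(0.30, 0.135):.0f}% reduction")  # -> 55% reduction
```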
While its current solutions work well with open-source models like Llama, Mistral, and Gemma, Hirundo is actively expanding support to gated models like ChatGPT and Claude, making its technology applicable across the full spectrum of enterprise LLMs.
Founders with Academic and Industry Depth
Hirundo was founded in 2023 by a trio of experts at the intersection of academia and enterprise AI. CEO Ben Luria is a Rhodes Scholar and former visiting fellow at Oxford who previously founded the fintech startup Worqly and co-founded ScholarsIL, a nonprofit supporting higher education. Michael Leybovich, Hirundo's CTO, is a former graduate researcher at the Technion and an award-winning R&D officer at Ofek324. Prof. Oded Shmueli, the company's Chief Scientist, is a former Dean of Computer Science at the Technion and has held research positions at IBM, HP, AT&T, and others.
Their collective experience spans foundational AI research, real-world deployment, and secure data management, making them well placed to tackle the AI industry's current reliability crisis.
Investor Backing for a Trustworthy AI Future
Investors in this round share Hirundo's vision of trustworthy, enterprise-ready AI. Yaron Carni, founder of Maverick Ventures Israel, noted the urgent need for a platform that can remove hallucinated or biased intelligence before it causes real-world harm. "Without removing hallucinations or biased intelligence from AI, we end up distorting outcomes and encouraging distrust," he said. "Hirundo offers a kind of AI triage, removing untruths or data built on discriminatory sources and completely transforming the possibilities of AI."
SuperSeed's Managing Partner, Mads Jensen, echoed the sentiment: "We invest in exceptional AI companies transforming industry verticals, but this transformation is only as powerful as the models themselves are trustworthy. Hirundo's approach to machine unlearning addresses a critical gap in the AI development lifecycle."
Addressing a Growing Challenge in AI Deployment
As AI systems are integrated into ever more critical infrastructure, concerns about hallucinations, bias, and embedded sensitive data are becoming harder to ignore. These issues pose significant risks in high-stakes environments, from finance to healthcare and defense.
Machine unlearning is emerging as a critical tool in the AI industry's response to growing concerns over model reliability and safety. As hallucinations, embedded bias, and exposure of sensitive data increasingly undermine trust in deployed AI systems, unlearning offers a direct way to mitigate these risks after a model has been trained and is in use.
Rather than relying on retraining or surface-level fixes like filtering, machine unlearning enables targeted removal of problematic behaviors and data from models already in production. The approach is gaining traction among enterprises and government agencies seeking scalable, compliant solutions for high-stakes applications.

