AI hallucinations are becoming increasingly dangerous as models are trusted to surface information and make important decisions.
We all have a knowledgeable friend who can never admit when they don't know something, and instead offers confident advice based on whatever they read online. AI hallucinations are like that friend, except that friend could be drafting a cancer treatment plan.
Enter Themis AI. What this MIT spinout does sounds simple in theory but is very hard in practice: it teaches AI systems to say "I don't know."
AI systems tend toward overconfidence. Themis' Capsa platform acts as a reality check, helping models flag speculation as speculation rather than present it as certainty.
Founded in 2021 by MIT professor Daniela Rus together with former research colleagues Alexander Amini and Elaheh Ahmadi, Themis AI has developed a platform that lets virtually any AI system flag its moments of uncertainty before they lead to mistakes.
Capsa works by training AI systems to detect patterns in how they process information, patterns that signal when a model is confused, biased, or working with incomplete data, all of which can lead to hallucinations.
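To make the idea of "detecting a model's own uncertainty" concrete, here is a minimal sketch of one standard technique, Monte Carlo dropout: run several stochastic forward passes and treat the spread of the outputs as an uncertainty signal. This is an illustrative example of uncertainty quantification in general, not Capsa's actual API or algorithm; the toy model and all names here are invented for the sketch.

```python
# Illustrative sketch (not Capsa's actual implementation): estimating a
# model's uncertainty via Monte Carlo dropout. Repeated stochastic
# forward passes disagree more when the model is unsure.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear model: 4 input features -> 1 output.
W = rng.normal(size=4)

def forward(x, dropout_rate=0.5):
    """One stochastic forward pass: randomly drop input features."""
    mask = rng.random(x.shape) > dropout_rate
    return float((x * mask) @ W / (1.0 - dropout_rate))

def predict_with_uncertainty(x, n_samples=100):
    """Mean prediction plus the spread (std) across stochastic passes."""
    samples = [forward(x) for _ in range(n_samples)]
    return float(np.mean(samples)), float(np.std(samples))

x = np.array([0.2, -1.0, 0.5, 0.8])
mean, std = predict_with_uncertainty(x)
print(f"prediction={mean:.3f}, uncertainty={std:.3f}")
```

A system wrapped this way can refuse to answer, or escalate to a human, whenever the reported spread crosses a threshold, which is the behavioral change the article describes.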
Since its launch, Themis says it has helped telecom companies avoid expensive network-planning errors, helped oil and gas companies interpret complex seismic data, and published research on building chatbots that don't make things up.
Most people don't realize how often AI systems are simply making their best guess. As these systems take on increasingly important tasks, those guesses can have serious consequences. Themis AI's software adds the layer of self-awareness that has been missing.
Themis’ journey to tackle AI hallucinations
Themis AI's journey began years ago in Professor Rus's MIT lab, where the team was investigating a fundamental question: how do you make a machine recognize the limits of its own knowledge?
In 2018, Toyota funded their research into trustworthy AI for self-driving cars, a field where mistakes can be fatal. Self-driving cars must accurately identify pedestrians and other road hazards, so the stakes are extremely high.
Their breakthrough came when they developed an algorithm that could find racial and gender bias in facial recognition systems. Rather than simply identifying the problem, the system fixed it by recalibrating the training data, essentially teaching the AI to correct its own bias.
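The simplest form of "recalibrating the training data" is resampling it so that under-represented groups appear as often as over-represented ones. The sketch below shows that idea on a toy dataset; it is a generic rebalancing illustration, not the team's actual debiasing algorithm, and the group labels are invented.

```python
# Illustrative sketch (not the team's actual algorithm): rebalance a
# biased training set by sampling each example with weight inversely
# proportional to its group's frequency.
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 90 samples of group "A", only 10 of group "B".
groups = np.array(["A"] * 90 + ["B"] * 10)

# Weight each sample inversely to its group's frequency, then normalize.
labels, counts = np.unique(groups, return_counts=True)
freq = dict(zip(labels, counts))
weights = np.array([1.0 / freq[g] for g in groups])
weights /= weights.sum()

# Resampling with these weights yields a roughly balanced training set.
resampled = rng.choice(groups, size=1000, p=weights)
print({g: int((resampled == g).sum()) for g in labels})
# roughly {'A': 500, 'B': 500}
```

In a real pipeline the same weights would feed a weighted loss or a weighted data sampler rather than a one-off resample.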
By 2021, they had demonstrated how this approach could revolutionize drug discovery. AI systems can assess potential drug candidates and, crucially, flag whether a prediction rests on robust evidence, on educated speculation, or on outright hallucination. The pharmaceutical industry has recognized the potential to save money and time by focusing only on the drug candidates the AI is confident about.
The technology also benefits devices with limited computing power. Edge devices run small models that can't match the accuracy of the huge models running on servers, but with Themis' technology these devices can handle most tasks locally, asking the big server for help only when they run into difficulty.
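The edge-to-server fallback described above can be sketched in a few lines: a small local model answers when it is confident and defers to a larger remote model when its uncertainty crosses a threshold. Everything here (`small_model`, `big_server_model`, the threshold value) is a hypothetical stand-in, not Themis' implementation.

```python
# Hypothetical sketch of uncertainty-based routing between a small
# on-device model and a large server model. All names and the threshold
# are illustrative assumptions.

def small_model(x):
    """Tiny on-device model: returns (prediction, uncertainty)."""
    pred = 1.0 if x > 0 else -1.0
    uncertainty = 1.0 / (1.0 + abs(x))  # less confident near the boundary
    return pred, uncertainty

def big_server_model(x):
    """Stand-in for a large remote model (a network call in practice)."""
    return 1.0 if x >= 0 else -1.0

THRESHOLD = 0.4

def predict(x):
    pred, unc = small_model(x)
    if unc > THRESHOLD:  # not confident enough: escalate to the server
        return big_server_model(x), "server"
    return pred, "edge"

print(predict(5.0))  # -> (1.0, 'edge'): confident, handled locally
print(predict(0.1))  # -> (1.0, 'server'): uncertain, deferred
```

The design win is that the expensive server call happens only for the hard cases, so most traffic stays on-device.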
AI has great potential to improve our lives, but that potential comes with real risks. As AI systems become deeply integrated into critical infrastructure and decision-making, their ability to acknowledge uncertainty rather than hallucinate may prove to be their most human, and most valuable, quality. Themis AI is teaching them that vital skill.

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Check out other upcoming Enterprise Technology events and webinars with TechForge here.