Truera applies AI explainability research to grow its model evaluation platform

Aug 9, 2020 | News

As AI impacts more industries and areas of society, startups are building testing tools to help companies and governments assess the state of their models and identify potential compliance issues. Google Cloud chief AI scientist Andrew Moore recently predicted that the AI development pipeline will someday center on testing and validation. But Truera, previously known as AILens, says it is building a different kind of platform, based on the work of co-founders Anupam Datta and Shayak Sen, whose AI explainability research draws on causality and cooperative game theory, as well as fairness and bias.

Datta, who is on leave from his job as a professor at Carnegie Mellon University, told VentureBeat that the research behind Truera began in 2014, when CMU researchers uncovered forms of gender bias in online advertising but lacked the tools to understand or measure that bias. The Truera platform can detect and help mitigate problems that occur in AI models, such as bias, shifts in data distribution that can affect predictions, and instability over time. Datta said customers are using these monitoring features more as a result of anomalies brought on by COVID-19.
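As a rough illustration of the kind of data-distribution monitoring described above, the sketch below compares a feature's live values against a reference (training-time) sample with a two-sample Kolmogorov-Smirnov test. This is only a minimal, hypothetical example; the function name `detect_feature_drift` and the income data are invented for illustration and do not reflect Truera's actual platform or API.

```python
# Minimal sketch (not Truera's implementation): flag a shift in a model input
# feature by comparing its live distribution against a reference sample.
import numpy as np
from scipy.stats import ks_2samp


def detect_feature_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live values appear to come from a different
    distribution than the reference values at significance level alpha."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha


# Hypothetical example: a COVID-style shock shifts an income feature downward.
rng = np.random.default_rng(0)
train_income = rng.normal(loc=50_000, scale=10_000, size=5_000)  # reference data
live_income = rng.normal(loc=42_000, scale=15_000, size=5_000)   # shifted live data
print(detect_feature_drift(train_income, live_income))  # True -> distribution shift flagged
```

In practice, a monitoring system would run a check like this per feature and over rolling time windows, which is one way the anomalies mentioned above could surface.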


In the field of AI and causality, Professor Judea Pearl is a pioneer who developed a theory of causal and counterfactual inference based on structural models. In 2011, Professor Pearl won the Turing Award, computer science’s highest honor, for “fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.” In 2020, Professor Pearl was also honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF).