UPDATED 21:10 EST / NOVEMBER 21 2019


Google’s Explainable AI service sheds light on how machine learning models make decisions

Google LLC has introduced a new “Explainable AI” service on its cloud platform that aims to make the process by which machine learning models arrive at their decisions more transparent.

The idea is that this will help build greater trust in those models, Google said. That’s important because most existing models tend to be opaque: it’s simply not clear how they reach their decisions.

Tracy Frey, director of strategy for Google Cloud AI, explained in a blog post today that Explainable AI is intended to improve the interpretability of machine learning models. She said the new service works by quantifying each data factor’s contribution to the outcome a model comes up with, helping users understand why it makes the decisions it does.
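Attribution methods of this kind typically work by comparing a model’s prediction against one made from a baseline input and crediting each feature with a share of the difference. The snippet below is a minimal, illustrative sketch of that idea, a sampled Shapley-value approximation built with scikit-learn and NumPy; the model, dataset and baseline are assumptions chosen for demonstration, not Google’s implementation.

```python
# Illustrative sketch only: approximate per-prediction feature attributions
# by averaging each feature's marginal contribution over random orderings,
# relative to a baseline input. Model, dataset and baseline are assumptions.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor().fit(X, y)

def sampled_shapley(predict, instance, baseline, n_samples=100, seed=0):
    """Estimate Shapley-style attributions for a single prediction."""
    rng = np.random.default_rng(seed)
    n_features = instance.shape[0]
    attributions = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)
        current = baseline.copy()
        prev_pred = predict(current[None, :])[0]
        for j in order:
            current[j] = instance[j]                  # switch feature j from baseline to actual value
            new_pred = predict(current[None, :])[0]
            attributions[j] += new_pred - prev_pred   # marginal contribution of feature j
            prev_pred = new_pred
    return attributions / n_samples

baseline = X.mean(axis=0)    # "uninformative" reference input
instance = X[0]
attrib = sampled_shapley(model.predict, instance, baseline)

# Attributions sum (approximately) to the prediction minus the baseline prediction.
print(dict(zip(load_diabetes().feature_names, attrib.round(3))))
```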

In other words, it won’t be explaining things in layman’s terms, but the analysis should still be useful for data scientists and developers who build the machine learning models in the first place.

Explainable AI has further limitations, as any interpretations it comes up with will depend on the nature of the machine learning model and the data used to train it.

“Any explanation method has limitations,” she wrote. “For one, AI Explanations reflect the patterns the model found in the data, but they don’t reveal any fundamental relationships in your data sample, population, or application. We’re striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about its limitations.”

Nonetheless, Explainable AI could prove important, because accurate explanations of why a machine learning model reaches a particular conclusion are useful to the senior executives within an organization who are ultimately responsible for those decisions. That’s especially true in highly regulated industries where confidence is absolutely critical. For many organizations in that position, Google said, AI without any kind of interpretability is currently out of bounds.

In related news, Google also released what it calls “model cards,” which serve as documentation for the Face Detection and Object Detection features of its Cloud Vision application programming interface.

The model cards detail the performance characteristics of those pre-trained machine learning models, and provide practical information about their performance and limitations. Google said the intention is to help developers make more informed decisions about which models to use and how to deploy them responsibly.
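In spirit, a model card is simply structured documentation for a trained model. The sketch below is a purely hypothetical illustration of the kind of fields such a card might capture for the Face Detection model; the field names and values are assumptions for illustration, not Google’s actual schema or published figures.

```python
# Hypothetical sketch of the information a model card records.
# Field names and values are illustrative assumptions, not Google's schema.
face_detection_card = {
    "model": "Cloud Vision API - Face Detection",
    "intended_use": "Detect the presence and position of human faces in images",
    "performance": {
        "metric": "average precision",
        "note": "Varies with image conditions such as lighting, pose and occlusion",
    },
    "limitations": [
        "Accuracy can degrade on low-resolution or partially occluded faces",
        "Performance may differ across demographic groups",
    ],
}
```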

