How To Avoid Another AI Winter

Forbes Technology Council
POST WRITTEN BY
Jans Aasman


Although there has been great progress in artificial intelligence (AI) over the past few years, many of us remember the AI winter in the 1990s, which resulted from overinflated promises by developers and unnaturally high expectations from end users. Now, industry insiders, such as Facebook head of AI Jerome Pesenti, are predicting that AI will soon hit another wall—this time due to the lack of semantic understanding.

“Deep learning and current AI, if you are really honest, has a lot of limitations,” said Pesenti. “We are very, very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn't have common sense, it’s more on the level of pattern matching than robust semantic understanding.”

Other computer scientists believe that AI is currently facing a "reproducibility crisis" because many complex machine-learning algorithms are a "black box" and cannot be easily reproduced. Joelle Pineau, a computer science professor at McGill, points out that the ability to replicate and explain how AI models work provides transparency that aids future research and innovation. It also becomes critical when algorithms replace human decision-making for things like deciding who stays in jail and who is approved for a mortgage.

Let’s take a look at what can be done to avoid another AI winter.

Start With Symbolic AI

The inability to explain and reproduce AI models is a hurdle we must clear before AI can be both trusted and practical. We can do so by taking a step back in time and looking at symbolic AI again, then taking two steps forward by combining symbolic AI (classic knowledge representations, rule-based systems, reasoning, graph search) with machine learning techniques.

Symbolic AI adds meaning, or semantics, to data through the use of ontologies and taxonomies. Rule-based systems, a major technology in symbolic AI, rely heavily on those ontologies and taxonomies to formulate correct and meaningful if/then rules. The advantage of rules and rule-based systems is that they deliver consistent, repeatable results and also make those results far easier to explain.
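To make the if/then idea concrete, here is a minimal sketch of a forward-chaining rule engine. The facts and rules (a small taxonomy about a dog named "Rex") are invented for illustration; real rule-based systems are far richer, but the core loop is the same.

```python
def forward_chain(facts, rules):
    """Repeatedly fire if/then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all conditions hold and the conclusion is new.
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical taxonomy rules: class membership propagates up the hierarchy.
rules = [
    ((("Rex", "is_a", "dog"),), ("Rex", "is_a", "mammal")),
    ((("Rex", "is_a", "mammal"),), ("Rex", "is_a", "animal")),
]
facts = forward_chain({("Rex", "is_a", "dog")}, rules)
```

The same inputs always yield the same derived facts, and every conclusion can be traced back to the rule that produced it, which is exactly the consistency and explainability the article describes.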

Eliminate Data Silos

For AI to deliver on current expectations, it also requires eliminating silos so organizations can query across IT systems, issue sophisticated aggregate queries and automate schema and data validation for accurate analytics results.

The rigor of assembling diverse, annotated training datasets for machine learning models demands the ability to query across databases or swiftly integrate disparate sources. Semantic graph databases support this prerequisite for statistical AI with a standards-based approach in which each node and edge of the graph has a unique, machine-readable global identifier.

Thus, organizations can link together different databases to query across them while incorporating a range of sources for common use cases, such as predicting an individual’s next health issue or just-in-time supply chain management.
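A toy sketch of this idea, with made-up data: two separate "databases" hold facts about the same patient, identified by a globally unique IRI. Because the identifier is shared, a single lookup can span both stores without any schema translation.

```python
# Hypothetical silos keyed by globally unique identifiers (IRIs).
EHR_DB = {  # electronic health records silo
    "http://example.org/patient/42": {"name": "Ana", "diagnosis": "diabetes"},
}
PHARMACY_DB = {  # pharmacy silo
    "http://example.org/patient/42": {"medication": "metformin"},
}

def federated_lookup(iri, *stores):
    """Merge every store's facts about one globally identified entity."""
    merged = {}
    for store in stores:
        merged.update(store.get(iri, {}))
    return merged

record = federated_lookup("http://example.org/patient/42", EHR_DB, PHARMACY_DB)
```

In a real semantic graph database, this merging happens at query time over RDF triples rather than Python dictionaries, but the principle is the same: shared global identifiers are what make silos linkable.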

These federated queries not only make silo culture obsolete, but also ensure that data always remain relevant and future-proof against any upcoming technologies. In an age in which AI and analytics have become increasingly necessary for real-time action, organizations simply won’t have time to rebuild the schema and nomenclature between silo databases.

Auto-Validate Data

The notion of schema is intrinsically related to data validation, which is essential for trusting the results of analytics queries. Semantic knowledge graphs standardize schema with naturally evolving data models, self-describing schema and schema-on-demand options such as JSON, JSON-LD and SHACL.

Frameworks like SHACL are pivotal for validating data, and they do so automatically by ensuring that data conform to a uniform set of shapes. The reality is that without an explicit schema, it's almost impossible to check this basic facet of data quality without writing procedural code, which can be extremely time-consuming depending on the scale of the undertaking.

The automation of the AI age is well upon us; manually writing scripts for data validation measures that are as automatable and repeatable as cognitive computing itself is simply a waste of time.

Standardize Vocabularies To Aggregate Queries

The sophistication of the queries necessary for operational AI is extremely demanding; however, the standardization of the vocabularies and taxonomies in knowledge graphs supports sequential query aggregates that are difficult to duplicate in other settings. Once organizations standardize the words—and their meaning—for different concepts, they can not only create various taxonomies but also link them for queries.

In healthcare, for example, it's then possible to query for all patients with a certain disease, a specific treatment for it, a transplant scheduled within the next three months and a particular medication prescribed for it. Such sophisticated, temporal queries are made possible in part by the ability to query different taxonomies and aggregate several queries into a single one for specific results.
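The healthcare query above can be sketched as follows, with invented patient records whose fields use standardized vocabulary terms. Because the terms for disease, treatment and medication come from shared taxonomies, several conditions can be combined into one aggregate query.

```python
# Made-up records using standardized vocabulary terms.
PATIENTS = [
    {"id": "p1", "disease": "type2_diabetes", "treatment": "kidney_transplant",
     "transplant_in_days": 60, "medication": "insulin"},
    {"id": "p2", "disease": "type2_diabetes", "treatment": "dialysis",
     "transplant_in_days": None, "medication": "metformin"},
]

def aggregate_query(patients, disease, treatment, within_days, medication):
    """Combine several taxonomy-backed conditions into a single query."""
    return [
        p["id"] for p in patients
        if p["disease"] == disease
        and p["treatment"] == treatment
        and p["transplant_in_days"] is not None
        and p["transplant_in_days"] <= within_days
        and p["medication"] == medication
    ]

result = aggregate_query(PATIENTS, "type2_diabetes", "kidney_transplant",
                         90, "insulin")
```

In practice this would be a single SPARQL query over a knowledge graph rather than a list comprehension, but the payoff is the same: standardized terms let one query span what would otherwise be several incompatible lookups.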

In the near future, there will be few use cases in which the capability to query across databases, automatically validate data and aggregate queries won't be needed. AI will become essential to the enterprise, and those adopting a semantic knowledge graph approach will be certain to stay out of the cold.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?