

Here's Why Google Is Open-Sourcing Some Of Its Most Important Technology


We think of art as the most human of endeavors, but in recent years we’ve learned that machines can understand creativity too. There are algorithms that evaluate songs and movies for record companies and movie studios. One music professor even created a program that wrote compositions which drew critical acclaim.

Paradoxically, developing algorithms that can create artistic works pushes the bounds of human capability. Unlike machines that, say, dig holes or build cars, algorithms that produce creative work need to understand things that even humans find difficult to articulate. That’s the idea behind Google’s Magenta project, which is developing machine learning tools for art and music.

Magenta is built on top of TensorFlow, the library of machine learning tools that the company recently released as open source, allowing anyone who wants it to download the source code. To get a sense of why Google would open up its most advanced technology, which is at the heart of its most important products, I asked company executives about it.

What Is TensorFlow?

To understand what TensorFlow is, think about the plumber who comes to your house with his toolbox. Some of the tools are fairly simple, like his wrenches and caulking gun. Others, however, are highly specialized, designed to allow him to fix specific types of pipes and fixtures in highly confined spaces.

TensorFlow works in a similar way. For instance, some of its tools can recognize voices and images, while others can parse text to work out the relationships between words in a sentence and derive meaning. By combining these tools, TensorFlow allows developers to build highly intelligent products.
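To make that concrete, here is a minimal sketch, using TensorFlow's Python API, of how those building blocks snap together: a handful of stock layers stacked into a small image classifier. The layer sizes, input dimensions and ten output categories are illustrative assumptions, not anything taken from Google's own products.

import tensorflow as tf

# Stack a few of TensorFlow's building blocks into a small image classifier.
# All sizes here are illustrative; a real product would tune them carefully.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. ten kinds of object
])

# Wire the pieces together: an optimizer, a loss and a metric to track.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Each line picks one tool out of the toolbox; the value comes from composing them, much as the plumbing analogy suggests.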

To see how TensorFlow can be used, let’s return to our plumbing analogy. Most jobs plumbers do are fairly simple, like clearing a clogged drain or a backed-up toilet. Every once in a while, though, something more obscure comes up. When that happens, it can take an entire day or more to diagnose the problem or find the right part to fix it.

So if someone wanted to develop an automated plumbing assistant, they could use the tools in TensorFlow to analyze a database of millions of parts and technical manuals. Still, like any plumber's assistant, Google’s algorithms won’t immediately know what they’re looking at. They have to be trained.

Sending Algorithms To School

The field of artificial intelligence is nothing new; it dates back to 1956, when a summer workshop at Dartmouth brought together such luminaries as Marvin Minsky and Claude Shannon, the father of information theory. There was great enthusiasm about the project, and the participants figured they would have the problem licked within 20 years.

Clearly, that didn’t happen. As it turned out, human knowledge is far too complex to be encoded in a set of rules connected by logical statements. However much computer scientists tried to load up their machines with expert knowledge, there would always be huge gaps in understanding. Put simply, rule-based systems don’t scale to the level of human intelligence.

What’s different about machine learning systems like those built with TensorFlow is that they learn. So Google is also sharing how it trains its algorithms, allowing developers with even limited expertise in artificial intelligence to create useful applications.

Let’s return to our plumbing example once again. After being trained on a few thousand images, with input from human experts, the system would begin to recognize important attributes of plumbing supplies. It wouldn’t be perfect, but it would help humans narrow down the options and save a lot of time. A feedback mechanism could be built into the app so that it could continue to learn from its mistakes.
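As a rough illustration of that training-and-feedback loop, the sketch below trains a deliberately small classifier on stand-in data, then continues training on a batch of user-corrected examples. The random arrays are hypothetical placeholders for expert-labeled photos of plumbing parts; nothing here reflects Google's actual pipeline.

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for a few thousand expert-labeled photos of parts.
train_images = np.random.rand(2000, 64, 64, 3).astype("float32")
train_labels = np.random.randint(0, 10, size=(2000,))

# A deliberately small model; the point is the workflow, not the architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Initial training pass on the expert-labeled examples.
model.fit(train_images, train_labels, epochs=5, validation_split=0.1)

# The feedback mechanism: examples the app got wrong and a human corrected
# are folded back in, and training simply continues from where it left off.
feedback_images = np.random.rand(200, 64, 64, 3).astype("float32")
feedback_labels = np.random.randint(0, 10, size=(200,))
model.fit(feedback_images, feedback_labels, epochs=2)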

Keep in mind that none of this would be quick or easy, but still, the potential of Google’s technology is clear. It can’t produce an all-knowing, all-seeing master race of robots, but it does open up vast possibilities for humans to collaborate more effectively with machines.

Opening Up TensorFlow

TensorFlow is, of course, an extremely valuable technology. Machine learning sits at the outer bounds of computer science, and Google is one of the few companies with an advanced capability in the field. So why would it release the code to the public, where anybody, even competitors, can see it?

The decision to open-source was the brainchild of Jeff Dean, who felt that the company’s innovation efforts were being hampered by the slow pace of normal science. Google researchers would write a paper, which would then be discussed at a conference some months later. Months after that somebody else would write another paper building on their work.

Dean saw that open-sourcing TensorFlow could significantly accelerate the process. Rather than having to wait for the next paper or conference, Google's researchers could collaborate with the scientific community in real time. Smart people outside of Google could also improve the source code and, by sharing machine learning techniques more broadly, help populate the field with more technical talent.

“Having this system open-sourced we’re able to collaborate with many other researchers at universities and startups, which gives us new ideas about how we can advance our technology. Since we made the decision to open-source, the code runs faster, it can do more things and it’s more flexible and convenient,” says Rajat Monga, who leads the TensorFlow team.

Building Ecosystems Of Value

From a traditional point of view, Google’s decision to open-source its machine learning tools may seem strange. We couldn’t imagine Coca-Cola releasing its famous formula to the public and even many tech companies, such as Apple, are famously secretive about their products and processes. Even Google keeps many things, like its search algorithms, close to the vest.

Yet the world is changing. It used to be that the surest path to success was to optimize the value chain. By honing internal processes and building scale, you could improve your negotiating position with both customers and suppliers, creating even more efficiency still. That’s how you built a sustainable competitive advantage.

Now, however, the most successful products arise out of ecosystems of value. Sure, Google employs some very smart people, but they can only advance technology if they are firmly embedded in the greater scientific community. Google also needs legions of others to expand the reach of its work by embedding its technology in their own products.

That’s why, like Google, many other cutting-edge companies are sharing key technologies openly. Soon after Google released TensorFlow, Facebook announced it would open source its own library of AI tools. Tesla open-sourced its electric car patents. Even more recently, IBM shared its quantum computing platform on the cloud.

Today we live in a semantic economy where everything connects, and competitive advantage goes not to those who best reduce costs and leverage assets, but to those who create new informational value for the entire ecosystem. Power no longer resides at the top of the heap, but at the center of networks.
