
Google is making it easier than ever to give any app the power of object recognition

The company has open sourced a number of mobile-first machine vision programs

An example of on-device object recognition.

Smartphones have fast become the new frontier of artificial intelligence. Algorithms that used to run in the cloud, beaming results down to our devices via the internet, are now being replaced by software that runs directly on phones and tablets. Facebook is doing it, Apple is doing it, and Google is, perhaps, doing it more than anyone else.

The latest example of mobile AI from the Silicon Valley search giant is the release of MobileNets, a set of machine vision neural networks designed to run directly on mobile devices. The networks come in a variety of sizes to fit all sorts of devices (bigger neural nets for more powerful processors) and can be trained to tackle a range of tasks.

MobileNets can be used to analyze faces, detect common objects, geolocate photos, and perform fine-grained recognition tasks, like identifying different species of dogs. These tools are extremely adaptable and could be put to a number of different uses, from powering augmented reality features to creating apps that help people with disabilities. Google says the performance of each neural net differs from task to task, but overall, its networks either meet or approach recent state-of-the-art standards.

Google’s new MobileNets can be trained to complete a number of different tasks.
Image: Google
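
To make the idea concrete, here is a rough, hypothetical sketch of how a developer might load a pretrained MobileNet at a chosen size using TensorFlow's Keras API. The specific width multiplier (alpha=0.5) and the 224x224 input size are illustrative assumptions, not values prescribed by Google's release.

# A hypothetical sketch of loading a pretrained MobileNet with
# TensorFlow's Keras API; the size settings below are assumptions,
# not values taken from Google's announcement.
import numpy as np
import tensorflow as tf

# A smaller width multiplier (alpha) gives a thinner, faster network,
# trading some accuracy for speed on less powerful phones.
model = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),
    alpha=0.5,            # hypothetical choice for a mid-range device
    weights="imagenet",   # pretrained weights for common-object recognition
)

# Run the classifier on a placeholder image (random pixels stand in
# for a real photo here).
image = np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32")
inputs = tf.keras.applications.mobilenet.preprocess_input(image)
predictions = model.predict(inputs)
print(tf.keras.applications.mobilenet.decode_predictions(predictions, top=3))

For a task beyond the built-in ImageNet categories, the same network could be loaded without its final classification layer and retrained on new labels, which is the kind of task-specific training described above.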

For consumers, this is going to mean more mobile apps with AI functions as developers start incorporating these tools. Running these sorts of tasks directly on-device has a number of benefits for everyday users, including faster performance, greater convenience (you don’t have to connect to the internet), and better privacy (your data isn’t being sent off-device).

Apple pushed the latter angle in particular when it released a set of machine learning APIs for developers (named Core ML) earlier this month, and both Facebook and Google have created their own frameworks for building mobile-first AI. Even Snapchat is working on putting image recognition on your phone, releasing its first academic paper on the subject this week. The next step for mobile AI? Specially designed mobile processors. Both Google and Apple have dropped hints they might be crafting such silicon, and ARM has already released an early first batch.