
Google’s latest AI experiment lets software autocomplete your doodles



Photo by Nick Statt / The Verge

Google Brain, the search giant’s internal artificial intelligence division, has been making substantial progress on computer vision techniques that let software parse the contents of hand-drawn images and then re-create those drawings on the fly. The latest release from the division’s AI experiments series is a new web app that lets you collaborate with a neural network to draw doodles of everyday objects. Start with any shape, and the software will then auto-complete the drawing to the best of its ability using predictions and its past experience digesting millions of user-generated examples.
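To make the idea concrete, here is a minimal, purely illustrative sketch (not Google's actual code): a doodle is treated as a sequence of pen offsets, and a recurrent network predicts the next offset from everything drawn so far. The stroke encoding, model, and numbers below are assumptions for demonstration; the real Sketch-RNN uses a richer mixture-density output and is trained on millions of Quick, Draw! doodles.

```python
# Illustrative stand-in for Sketch-RNN-style autocomplete (assumed, simplified design).
import torch
import torch.nn as nn

class TinySketchRNN(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Each timestep is (dx, dy, pen_down) -- a simplified stroke encoding.
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3)  # predict the next (dx, dy, pen_down)

    def forward(self, strokes, state=None):
        out, state = self.rnn(strokes, state)
        return self.head(out), state

def autocomplete(model, prefix, steps=20):
    """Digest the user's starting strokes, then let the model keep drawing."""
    model.eval()
    with torch.no_grad():
        _, state = model(prefix.unsqueeze(0))       # read what the user drew so far
        point = prefix[-1].view(1, 1, 3)
        completion = []
        for _ in range(steps):                      # predict one point at a time
            pred, state = model(point, state)
            completion.append(pred.squeeze())
            point = pred
    return torch.stack(completion)

# An oval-ish starting shape, like the first stroke of a pineapple.
prefix = torch.tensor([[1.0, 0.5, 1.0], [0.5, 1.0, 1.0], [-0.5, 1.0, 1.0]])
print(autocomplete(TinySketchRNN(), prefix).shape)  # (20, 3) predicted pen moves
```

The untrained toy network above will only scribble; the point is the loop structure, where each predicted pen move is fed back in to produce the next one.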

Google’s AI is constantly improving, thanks to human-drawn doodles

The software is called Sketch-RNN, and Google researchers first announced it back in April. At the time, the team behind Sketch-RNN revealed that the underlying neural net is being continuously trained using human-made doodles sourced from a different AI experiment, first released back in November, called Quick, Draw! That program asked human users to draw various simple objects from a text prompt, while the software attempted to guess what they were every step of the way. Another spinoff from Quick, Draw! is a web app called AutoDraw, which identifies poorly hand-drawn doodles and suggests clean clip art replacements.

All of these programs improve over time as more people use them and keep feeding the AI learning mechanism the instructive data it needs. The end goal, it appears, is to teach Google software to contextualize real-world objects and then re-create them using its understanding of how the human brain draws connections between lines, shapes, and other image components. From there, Google could reasonably deploy even better versions of its existing image recognition tools, or perhaps train future AI algorithms to help robots tag and identify their surroundings.

In the case of this new web app, you can now work alongside Sketch-RNN to see how well it takes a starting shape and transforms it into the object you’re trying to draw. For instance, select “pineapple” from the drop-down list of preselected subjects and start with just an oval. From there, Sketch-RNN attempts to make sense of the object’s orientation and decides where to try to doodle in the fruit’s thorny protruding leaves:

Photo by Nick Statt / The Verge

The image list is pretty diverse, with everything from a fire hydrant to a power outlet to the Mona Lisa. Yet Sketch-RNN can be pretty hit or miss when it comes to more complicated drawings — the research team behind the software is the first to admit this. Here, for instance, is the software trying its (virtual and disembodied) hand at doodling a roller coaster:

Photo by Nick Statt / The Verge

There are a number of other Sketch-RNN demos you can check out to get a deeper understanding of how the program functions. One, called “Multiple Predict,” lets Sketch-RNN generate numerous different versions of the same subject. For instance, when given a prompt to draw a mosquito, you just need to draw what looks like a thorax or abdomen, and Sketch-RNN will take it from there while showing you how else it predicts the image could be completed:

Photo: Google

Two other demos, titled “Interpolation” and “Variational Auto-Encoder,” have Sketch-RNN morph between two similar drawings in real time, and mimic your drawing with slight tweaks it comes up with on its own:

Photo: Google
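The “Interpolation” demo in particular boils down to a simple operation, sketched below as a hedged illustration rather than Google’s implementation: encode two drawings into latent vectors with the auto-encoder, then decode points along the line between them to morph one doodle into the other. The function name, latent size, and example subjects are assumptions for the demo.

```python
# Illustrative latent-space interpolation (assumed shapes and names).
import numpy as np

def interpolate_latents(z_start, z_end, steps=8):
    """Blend two latent codes; each blend would decode to an in-between sketch."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * z_start + a * z_end for a in alphas]

# Pretend latent codes for two drawings (say, a cat face and an owl).
z_cat, z_owl = np.random.randn(64), np.random.randn(64)
frames = interpolate_latents(z_cat, z_owl)
print(len(frames), frames[0].shape)  # 8 in-between codes to feed the decoder
```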

The whole set of programs is a fascinating look under the hood of the modern computer vision and object recognition tools tech companies have at their disposal. If you don’t mind drawing crudely with a computer mouse or trackpad and have some free time on your hands, it’s worth an afternoon to see how much better (or demonstrably worse) Sketch-RNN can make your doodles.