- The Quick Draw dataset collects 50 million sketches from 15 million artists
- Across the globe, people draw objects with remarkable similarity
- Machine learning focuses not just on what we draw but on how we draw it
What can a bathtub doodle teach us about humanity?
Mainly, that on some fundamental level, we are more alike than we may think.
To arrive at this insight, a recent art project called Forma Fluens analyzed millions of digital sketches from around the world. Sourced from Google’s Quick Draw app, these twenty-second finger-drawn doodles were originally intended to help neural networks learn to recognize shapes.
Filling in the blanks
When asked to draw an object, most people not only draw the same basic shape but also execute it in the same way. Artists begin with an outline and then fill in details, such as drawing an oval for a head and completing the eyes and other features afterward.
There are a few notable exceptions amid the apparent unity. Doodlers in most countries draw traffic lights as vertical rectangles, but in Japan and Taiwan they are depicted horizontally, reflecting the prevailing shape of the actual object in those regions.
Less easy to explain is the different rendering of phones. Most Quick Draw users sketch the simple rectangle of a smartphone, but in Japan and India a desktop rotary phone with a handset predominates. Since smartphones are widely available in these countries, the choice of the iconic, historical image reflects some cultural difference in seeing.
“Drawing is a primary form of creation,” says Martino. “A drawing reveals not only how someone sees the object represented, but also how they imagine and remember it.”
A cat by any other name
Doodles don’t accurately reflect the details of the objects they represent. They serve as symbolic communication, learned by children as they first begin to draw. Like their more-stylized modern kin, emojis, doodles carry emotional significance in just a few strokes.
The researchers behind Quick Draw are interested in improving a machine’s ability to decipher the idiosyncrasies on display as the global population portrays an object or thought.
A human can view a drawing of the head of a cat, the silhouette of a cat, or a cat’s features and recognize them as belonging in the same category. Examining millions of cat doodles helps the machine learn what all cat pictures have in common: a small nose, pointy ears, and whiskers.
To decipher variations in handwriting, the machine considers intent as much as results. It looks at which strokes are made first and in what direction — not just what we write, but how it is written. This attention leads to an ability to distinguish a hastily handwritten cursive ‘l’ from a ‘b’, or maybe even an ‘f’.
The Quick Draw dataset contains 50 million drawings across 345 categories by 100,000 artists from 34 countries and includes metadata for what the player was asked to draw and in which country they were located.
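For illustration, here is a minimal sketch of reading one record from the dataset's simplified ndjson export. The field names ("word", "countrycode", "drawing") follow the public dataset's documented schema; the sample line itself is invented:

```python
import json

# One invented line in the style of the simplified Quick Draw ndjson export.
# Each record holds the prompted label, the player's country, and the strokes.
sample = ('{"word": "cat", "countrycode": "US", "recognized": true, '
          '"drawing": [[[10, 40, 70], [20, 5, 20]]]}')

record = json.loads(sample)
label = record["word"]           # what the player was asked to draw
country = record["countrycode"]  # where the player was located
strokes = record["drawing"]      # list of strokes, each a [xs, ys] pair

# Count the total points across all strokes of the doodle.
num_points = sum(len(xs) for xs, ys in strokes)
```

In the real export each file is one category ("cat", "bathtub", and so on) with one JSON record per line, so the same parsing loop scales to millions of doodles.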
The data is publicly available, and Google’s researchers encourage others to use it in creative ways. One group analyzed the ‘circle’ category and correlated the direction of drawing (clockwise or counter-clockwise) with a region’s writing systems.
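The circle-direction analysis can be approximated with the shoelace formula: the sign of the area a stroke encloses tells you which way it was drawn. This is a sketch under two assumptions — the simplified stroke format (parallel x and y lists) and screen coordinates, where y increases downward, so a positive signed area means clockwise:

```python
def signed_area(stroke):
    """Shoelace formula over one stroke's points, treating it as a closed loop.
    In screen coordinates (y grows downward), a positive result means the
    stroke was drawn clockwise."""
    xs, ys = stroke
    n = len(xs)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the loop
        area += xs[i] * ys[j] - xs[j] * ys[i]
    return area / 2.0

def circle_direction(drawing):
    """Classify a single-stroke circle doodle by its first stroke's winding."""
    return "clockwise" if signed_area(drawing[0]) > 0 else "counter-clockwise"

# A square traced right, down, left, up — clockwise on screen.
square = [[[0, 10, 10, 0], [0, 0, 10, 10]]]
```

Here `circle_direction(square)` returns `"clockwise"`; running the same classifier over every circle in a country's doodles and tallying the two labels reproduces the kind of regional comparison described above.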
Another artist automatically generated composite faces whose features were sourced from different sketches. A face might have an eye from Korea, a nose from Germany, a mouth from Serbia, an ear from Australia, and a head from the Philippines.
In divisive times, it’s comforting to know that something as simple as a doodle can bring the human race closer together.
“When I think that all of humanity draws some objects in the same way — not only with the same form, but even with the same sequences of movements — I get shivers,” says Martino. “We are all so close to all of the other human beings on the planet.”
Forma Fluens has been shortlisted for the Kantar “Information is Beautiful” awards. Voting is open to the public until midnight PST on Friday, October 27. Winners will be announced in London on November 28, 2017.