Take a Look: Google’s Artificial Neural Network Thinks Art

"Art does not reproduce what is visible; it makes things visible." Artist Paul Klee could also have been talking about art produced by artificial intelligence and artificial neural networks. 

Researchers at Google posted some insight into what’s going on with their AI research for image classification and speech recognition, including artificial neural networks (ANNs). While it’s still difficult to pinpoint what will work and what won’t, there’s an intriguing byproduct—artistic imagery that Google calls “Inceptionism.”

In theory, ANNs learn much as biological neural networks do, with artificial neurons passing signals to one another as they learn to process data. The Google post about the ANN explains: “The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the ‘output’ layer is reached. The network’s ‘answer’ comes from this final output layer.” The network is shown millions of example images, and its parameters are adjusted until it reaches correct classifications.
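To make the stacked-layer idea concrete, here is a minimal sketch using PyTorch purely as an illustration; the Google post doesn’t name a framework, and the layer sizes below are invented for the example:

    import torch
    import torch.nn as nn

    # Each Linear + ReLU pair stands in for one "layer of artificial neurons":
    # an image enters the input layer, each layer talks to the next, and the
    # final output layer produces the network's "answer" -- one score per class.
    model = nn.Sequential(
        nn.Flatten(),                       # flatten a 28x28 grayscale image
        nn.Linear(28 * 28, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 10),                  # output layer: scores for 10 classes
    )

    image = torch.randn(1, 1, 28, 28)       # a stand-in for one training image
    scores = model(image)
    print(scores.argmax(dim=1))             # the class this untrained network picks

Training would then show the network millions of labeled images and nudge every weight until the answer above matches the correct label for most of them.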

But how do you check whether the network layers have correctly learned to categorize an image? Sometimes the network heads down the wrong path, and a generated image can help detect and correct the error. For example, when asked to picture a dumbbell, the network drew dumbbells with muscular arms attached, because it had rarely seen a dumbbell photographed on its own—a misconception the visualization made obvious.

However, researchers also found that neural networks trained to tell images apart can generate new images based on what they have learned. Researchers fed the network an image and let it analyze the picture, then selected a network layer and asked it to enhance whatever it detected there.

Each network layer deals with features at a different level. Lower levels tend to produce “strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations,” explained Google’s software engineers. “If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. We ask the network: ‘Whatever you see there, I want more of it!’”
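For the curious, here is a rough sketch of that “show me more of it” loop—a gradient-ascent trick, assuming PyTorch and a pretrained GoogLeNet from torchvision. The post doesn’t publish Google’s Inceptionism code, and the layer choice, step size, and iteration count below are illustrative guesses:

    import torch
    from torchvision import models

    # A pretrained image classifier stands in for Google's network (an assumption).
    model = models.googlenet(weights="DEFAULT").eval()

    # Capture the activations of one chosen layer; inception4c is an arbitrary
    # mid-to-high-level pick, not necessarily the layer Google used.
    activations = {}
    model.inception4c.register_forward_hook(
        lambda module, inputs, output: activations.update(out=output)
    )

    # Start from an existing photo -- or, for "neural net dreams", from random noise.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    for _ in range(20):
        model(image)
        # Gradient ascent: nudge the pixels so the chosen layer's response grows,
        # amplifying whatever patterns that layer already "sees" in the picture.
        activations["out"].norm().backward()
        with torch.no_grad():
            image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()

Hooking a lower layer tends to amplify edges and simple ornament-like textures, while a higher layer brings out complex features or whole objects—which is where the “Inceptionism” look comes from.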

Depending on your taste, the results are either extraordinarily cool or not.

Google also generated new impressions—“neural net dreams”—from random noise images, using a network trained on Places, a scene-centric database from MIT’s Computer Science and Artificial Intelligence Laboratory. Take a look at the Inceptionism gallery for high-resolution versions. What do you think? Is it art?
