Video: A.I. Flight

Neurons learn to recognize some amazing patterns.

We typically don’t know what goes on inside a neural network, but there are ways to get a better understanding of what the individual neurons in a network do. Like neurons in the brain, each neuron in a neural network produces an output, or activation, computed from its inputs. Those outputs are then used as inputs for other neurons.
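As a rough sketch of what "an output computed from its inputs" means, a single artificial neuron can be modeled as a weighted sum of its inputs plus a bias, passed through a nonlinearity. The specific numbers and the ReLU nonlinearity here are illustrative assumptions, not taken from any particular network:

```python
import numpy as np

def relu(x):
    """A common nonlinearity: pass positive values through, clamp negatives to zero."""
    return np.maximum(x, 0.0)

# One artificial neuron: weighted sum of inputs plus a bias, then a nonlinearity.
# The inputs, weights, and bias below are made-up illustrative values.
inputs = np.array([0.5, -1.2, 3.0])   # activations arriving from earlier neurons
weights = np.array([0.8, 0.1, 0.4])   # learned connection strengths
bias = 0.2

activation = relu(np.dot(weights, inputs) + bias)
print(activation)  # this value would feed into neurons in the next layer
```

In a full network, layers of such neurons are stacked, so each neuron's output becomes an input to many neurons in the following layer.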

Filter visualization applied to the VGG19 neural network, layer 9, filter 223, trained on ImageNet.

When a neural network has been trained to perform a certain task, such as classifying the main object in an image (e.g. is it an image of a gibbon or a coffee cup?), the individual neurons become specialized in picking out certain features of the input, such as horizontal lines, or circular objects, or more complex features such as objects with an eye-like appearance.

For example, if we took a neuron that has learnt to identify horizontal lines, and fed an image of horizontal lines into the network, then our neuron would activate strongly, producing a relatively large output. If we fed the network something else, say a circle, it would respond to a lesser extent. Neurons deeper in the neural network will typically respond to more complex patterns.
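We can see this selectivity in miniature with a single hand-made convolutional filter. The kernel and test images below are toy assumptions (a real network learns its filters during training), but they show why a horizontal-line "neuron" responds more to stripes than to a circle:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, written out directly with NumPy."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-made horizontal-edge kernel: responds where bright rows sit above dark rows.
horizontal_kernel = np.array([[ 1.,  1.,  1.],
                              [ 0.,  0.,  0.],
                              [-1., -1., -1.]])

# Test image 1: horizontal stripes (two bright rows, two dark rows, repeating).
stripes = np.zeros((16, 16))
stripes[::4] = 1.0
stripes[1::4] = 1.0

# Test image 2: a filled circle.
yy, xx = np.mgrid[:16, :16]
circle = ((yy - 8) ** 2 + (xx - 8) ** 2 <= 25).astype(float)

# Treat the mean absolute filter response as the "neuron activation".
act_stripes = np.abs(conv2d(stripes, horizontal_kernel)).mean()
act_circle = np.abs(conv2d(circle, horizontal_kernel)).mean()
print(act_stripes > act_circle)  # the stripes drive this neuron harder
```

The circle still produces some response where its outline runs roughly horizontally, which matches the idea that a non-preferred input activates the neuron "to a lesser extent" rather than not at all.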

The Video

In the video for this post, we used a simple technique called filter visualization. In this technique, we start with an image of random noise and feed it into the network. We then look specifically at the output of an individual neuron in the network (it can be any of them), and figure out how to modify the individual pixels of the input image so that the neuron produces a larger output. By doing this over and over again, we eventually get an input image that strongly activates our neuron, giving us a good idea of what type of image or features the neuron recognizes.
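The loop above is gradient ascent on the input pixels. A minimal sketch of it, using a toy linear "neuron" so the gradient is analytic (a real implementation would use automatic differentiation through a trained network such as VGG16, e.g. via PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one neuron: its activation is the elementwise product of a
# fixed weight pattern with the image, summed. For this linear neuron the
# gradient of the activation with respect to the image is simply the weights.
weights = rng.normal(size=(8, 8))

def activation(image):
    return float(np.sum(weights * image))

# 1. Start from a random noise image.
image = rng.normal(scale=0.1, size=(8, 8))
before = activation(image)

# 2. Repeatedly nudge every pixel in the direction that increases the activation.
learning_rate = 0.1
for _ in range(50):
    grad = weights                 # analytic gradient for this toy neuron
    image += learning_rate * grad

after = activation(image)
print(after > before)  # the optimized image activates the neuron much more
```

For the toy neuron the optimized image simply grows to resemble the weight pattern itself, which is exactly the intuition behind filter visualization: the image converges toward whatever pattern the neuron is tuned to detect.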

In the video, we start with our random noise input image, select a neuron (or filter) from our network (either VGG16 or VGG19), and perform filter visualization for a certain number of iterations. The modified image becomes the first frame. We then apply rotations, scaling, translations, and other transformations to that frame, reapply filter visualization, and the result becomes our second frame. Repeating this process many times, changing which filter we visualize along the way to produce different patterns, gives us the video.
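That alternation of transform and re-optimize can be sketched as a short loop. This reuses the toy linear neuron from before in place of a real VGG filter, and stands in a one-pixel translation plus a 90-degree rotation for the richer transformations used in the video, so it is an illustrative assumption rather than the actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(8, 8))  # toy stand-in for one network filter

def visualize(image, steps=20, lr=0.1):
    """A few gradient-ascent steps toward a higher activation (toy linear neuron)."""
    for _ in range(steps):
        image = image + lr * weights
    return image

def transform(image):
    """Translate by one pixel and rotate 90 degrees; the real pipeline also
    applies scaling and other transformations."""
    return np.rot90(np.roll(image, 1, axis=1))

# Frame 1: optimize a random noise image; then alternate transform / re-optimize.
frames = [visualize(rng.normal(scale=0.1, size=(8, 8)))]
for _ in range(5):
    frames.append(visualize(transform(frames[-1])))

print(len(frames))  # the frames, played in order, form the animation
```

Because each frame is a transformed version of the previous one that has been re-optimized, consecutive frames stay visually related, which is what makes the video flow rather than jump between unrelated patterns.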

Further Reading

Filter Visualization: How to visualize convolutional features in 40 lines of code
