Jeff Dean and his team from Google, working with Andrew Ng and Quoc Le from Stanford University, have effectively created a rudimentary, low-resolution digital version of the brain’s visual cortex.
The system, which consists of a cluster of 1,000 machines (16,000 processor cores in total), analyzes 10 million 200×200-pixel still frames taken from YouTube videos. Over three days, the system’s software builds up a neural network with roughly one billion connections. During this period, the system learns to identify low-level features (edges, lines, colors) and then forms object categories from combinations of those features.
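Google’s system is a nine-layer sparse autoencoder far too large to reproduce here, but the core idea of unsupervised feature learning can be sketched with a toy single-layer autoencoder. Everything below (the data, the sizes, the training loop) is illustrative, not Google’s actual setup:

```python
import numpy as np

# Toy sketch only: the real system was a deep sparse autoencoder with
# ~1 billion connections; this single-layer autoencoder on random 8x8
# "patches" just illustrates learning features without labels.
rng = np.random.default_rng(0)
X = rng.random((500, 64))          # 500 fake 8x8 image patches in [0, 1]

n_hidden = 16                      # each hidden unit = one learned feature detector
lr = 0.1
W1 = rng.normal(0.0, 0.1, (64, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 64)); b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(X):
    H = sigmoid(X @ W1 + b1)       # encode: pixels -> feature activations
    return H, H @ W2 + b2          # decode: feature activations -> pixels

_, X_hat = reconstruct(X)
initial_mse = float(np.mean((X_hat - X) ** 2))

for epoch in range(200):           # plain batch gradient descent on squared error
    H, X_hat = reconstruct(X)
    err = X_hat - X
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)   # backprop through the sigmoid
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, X_hat = reconstruct(X)
final_mse = float(np.mean((X_hat - X) ** 2))
print(f"reconstruction MSE: {initial_mse:.3f} -> {final_mse:.3f}")
```

The network is never told what any image contains; it is only rewarded for reconstructing its input, and the hidden units end up as feature detectors as a side effect.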
The rather intriguing result is that when the system looks at an image of a cat, a specific (digital) neuron fires, just as in a human brain. Watching the system in action, with its neurons lighting up, is almost like performing a virtual, digital MRI scan. In the picture below, you can see the optimal stimulus for the “human face” neuron, alongside some of the test images that successfully trigger it.
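Pictures like the face image are produced by computing a neuron’s optimal stimulus, the input that drives its activation hardest. For a single sigmoid unit this even has a closed form: the unit-norm input that maximizes the activation is the unit’s own weight vector, rescaled to length 1 (a consequence of the Cauchy–Schwarz inequality). A minimal sketch, using hypothetical weights rather than anything from the real system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained encoder weights: 64 pixels -> 16 hidden "neurons".
W1 = rng.normal(0.0, 1.0, (64, 16))

# For a unit computing sigmoid(x . w), the best unit-norm input is w
# itself, normalized; viewed as an 8x8 patch, it is what the neuron "wants".
unit = 3                                   # arbitrary hidden unit to inspect
w = W1[:, unit]
optimal_stimulus = w / np.linalg.norm(w)

# Sanity check: no random unit-norm probe excites the neuron more.
probes = rng.normal(size=(100, 64))
probes /= np.linalg.norm(probes, axis=1, keepdims=True)
best_probe = float((probes @ w).max())
print(float(optimal_stimulus @ w), ">=", best_probe)
```

For a deep network the optimal stimulus has no closed form, so it is found numerically, by gradient ascent on the neuron’s activation with respect to the input image.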