Comparing deep networks with the brain: Can AI “see” as well as humans?

BENGALURU: A new study from the IISc Centre for Neuroscience (CNS) explored how well deep neural networks – machine learning systems inspired by the network of brain cells, or neurons, in the human brain – compare to the human brain when it comes to visual perception.
Deep neural networks, which can be trained to perform specific tasks, have played a key role in helping scientists understand how our brains perceive the things we see, the researchers say.
“Although deep networks have evolved significantly in the last decade, they are still nowhere near performing as well as the human brain in perceiving visual cues. In a recent study, SP Arun, an associate professor at CNS, and his team compared various qualitative properties of these deep networks with those of the human brain,” IISc said in a statement.
Deep networks, while a good model for understanding how the human brain views objects, work differently from it, IISc said, adding that while complex computations are trivial for these networks, certain tasks that are relatively easy for humans can be difficult for them to perform.
“In the current study, published in Nature Communications, Arun and his team tried to understand which visual tasks these networks can perform naturally by virtue of their architecture and which require additional training. The team studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain,” the statement reads.
One example, IISc said, was the Thatcher effect, a phenomenon where it is easier for people to recognize changes in local features in an upright image, but this becomes difficult when the image is flipped upside down.
Deep networks trained to recognize upright faces showed the Thatcher effect, unlike networks trained to recognize objects. Another visual property of the human brain, called mirror confusion, was also tested on these networks. To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. The researchers found that deep networks, too, show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
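To make the idea concrete, the sketch below shows one way such an effect could be probed in a network: compare a pretrained model’s feature vectors for an image and its two reflections. This is a minimal, hypothetical illustration, assuming a torchvision ResNet-18 (torchvision 0.13 or newer), cosine similarity as the distance measure, and a placeholder image file; the study’s actual stimuli and metrics may differ. A Thatcher index could be computed the same way, from feature distances between upright and inverted manipulated faces.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Any ImageNet-pretrained model would do for this illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
# Drop the final classification layer to read out penultimate-layer features.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

def features(img: Image.Image) -> torch.Tensor:
    """Preprocess an image and return its flattened feature vector."""
    x = TF.to_tensor(TF.resize(img, [224, 224])).unsqueeze(0)
    with torch.no_grad():
        return feature_extractor(x).flatten()

img = Image.open("stimulus.png").convert("RGB")  # hypothetical test image
f_orig  = features(img)
f_vaxis = features(TF.hflip(img))  # reflection about the vertical axis
f_haxis = features(TF.vflip(img))  # reflection about the horizontal axis

cos = torch.nn.CosineSimilarity(dim=0)
# Mirror confusion predicts the vertical-axis reflection looks more
# similar to the original than the horizontal-axis reflection does.
print("vertical-axis similarity:  ", cos(f_orig, f_vaxis).item())
print("horizontal-axis similarity:", cos(f_orig, f_haxis).item())
```

If the network behaves the way the study describes, the first similarity should come out higher than the second.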
“Another phenomenon specific to the human brain is that it focuses on coarser details first. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of its leaves,” explains Georgin Jacob, first author and PhD student at CNS.
Surprisingly, he said, the neural networks showed a local advantage. This means that, unlike the brain, the networks focus first on the finer details of an image. So even though these neural networks and the human brain carry out the same object recognition tasks, the steps the two follow are very different.
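A common way to probe global versus local processing is with Navon-style stimuli: a large letter built from small copies of another letter. If a network’s features for an incongruent stimulus (say, a big H made of small Ts) sit closer to a globally matching stimulus than to a locally matching one, the network shows a global advantage; per the article, these networks tend to show the opposite. The sketch below is again a hypothetical illustration, not the paper’s method, and reuses the features() helper and cos from the previous snippet; the letter patterns are made-up stimuli.

```python
from PIL import Image, ImageDraw

# 5x5 dot patterns for two letters (hypothetical stimuli for illustration).
PATTERNS = {
    "H": ["X...X", "X...X", "XXXXX", "X...X", "X...X"],
    "T": ["XXXXX", "..X..", "..X..", "..X..", "..X.."],
}

def navon(global_letter: str, local_letter: str, cell: int = 40) -> Image.Image:
    """Render a big `global_letter` built out of small `local_letter`s."""
    img = Image.new("RGB", (5 * cell, 5 * cell), "white")
    draw = ImageDraw.Draw(img)
    sub = cell // 5  # size of one dot of the small letter
    for r, row in enumerate(PATTERNS[global_letter]):
        for c, ch in enumerate(row):
            if ch != "X":
                continue
            # Fill this cell of the big letter with a tiny copy of the small one.
            for rr, srow in enumerate(PATTERNS[local_letter]):
                for cc, sch in enumerate(srow):
                    if sch == "X":
                        x, y = c * cell + cc * sub, r * cell + rr * sub
                        draw.rectangle([x, y, x + sub - 2, y + sub - 2], fill="black")
    return img

incongruent  = navon("H", "T")  # big H made of small Ts
global_match = navon("H", "H")  # same global shape, different local parts
local_match  = navon("T", "T")  # same local parts, different global shape

fi, fg, fl = map(features, (incongruent, global_match, local_match))
print("similarity to global match:", cos(fi, fg).item())
print("similarity to local match: ", cos(fi, fl).item())
# A brain-like global advantage would make the first number larger; a local
# advantage, as the article describes for these networks, would make the
# second number larger.
```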
Arun, the study’s senior author, says identifying these differences may push researchers closer to making these networks more brain-like. Such analyses can help researchers build more robust neural networks that not only perform better but are also immune to the “adversarial attacks” that aim to derail them.
