
“Face detection in untrained deep neural networks” by Baek, S., Song, M., Jang, J. et al. 

Original article: https://doi.org/10.1038/s41467-021-27606-9

I do love this work for many reasons! 

Summary

It demonstrates that face-selectivity can emerge from untrained deep neural networks, whose weights are randomly initialized.  

The authors found that face-selective units emerge robustly in randomly initialized networks, and that these units reproduce many characteristics of face-selective neurons observed in monkeys. (They only claim that their results align with other studies of face selectivity in monkeys.)
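To make the idea concrete for myself, here is a minimal sketch (my own illustration, not the authors' code, and assuming a recent PyTorch/torchvision) of the kind of experiment the paper describes: take a randomly initialized AlexNet, record unit responses to face and non-face images, and compute a simple selectivity index. The stimulus folders and the 1/3 cutoff are placeholder assumptions.

```python
import glob

import torch
import torchvision.models as models
from torchvision import transforms
from PIL import Image

# Untrained AlexNet: weights are left at their random initialization
net = models.alexnet(weights=None).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def mean_unit_responses(image_paths):
    """Average response of each last-conv-layer channel over a set of images."""
    responses = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            fmap = net.features(x)                   # conv feature maps
            responses.append(fmap.mean(dim=(2, 3)))  # spatial average per channel
    return torch.cat(responses).mean(dim=0)          # average over images

# Hypothetical stimulus folders (placeholders)
face_resp = mean_unit_responses(glob.glob("stimuli/faces/*.jpg"))
object_resp = mean_unit_responses(glob.glob("stimuli/objects/*.jpg"))

# Simple per-channel face-selectivity index in [-1, 1]:
# positive values mean stronger responses to faces than to non-face objects
fsi = (face_resp - object_resp) / (face_resp + object_resp + 1e-8)
print("Channels with FSI > 1/3:", (fsi > 1 / 3).sum().item())
```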

Scientific importance

I think this work offers some insight into the following questions:

Can this neuronal selectivity arise innately, or does it require training from visual experience?

Where do innate cognitive functions in both biological and artificial neural networks come from?

“These findings may provide insight into the origin of innate cognitive functions in both biological and artificial neural networks.”

Is face selectivity a special kind of neuronal property, or is selectivity a common property shared by faces and other objects?

I partially agree with this work that selectivity is a common property shared by faces and other objects. However, I also believe that faces are special in that they play a key role in social interaction.

My question

However, I still do not understand how, technically (or physically, or biologically), face selectivity could develop in untrained deep neural networks (or in a primate brain).

Do you think that the key factor in the development of this phenomenon is the feed-forward connections rather than the statistical complexity embedded in each hierarchical circuit? Or maybe both?

Reference:

Baek, S., Song, M., Jang, J. et al. Face detection in untrained deep neural networks. Nat Commun 12, 7328 (2021). https://doi.org/10.1038/s41467-021-27606-9

“Adversarially Robust is A Big Deal”

It was interesting and surprising to see a tweet from Patrick Mineault reviewing the adversarial-attack problem in current artificial neural networks. I just want to put it in my notes so I can look back at it later.
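As a reminder to future me of what an adversarial attack actually does, here is a minimal one-step fast-gradient-sign-method (FGSM) sketch. This is my own generic illustration (assuming a recent PyTorch/torchvision), not anything from the tweet.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained classifier works as a target for the demonstration
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm(image, label, eps=0.01):
    """Return an adversarially perturbed copy of `image` (shape 1x3xHxW)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step along the sign of the loss gradient can flip the prediction
    return (image + eps * image.grad.sign()).detach()

# Usage: x is a preprocessed image tensor, y = torch.tensor([true_class_index])
# x_adv = fgsm(x, y); model(x_adv).argmax() often differs from model(x).argmax()
```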

“New Clues about the Origins of Biological Intelligence” by Prof. Rafael Yuste and Michael Levin

AUTHORS

Rafael Yuste is a professor of biological sciences at Columbia University and director of its Neurotechnology Center.

Michael Levin is a biology professor and director of the Allen Discovery Center at Tufts University.

A common solution is emerging in two different fields: developmental biology and neuroscience

The original article can be found at https://www.scientificamerican.com/article/new-clues-about-the-origins-of-biological-intelligence/

The keywords I want to put into my notes:

modularity; hierarchy; pattern completion

The interesting fact I want to remember:

“when Luis Carrillo-Reid and his colleagues at Columbia University studied how mice respond to visual stimuli, they found that activating as few as two neurons in the middle of a mouse brain—which contains more than 100 million neurons—could artificially trigger visual perceptions that led to particular behaviors”

The conclusion I would like to think more about:

“Like a ratchet, evolution can thus effectively climb the intelligence ladder, stretching all the way from simple molecules to cognition. Hierarchical modularity and pattern completion can help understand the decision-making of cells and neurons during morphogenesis and brain processes, generating well adaptive animals and behavior. Studying how collective intelligence emerges in biology not only can help us better understand the process and products of evolution and design but could also be pertinent for the design of artificial intelligence systems and, more generally for engineering and even the social sciences.”