Thoughts on designing a brain-like model

This is a great discussion point. If I understand correctly, we aim to build a model whose computational mechanisms resemble those underlying various cognitive processes and behaviors. In the brain, these mechanisms are shaped by anatomical structure and modified through learning (e.g., visual experience). A model such as an ANN likewise starts from its architecture and is then shaped by learning (e.g., training on data). When building a model, we need to consider the input, the internal representation, and the output, as well as the spatial and temporal dimensions of the information flow inside it. This question may be too big to cover fully here, so I want to highlight a few points I have been considering:

  1. Some low-level features are still missing in the early layers of current artificial neural networks. For example, studies have reported eye-position information in brain area V1, which has probably not been observed in current ANNs (see the first sketch after this list).
  2. The “neurons” in current ANNs do not seem to follow one unified principle, such as Karl Friston’s free-energy principle, which says that any self-organizing system at equilibrium with its environment must minimize its free energy. Here I would also like to know why the brain has separate inhibitory and excitatory neurons (see the second sketch after this list).
  3. Image classification involves both visual perception and decision-making, and from the brain’s perspective it may be tightly associated with the prefrontal cortex (PFC). My current emotion-perception study demonstrates that emotion valence ratings are associated with the PFC, so we may want to incorporate the PFC into a brain-like model of visual perception. In addition, we still have many open questions about how concepts (e.g., “knife,” “dog,” or “party”) are stored in the brain. A recent study from Leonardo Fernandino indicates that conceptual knowledge is stored as patterns of neural activity that encode sensory-motor and affective information about each concept. One could argue that the PFC is not necessary for animals to see, and that is likely true. However, is that “seeing” the same as “seeing” in the human brain? Probably not. Another recent study (by Schafroth, J. L., Basile, B. M., Martin, A., et al.) indicates that monkeys see a triangle as just a triangle, whereas humans spontaneously ascribe mental states to animated shapes.
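
To make point 1 concrete, here is a minimal sketch of how an eye-position signal could be fed into an early convolutional layer, loosely analogous to the gain-field-like modulation reported in V1. This is my own illustrative assumption, not a published architecture; the class name `GazeConditionedConv` and the two-channel gaze encoding are hypothetical choices:

```python
import torch
import torch.nn as nn

class GazeConditionedConv(nn.Module):
    """Toy early visual layer that receives eye position alongside the image.

    A sketch of one possible design, not a claim about any published model:
    the 2-D gaze coordinate is broadcast into two constant feature maps and
    concatenated with the RGB channels before the first convolution.
    """

    def __init__(self) -> None:
        super().__init__()
        # 3 image channels plus 2 channels carrying the (x, y) gaze position
        self.conv = nn.Conv2d(3 + 2, 16, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor, gaze_xy: torch.Tensor) -> torch.Tensor:
        b, _, h, w = image.shape
        # Broadcast the per-sample gaze coordinate over the full spatial grid
        gaze_maps = gaze_xy.view(b, 2, 1, 1).expand(b, 2, h, w)
        return torch.relu(self.conv(torch.cat([image, gaze_maps], dim=1)))

layer = GazeConditionedConv()
features = layer(torch.randn(2, 3, 64, 64), torch.tensor([[0.1, -0.3], [0.0, 0.5]]))
print(features.shape)  # torch.Size([2, 16, 64, 64])
```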
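
Regarding point 2: for reference, the variational free energy in Friston’s formulation is commonly written as $F = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)] = D_{\mathrm{KL}}[q(s)\,\|\,p(s \mid o)] - \ln p(o)$, so minimizing $F$ drives the internal beliefs $q(s)$ toward the true posterior over hidden states. On the excitatory/inhibitory question, one known way to impose such a split on an ANN layer is a Dale’s-law constraint, under which each unit’s outgoing weights share a fixed sign. The sketch below is a toy illustration under that assumption; the 80/20 excitatory-to-inhibitory ratio and the name `EILinear` are illustrative choices, not established values:

```python
import torch
import torch.nn as nn

class EILinear(nn.Module):
    """Linear layer with a Dale's-law sign constraint (a toy sketch).

    Input units are split into excitatory and inhibitory populations, so each
    unit's outgoing weights share one sign. The 80/20 split below is a
    hypothetical choice for illustration.
    """

    def __init__(self, in_features: int, out_features: int, frac_exc: float = 0.8) -> None:
        super().__init__()
        n_exc = int(frac_exc * in_features)
        sign = torch.ones(in_features)
        sign[n_exc:] = -1.0  # +1 for excitatory inputs, -1 for inhibitory inputs
        self.register_buffer("sign", sign)
        # Unconstrained magnitudes; the sign is applied at forward time so
        # gradient descent never flips a unit's excitatory/inhibitory identity
        self.weight = nn.Parameter(torch.rand(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.abs(self.weight) * self.sign  # |W| * sign enforces the constraint
        return x @ w.t() + self.bias

layer = EILinear(100, 32)
print(layer(torch.randn(4, 100)).shape)  # torch.Size([4, 32])
```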

It would be great to discuss more ideas for building a model that matches the brain more closely.