How the brain develops: a new way to shed light on cognition

Summary: A new computational neuroscience study sheds light on how the brain’s cognitive abilities develop and could help shape new AI research.

Source: University of Montreal

A new study introduces a neurocomputational model of the human brain that could shed light on how the brain develops complex cognitive skills and advance neural artificial intelligence research.

Published on September 19, the study was carried out by an international group of researchers from the Institut Pasteur and Sorbonne Université in Paris, CHU Sainte-Justine, Mila – Quebec Artificial Intelligence Institute, and the University of Montreal.

The model, which made the cover of the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS), describes neural development across three hierarchical levels of information processing:

  • the first sensorimotor level explores how the inner activity of the brain learns patterns of perception and associates them with action;
  • the cognitive level examines how the brain contextually combines these patterns;
  • finally, the conscious level considers how the brain dissociates itself from the outside world and manipulates learned patterns (through memory) that are no longer accessible to perception.

The team’s research provides clues to the basic mechanisms underlying cognition through the model’s focus on the interplay between two fundamental types of learning: Hebbian learning, which is associated with statistical regularity (i.e., repetition) and is often summed up by neuropsychologist Donald Hebb’s principle that “neurons that fire together, wire together”; and reinforcement learning, which is associated with reward and the neurotransmitter dopamine.
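
To make the distinction concrete, here is a minimal NumPy sketch (our own illustration, not the study’s code; the function names are ours) contrasting the two rules: a Hebbian update driven only by the coincidence of pre- and postsynaptic activity, and a reward-modulated, “three-factor” update in which a scalar dopamine-like signal gates the same Hebbian term.

    import numpy as np

    rng = np.random.default_rng(0)

    def hebbian_update(w, pre, post, lr=0.01):
        # "Neurons that fire together, wire together": the weight change is
        # proportional to the product of pre- and postsynaptic activity.
        return w + lr * np.outer(post, pre)

    def reward_modulated_update(w, pre, post, reward, lr=0.01):
        # Reinforcement-style, three-factor rule: the same Hebbian term is
        # gated by a scalar reward signal standing in for dopamine.
        return w + lr * reward * np.outer(post, pre)

    # Toy usage: 5 input neurons, 3 output neurons, one positive reward.
    w = rng.normal(scale=0.1, size=(3, 5))
    pre = rng.random(5)
    post = rng.random(3)
    w = hebbian_update(w, pre, post)
    w = reward_modulated_update(w, pre, post, reward=1.0)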

The model solves three tasks of increasing complexity at these levels, from visual recognition to cognitive manipulation of conscious perceptions. At each step, the team introduced a new core mechanism to allow the model to move forward.

The results highlight two fundamental mechanisms for the multilevel development of cognitive abilities in biological neural networks:

  • synaptic epigenesis, with local-scale Hebbian learning and global-scale reinforcement learning;
  • and self-organized dynamics, through spontaneous activity and a balanced excitatory/inhibitory relationship between neurons (illustrated in the sketch after this list).
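
As an informal illustration of the second mechanism (a toy two-population rate model built on our own assumptions, not the published network), the sketch below shows how spontaneous activity stays in a low, fluctuation-driven regime when excitation and inhibition are balanced, whereas removing inhibition drives the excitatory rate toward saturation.

    import numpy as np

    def simulate_rates(w_ee, w_ei, w_ie, w_ii, steps=500, dt=0.1, seed=1):
        # Two-population rate model: an excitatory (E) and an inhibitory (I)
        # unit driven only by weak random input, i.e. "spontaneous" activity.
        rng = np.random.default_rng(seed)
        r_e, r_i = 0.1, 0.1
        history = []
        for _ in range(steps):
            noise = 0.05 * rng.standard_normal()
            drive_e = w_ee * r_e - w_ei * r_i + noise
            drive_i = w_ie * r_e - w_ii * r_i + noise
            r_e += dt * (-r_e + np.tanh(max(drive_e, 0.0)))
            r_i += dt * (-r_i + np.tanh(max(drive_i, 0.0)))
            history.append(r_e)
        return np.array(history)

    # Balanced excitation/inhibition keeps spontaneous activity in a low,
    # fluctuation-driven regime; removing inhibition saturates the E rate.
    balanced = simulate_rates(w_ee=2.0, w_ei=2.0, w_ie=2.0, w_ii=1.0)
    unbalanced = simulate_rates(w_ee=2.0, w_ei=0.0, w_ie=2.0, w_ii=1.0)
    print(f"balanced mean E rate:   {balanced.mean():.2f}")
    print(f"unbalanced mean E rate: {unbalanced.mean():.2f}")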

“Our model demonstrates how neuro-AI convergence highlights biological mechanisms and cognitive architectures that can fuel the development of the next generation of artificial intelligence and even lead to artificial consciousness,” said Guillaume Dumas, team member, assistant professor of computational psychiatry at UdeM, and principal investigator at the CHU Sainte-Justine Research Center.

Achieving this milestone may require integrating the social dimension of cognition, he added. Researchers are now seeking to integrate the biological and social dimensions at play in human cognition. The team has already pioneered the first simulation of two whole brains interacting.

The team believes that anchoring future computational models in biological and social realities will not only continue to illuminate the basic mechanisms underlying cognition, but will also help build a unique bridge from artificial intelligence toward the only known system with advanced social consciousness: the human brain.

About this computational neuroscience research news

Author: Julie Gazaille
Source: University of Montreal
Contact: Julie Gazaille – University of Montreal
Image: The image is in the public domain

Original research: Open access
“Multilevel Development of Cognitive Skills in an Artificial Neural Network” by Guillaume Dumas et al. PNAS


Summary

Multilevel development of cognitive skills in an artificial neural network

Several neural mechanisms have been proposed to explain the formation of cognitive skills through postnatal interactions with the physical and sociocultural environment.
Here, we introduce a three-level computational model of information processing and cognitive skill acquisition. We propose minimum architectural requirements for building these tiers, and how the parameters affect their performance and relationships.

The first, sensorimotor level handles unconscious local processing, here during a visual classification task. The second, cognitive level integrates information from multiple local processors through long-range connections and synthesizes it globally, but not yet consciously. The third and highest level manages information globally and consciously; based on Global Neuronal Workspace (GNW) theory, it is referred to as the conscious level.
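
To make this division of labour concrete, here is a deliberately simplified structural sketch (built on our own assumptions; class names such as LocalProcessor and GlobalWorkspace are illustrative, not the authors’ implementation): local sensorimotor modules each process part of the input, while a recurrent workspace integrates their outputs through “long-range” weights and keeps a decaying trace of the last integrated state once input stops (the published model goes further and maintains a self-sustained representation).

    import numpy as np

    class LocalProcessor:
        """Level 1: unconscious local processing of one part of the input."""
        def __init__(self, n_in, n_out, seed):
            self.w = np.random.default_rng(seed).normal(scale=0.1, size=(n_out, n_in))

        def forward(self, x):
            return np.tanh(self.w @ x)

    class GlobalWorkspace:
        """Levels 2-3: integrates local outputs through 'long-range' weights
        and, via recurrence, keeps a decaying trace once input stops."""
        def __init__(self, n_units):
            self.state = np.zeros(n_units)
            self.w_rec = 0.9 * np.eye(n_units)  # toy recurrent weights

        def integrate(self, local_outputs):
            inputs = np.concatenate(local_outputs)
            self.state = np.tanh(self.w_rec @ self.state + inputs)
            return self.state

        def sustain(self):
            # No sensory input: only recurrence updates the workspace state.
            self.state = np.tanh(self.w_rec @ self.state)
            return self.state

    # Toy usage: two local processors, each seeing half of an 8-element input.
    x = np.random.default_rng(2).random(8)
    processors = [LocalProcessor(4, 3, seed=i) for i in range(2)]
    gw = GlobalWorkspace(n_units=6)
    gw.integrate([p.forward(part) for p, part in zip(processors, np.split(x, 2))])
    print(gw.sustain())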

We use the delay and trace conditioning tasks to challenge the second and third levels, respectively. The results first highlight the need for epigenesis, through the selection and stabilization of synapses both locally and globally, to enable the network to solve the first two tasks.
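
Epigenesis through the selection and stabilization of synapses can be read as: start from an exuberant, redundant set of connections, strengthen those that are repeatedly co-active, and prune those that are not. The toy loop below (our own illustrative assumptions, not the paper’s code) expresses that idea.

    import numpy as np

    rng = np.random.default_rng(4)

    # Exuberant initial wiring: every pre/post pair starts with a random synapse.
    w = rng.random((10, 10))
    alive = np.ones_like(w, dtype=bool)

    for _ in range(200):
        pre = (rng.random(10) < 0.3).astype(float)       # sparse random presynaptic activity
        drive = (w * alive) @ pre                        # drive through surviving synapses
        post = (drive > np.median(drive)).astype(float)  # crude competitive activation
        w += 0.01 * np.outer(post, pre)                  # Hebbian strengthening of co-active pairs
        w *= 0.99                                        # passive decay; only reinforced synapses keep up
        alive &= w > 0.05                                # prune synapses that fall below threshold

    print(f"surviving synapses: {alive.sum()} / {alive.size}")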

On a global scale, dopamine appears to be necessary for correct credit assignment despite the temporal delay between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input.
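
One generic way to assign credit across such a temporal gap (a sketch under our own assumptions; the paper may implement this differently) is an eligibility trace: each pre/post coincidence leaves a decaying synaptic tag, and a later dopamine-like reward converts whatever remains of the tag into an actual weight change.

    def run_trial(delay=20, lr=0.05, trace_decay=0.9):
        w = 0.0        # single toy synapse
        tag = 0.0      # eligibility trace left by a pre/post coincidence
        for t in range(delay + 1):
            pre, post = (1.0, 1.0) if t == 0 else (0.0, 0.0)  # stimulus only at t = 0
            tag = trace_decay * tag + pre * post              # the tag decays during the gap
            reward = 1.0 if t == delay else 0.0               # dopamine-like reward arrives later
            w += lr * reward * tag                            # reward converts the tag into learning
        return w

    # The weight still increases even though the reward arrives 20 steps after
    # the pre/post coincidence, because the trace bridges the gap.
    print(run_trial())  # small positive value (~0.006)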

Finally, balanced spontaneous intrinsic activity facilitates epigenesis at both the local and global scales, while a balanced excitatory/inhibitory relationship enhances performance. We discuss the model’s plausibility in terms of both neurodevelopment and artificial intelligence.
