Self-assembling neural networks can open new directions for AI research


This article is part of our coverage of the latest in AI research.

Biological and artificial neural networks, despite their shared nomenclature, are fundamentally different. High-performing deep neural networks often demand significant engineering effort. In contrast, biological nervous systems evolve through a dynamic, self-organizing process that starts with a single cell. 

In a recently published paper, scientists at the IT University of Copenhagen, Denmark, have proposed the concept of self-assembling neural networks that “grow through a developmental process that mirrors key properties of embryonic development in biological organisms.” 

Their early findings hint at a promising new direction in AI research that emulates the organic formation of biological neural networks in digital intelligence.

What’s missing in deep neural networks

Deep neural networks, unlike their biological counterparts, begin with a predefined, often manually designed structure. During the learning phase, they merely adjust their connection weights based on training examples. They lack the capacity to self-organize, grow, and adapt to new situations the way biological neurons do.
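
To make the contrast concrete, here is a minimal sketch (in NumPy, with hypothetical layer sizes) of how a conventional deep network learns: the architecture is fixed by hand before training, and learning only nudges the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# The structure is decided up front and never changes during learning.
W1 = rng.standard_normal((2, 8)) * 0.5   # input -> hidden
W2 = rng.standard_normal((8, 1)) * 0.5   # hidden -> output

def forward(x):
    return np.tanh(x @ W1) @ W2

def train_step(x, y, lr=0.1):
    """One gradient-descent step on squared error: only the weights move."""
    global W1, W2
    h = np.tanh(x @ W1)
    err = h @ W2 - y
    grad_W2 = h.T @ err
    grad_W1 = x.T @ ((err @ W2.T) * (1 - h ** 2))
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```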

In stark contrast, biological networks self-assemble, growing from a single initial cell to create highly complex systems. The information required to directly specify the wiring of an advanced biological brain far exceeds the information stored in the genome.

To put this into perspective, the human brain’s roughly 100 trillion neural connections are specified by only around 30,000 active genes.
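
A rough back-of-the-envelope calculation shows how severe this bottleneck is (illustrative numbers only):

```python
# ~100 trillion connections versus ~30,000 active genes.
connections = 100e12
genes = 30_000
print(f"{connections / genes:,.0f} connections per gene")  # ~3,333,333,333
```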

The authors of the paper note, “Neuroscience suggests that this limited capacity has a regularizing effect that results in wiring and plasticity rules that generalize well.” 

The inherent limitations of biological networks may contribute to their remarkable adaptability and efficiency. How can the same characteristics be built into deep neural networks?


Self-assembling neural networks

The vision outlined in the paper is to develop an AI system where neurons self-assemble, grow, and adapt based on the task at hand, emulating the natural processes observed in biological networks. The scientists propose a unique graph neural network (GNN) encoding that involves two interconnected neural networks. 

The first is a policy network that governs the actions of an agent. This policy network is, in turn, controlled by a second network operating within each neuron, termed the Neural Developmental Program (NDP). The NDP receives input from a neuron’s connected neighbors in the policy network and determines whether that neuron should replicate and what weight each of its connections should take. Starting from a single neuron, this approach grows a functional policy network, relying exclusively on local communication between neurons.
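
The paper frames this as a graph neural network encoding; the sketch below is a deliberately simplified, hypothetical version of that growth loop (the shapes, thresholds, and update rules are illustrative assumptions, not the authors’ implementation). It only shows how a single shared program plus local communication can grow a network from one neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4
# A single parameter matrix shared by every neuron plays the role of the NDP.
ndp_params = rng.standard_normal((STATE_DIM, STATE_DIM + 2))

def ndp(local_input):
    """The same program runs inside every neuron, like DNA inside every cell."""
    out = np.tanh(local_input @ ndp_params)
    new_state = out[:STATE_DIM]          # updated internal state of the neuron
    replicate = out[STATE_DIM] > 0.5     # should this neuron spawn a new one?
    weight = out[STATE_DIM + 1]          # weight of the connection to the child
    return new_state, replicate, weight

# Grow a network from a single neuron over a few developmental steps.
states = [rng.standard_normal(STATE_DIM)]
edges = []  # (parent index, child index, connection weight)
for step in range(3):
    for i in range(len(states)):
        # Local information: the neuron's own state plus its neighbours' mean state.
        neighbours = [states[b if a == i else a] for a, b, _ in edges if i in (a, b)]
        local = states[i] + (np.mean(neighbours, axis=0) if neighbours else 0.0)
        states[i], replicate, w = ndp(local)
        if replicate:
            states.append(states[i].copy())        # the child inherits its parent's state
            edges.append((i, len(states) - 1, w))
```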

This approach allows an agent’s neural network to start with a very basic structure and be shaped by its experience and environment, which is much closer to how biological networks form.

“Allowing each neuron in an artificial neural network to act as an autonomous agent in a decentralized way similar to their biological counterpart, could enable our AI methods to overcome some of their current limitations in terms of robustness,” the researchers write. “Our goal is to inspire researchers to explore the potential of NDP-like methods as a new paradigm for self-assembling artificial neural networks.” 

Self-assembling neural networks with neural developmental programs (NDP) (source: arxiv.com)

Putting NDPs to the test

The researchers put the self-assembling network to the test using both an evolution-based and a differentiable version of NDPs. The evolutionary algorithm employs a population-based process to optimize the NDP and assemble an efficient policy network. This system experiments with various configurations, retaining those that yield superior results. 

The researchers explain, “Similarly to how most cells in biological organisms contain the same program in the form of DNA, each node’s growth and the synaptic weights are controlled by a copy of the same NDP, resulting in a distributed self-organized process that incentivizes the reuse of information.”
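
A sketch of what the evolution-based variant could look like (a simplified stand-in, not the authors’ code): the shared NDP’s parameters act as the genome, each candidate genome is grown into a policy network, and the best-scoring genomes seed the next generation. Here `grow_network` and `evaluate_policy` are hypothetical callbacks, e.g. the growth loop above and an episode-return or accuracy score.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_ndp(grow_network, evaluate_policy, genome_size,
               pop_size=32, elites=4, generations=50, sigma=0.1):
    """Simple elitist evolutionary loop over the shared NDP's parameters."""
    population = [rng.standard_normal(genome_size) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness of a genome = performance of the policy network it grows.
        population.sort(key=lambda g: evaluate_policy(grow_network(g)), reverse=True)
        parents = population[:elites]   # retain the configurations that worked best
        offspring = [parents[rng.integers(elites)] + sigma * rng.standard_normal(genome_size)
                     for _ in range(pop_size - elites)]
        population = parents + offspring
    return max(population, key=lambda g: evaluate_policy(grow_network(g)))
```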

In contrast, the differentiable NDP uses gradient-based methods, optimizing the network through algorithms such as gradient descent.
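
In outline, the differentiable variant treats the whole grow-then-evaluate pipeline as one function of the NDP’s parameters and descends its gradient. The real approach would backpropagate through differentiable growth operations in an autodiff framework; the sketch below only illustrates the update rule, using finite differences and a hypothetical `grow_and_evaluate` loss function.

```python
import numpy as np

def grad_step(ndp_params, grow_and_evaluate, lr=0.01, eps=1e-4):
    """One gradient-descent step on the loss of the network grown from ndp_params."""
    base = grow_and_evaluate(ndp_params)
    grad = np.zeros_like(ndp_params)
    for idx in np.ndindex(ndp_params.shape):
        nudged = ndp_params.copy()
        nudged[idx] += eps
        grad[idx] = (grow_and_evaluate(nudged) - base) / eps
    return ndp_params - lr * grad
```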

The team applied both methods across a variety of settings, including the well-known XOR problem, MNIST, and reinforcement learning environments such as CartPole, LunarLander, and HalfCheetah. 
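
As an illustration of how such benchmarks plug into the loops above, a fitness function for the XOR task could look like this (a hypothetical scoring function, where `policy` is the callable grown by the NDP):

```python
import numpy as np

# The four input/output cases of the XOR problem.
XOR_X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
XOR_Y = np.array([0., 1., 1., 0.])

def xor_fitness(policy):
    """Higher is better: negative squared error of the grown policy on XOR."""
    preds = np.array([float(policy(x)) for x in XOR_X])
    return -np.sum((preds - XOR_Y) ** 2)
```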

The initial results are encouraging. The self-assembling systems found workable solutions to these problems, though they did not match existing solutions. However, the researchers remain optimistic, stating, “Our method can learn to grow networks and policies that can perform competitively, opening up interesting future work in growing and developmental deep neural networks.”

Examples of problems solved with neural developmental programs (NDP)

Current limitations of self-assembling neural networks

While the results of the research are promising, they also highlight some limitations of the current NDP methods. For instance, in the gradient-based approach, the researchers observed that “after a certain number of growth steps, the grown network’s performance can deteriorate, as policies do not really benefit from more complexity.” 

Another limitation is that the current version of the NDP does not incorporate any activity-dependent growth: it grows the same network regardless of the activations the agent receives during its lifetime. This is a significant point of divergence from biological nervous systems, which rely on both activity-dependent and activity-independent growth. Activity-dependent development allows biological systems to shape their nervous system in response to environmental factors.
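
One speculative way to close that gap (an illustration of the idea, not something proposed in the paper) would be to feed a running trace of each neuron’s incoming activations into the NDP’s input, so that the same genome could grow different networks in different environments:

```python
import numpy as np

def activity_aware_input(local_state, activity_trace, new_activation, decay=0.9):
    """Fold a running average of lifetime activity into the NDP's input."""
    activity_trace = decay * activity_trace + (1 - decay) * new_activation
    return np.append(local_state, activity_trace), activity_trace
```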

These limitations offer valuable directions for future exploration. Future research could focus on replicating the interplay between genome size, developmental steps, and task performance observed in biological systems. 

“NDPs offer a unifying principle that has the potential to capture many of the properties that are important for biological intelligence to strive,” the researchers write. “In the future, NDPs could consolidate a different pathway for training neural networks and lead to new methodologies for developing AI systems, beyond training and fine-tuning.”

Is self-assembly the ultimate solution for AI?

Nature has perfected the formula of self-assembly over billions of years of evolution and trillions of mutations. However, it’s important to acknowledge that the current structure of deep learning models offers certain advantages not found in nature. For instance, replicating a deep neural network is as simple as copying its weights, while nature must build every new instance from scratch, starting from a single cell.

NDPs can significantly contribute to AI research by enabling the exploration of the space of possible neural networks in ways previously unattainable, potentially leading to the discovery of new architectures. However, the benefits of digital neural networks are not likely to be eclipsed in the near future. The intersection of self-assembly and traditional deep learning presents an intriguing prospect, promising to yield a fascinating blend of biological inspiration and digital innovation in artificial intelligence.
