- Scientists often struggle to simulate the brain’s complexity
- A new algorithm can predict how billions of neurons rewire in the brain
- This could help us better plan brain surgery or even advance artificial intelligence
The brain is a truly unique organ. From making your heart beat to translating symbols on your screen into meaning in your head, the space between your ears is always working. Despite all it can do—or perhaps because of it—we’re still struggling to fully understand how the brain functions.
One area that fascinates scientists is how the connections between neurons work. These are called synapses, and they aren’t hard-wired. New connections can form while old ones collapse in a process called structural plasticity. Anything from learning new skills to healing after a stroke can result in restructuring.
While we have models that help us understand these connections, they generally aren’t efficient enough to fully simulate the billions of neurons in a real brain. This is the problem that computer scientist Dr. Sebastian Rinke of TU Darmstadt endeavored to solve with a new algorithm developed with his colleagues.
“Our model forms groups of neurons and decides at the coarser level which groups a neuron connects to,” says Rinke. “It's basically a set of groups and then it’s decided that you want to connect to a specific group and then the corresponding group is further unfolded. The same procedure is repeated until we finally select one single neuron, which is then taken as the target neuron for forming the synapse.”
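The procedure Rinke describes can be sketched in a few lines. This is a minimal illustration, not the team's actual implementation: the tree is mocked up as nested lists, and the group weights are a placeholder (neuron count) rather than the model's real distance-based probabilities.

```python
import random

def select_target(group):
    """Descend the group hierarchy until a single neuron remains.

    A list stands for a group; anything else is an individual neuron.
    At each level, one subgroup is picked at random (weighted), then
    unfolded, repeating until a single target neuron is selected.
    """
    while isinstance(group, list):
        weights = [group_weight(sub) for sub in group]
        group = random.choices(group, weights=weights)[0]
    return group

def group_weight(sub):
    """Placeholder attractiveness of a subgroup: its neuron count.
    The real model would use distance-dependent connection probabilities."""
    return len(sub) if isinstance(sub, list) else 1

# Nested lists stand in for the tree; integers are neurons.
tree = [[1, 2], [3, [4, 5]]]
target = select_target(tree)  # one of the neurons 1..5
```

The key point is that a decision is made per tree level over a handful of groups, rather than over all neurons at once, which is where the efficiency gain comes from.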
Mining existing research from particle physics and neuroscience, Rinke and the team were able to more efficiently map how neurons connect within the brain.
Out with the old
As with most science, Rinke’s work is built upon previous experience in the field. Prof. Dr. Felix Wolf, head of the research group in which the core algorithm was developed and a co-author of the work, points out that an existing model from two neuroscientists gave them a good start.
“Neither Sebastian nor I are really neuroscientists, but we were working together with one,” says Wolf. “We met Dr. Markus Butz-Ostendorf, who has developed a model together with Arjen van Ooyen that describes how the connections are formed. He came to us with this model because it wasn't really scalable. That means he couldn't calculate this process for more than 10⁵, that is 100,000, neurons. The human brain has about 10¹¹, or 100,000,000,000, neurons.”
With so many connections to sort through, it’s easy to see why mapping them accurately would be a challenge. The answer Rinke and his colleagues came up with was to organize the neurons into groups, inspired by the Barnes-Hut algorithm, a method from particle physics designed to simulate, for example, the movement of stars.
“What the algorithm does is that it builds a tree structure, which gives you these groups that you later need for the calculation of the probabilities,” says Rinke.
“In order to decide which groups of neurons to consider, we use the so-called acceptance criterion, which is taken from the Barnes-Hut algorithm. For instance, it might be that the root of the tree—which contains all the neurons—is too coarse because we grouped too many neurons there. That means we have to go one level deeper in the tree and into the next smaller groups. This is done until the acceptance criterion is satisfied for all the groups.”
According to Rinke, the acceptance criterion is the ratio of the size of the group to the distance to the virtual neuron that represents it. If this ratio is too large, the group appears too coarse from the source neuron's vantage point, and the algorithm must dive deeper into the tree. This is where the team drew inspiration from models used in particle physics.
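The criterion itself is simple to state. The sketch below assumes a Barnes-Hut-style threshold (here called `theta`, with an illustrative value; the article does not give the actual threshold or the exact distance measure used):

```python
def acceptable(group_size, distance, theta=0.5):
    """Barnes-Hut style acceptance test.

    A group may be treated as a single virtual neuron if it looks
    'small enough' from the source neuron, i.e. the ratio of its
    spatial extent to its distance falls below the threshold theta.
    theta trades accuracy for speed; 0.5 is only illustrative.
    """
    return group_size / distance < theta

# A distant group passes the test; a nearby group of the same size
# must be unfolded into its subgroups instead.
acceptable(10.0, 100.0)  # ratio 0.1  -> accept as one virtual neuron
acceptable(10.0, 15.0)   # ratio ~0.67 -> descend deeper in the tree
```

Groups that fail the test are replaced by their children, and the test is repeated level by level, exactly as in the descent Rinke describes above.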
“When we look at particle simulations, there are no probabilities calculated, but instead we examine the forces that the particles exert on each other,” says Rinke. “These forces depend on the distance between the corresponding particles. This is similar to our brain simulation where the probabilities also depend on the distance between the neurons in the brain.”
Rinke and his colleagues designed their algorithm within the Human Brain Project, a European Union effort to build a cutting-edge research infrastructure to advance knowledge in neuroscience, computing, and brain-related medicine. To evaluate the performance of their method, they used software tools created in the DFG Priority Program Software for Exascale Computing (SPPEXA).
This inspiration from another domain demonstrates that science doesn’t develop in a bubble. Interdisciplinary cooperation is the key to ensuring good ideas don’t become isolated in a particular field of study.
Planning for the worst
While the sheer scientific wonder of discovering a better brain simulation model is interesting enough, Rinke believes this work could have value in the real world. A tool such as this brain-modeling algorithm could, for instance, help neurosurgeons.
“One potential use is for understanding how rewiring the brain works—this might be beneficial for assisting surgery planning,” says Rinke.
Rinke plans to refine the model with the help of colleagues from various backgrounds, including medicine and neuroscience. The members of this interdisciplinary partnership hope to use simulation in combination with their algorithm to investigate how the brain evolves, for example, how the neural network changes after a tumor is removed.
“Based on this simulation, you could adjust your surgical planning—which parts to remove—or you could even make predictions about which areas might be affected after a removal, in terms of how well the brain rewires again,” says Rinke.
And, he adds, “One day, our algorithm might even help better understand human learning with various applications in artificial intelligence.”
Even though we’ve long been using it to solve humanity’s greatest problems, the brain’s full function remains a mystery. That said, research like this is helping us gain more insight into the most important organ we’ve got.