Day 3 - How to build a Brain (Rui Graça, Florian Engert, Robin Hiesinger, Bassem Hassan, and Stan)

From modularity and "messiness" on Day 2, we moved on to the ultimate "neuromorphic" question: how to build a brain.

Starting out strong, Rui Graça (designer of the SciDVS) explained to us how the hardware that hosts our best attempts at artificial brains is manufactured. Here, design and fabrication are separated. You define the architecture with your specific function in mind, then you send it to the fab to implement it on silicon, and finally (once it arrives and some poor Master's student has set up the PCB), you flip the switch and it turns on! It's clean, top-down, and a miracle of modern engineering. Unfortunately, biology doesn't work this way, as we would very soon learn.

Next, Robin Hiesinger stepped up to further elaborate on the differences between how biological and artificial neural networks are built, highlighting in particular the "ON switch" dilemma. Artificial networks cleanly separate development from function: after pre-specifying the initial conditions and network structure, you switch the network "ON", give it data, and let it learn. In biology, on the other hand, development and function are so entangled that it doesn't even make sense to think of an "ON" switch. Neurons begin to function, express channels, and fire while other neurons are still developing and differentiating. Even in ferrets, which are born blind, significant developmental activity occurs before the photoreceptors are even wired up to the visual cortex!

Why does Ronaldo clean his kitchen?

Because he's not Messi. This is how Florian Engert set up his point that brains are not messy either. For the benefit of the engineers in the room, Florian recapitulated the three major stages of brain development. First, cell differentiation: precursor cells divide into cells with more specific identities (like dopaminergic or Purkinje cells) under genetic control. Second, axon guidance: individual axons are guided toward their correct target regions, often following morphogen gradients. Third, synaptic matching: these axons establish the correct synaptic partnerships based on complex cell-cell recognition.

These three stages culminate in the brain as we know it. What's interesting is that when Florian's lab mapped the connectome of a baby fish, the connectivity matrix was vastly more sparse than in most artificial neural networks we build today. Even more astonishing is how reliably and specifically this sparsity is reproduced across individual fish and even across hemispheres! This claim did not go unchallenged, but after some back and forth, Florian clarified that it holds for the connectivity between high-level motifs and their presence, conceding that at more fine-grained levels, significant individuality and asymmetry exist.

And this sparse, replicable brain is ready for the real world immediately. Florian pointed out that all animals face the four fundamental challenges of life: feeding, fleeing, fighting, and mating (my words, not Florian's). While a baby fish isn't quite ready for the fourth "F", it can flawlessly execute the other three right out of the gate. Just like newborn giraffes, horses, or marine iguanas, baby fish are born fully competent and functional. This readiness at birth runs contrary to the intuitions of many of us engineers and AI researchers, whose networks need to be trained to do anything useful. At the very least, we would assume that a developing fish needs to gather some sort of data by interacting with the world to fine-tune its synapses. Luckily, Florian tested this assumption.

His lab grew zebrafish embryos in a neural activity blocker called tricaine, an antagonist of voltage-gated sodium channels. The fish were raised from the exact moment of conception with absolutely zero spiking activity. After seven days, the lab washed out the anesthetic, and the fish woke up perfectly identical to their normally raised siblings: fully developed and completely capable of solving all the same behavioral problems.

This finding suggests that input-driven spiking activity plays a surprisingly marginal role in the initial shaping of the brain's many complex circuits. While there was some pushback from the room (what about residual calcium activity?), Florian's point stood: biological brains ship with "factory settings" that are ready to go. Learning on the fly matters far less for the final product than the billion years of evolutionary learning that preceded it! Interacting with the world is at most fine-tuning on top.

Stop thinking of blueprints



Because of this, Robin argued against the popular metaphor of the genome as a static "blueprint". The genome is not an IKEA manual for brains. Bananas have more genes than humans, yet bananas don't have brains (at least to our current understanding). How could less code make a vastly more complex organism if the genome were just a blueprint?

The answer is what Robin calls "algorithmic growth". The genome gives rise to cells, bodies, and brains by expressing a dynamic network of interacting relationships. For example, genomes encode transcription factors: proteins that loop back onto the genome to trigger the expression of other proteins (quite possibly other transcription factors). Transcription factors creating transcription factors build cascading feedback loops over time. By continuously feeding back on itself, a relatively small set of genetic instructions can unfold over time to produce complexity at the scale of the brain.
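To make "algorithmic growth" a little more concrete, here is a minimal toy sketch. All gene names and rules below are invented for illustration; the point is only that a tiny rule set, seeded by a single factor, unfolds into a larger expressed state purely by feeding back on itself.

```python
# Toy "algorithmic growth": a handful of rules (genes) whose products
# feed back to switch other genes on, unfolding structure over time.
# All names and rules are invented for illustration.
RULES = {
    "tf_a": ["tf_b", "tf_c"],        # tf_a's protein triggers tf_b and tf_c
    "tf_b": ["tf_c", "struct_1"],    # tf_b triggers a "structural" gene
    "tf_c": ["tf_a", "struct_2"],    # tf_c feeds back onto tf_a
}

def unfold(steps):
    """Run the cascade, returning the expressed gene set at each step."""
    expressed = {"tf_a"}             # a single initially provided factor
    history = []
    for _ in range(steps):
        new = set()
        for gene in expressed:
            new.update(RULES.get(gene, []))
        expressed |= new             # expression is cumulative here
        history.append(sorted(expressed))
    return history

for t, state in enumerate(unfold(4)):
    print(t, state)
```

Even this trivial cascade shows the shape of the idea: the "program" is three short rules, but the state it produces is a function of repeated self-application rather than a one-to-one blueprint.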

In artificial networks, we usually think of "feedback" as a way to update synaptic weights based on network errors. But in biology, feedback is everywhere (all at once), and it operates across a whole range of levels and timescales. Robin pointed to "subcellular computation" (a concept already familiar to some of us). Dendrites compute, spines grow, and synapses chemically facilitate or depress within 10 to 20 milliseconds, building an additional level of dynamic regulation right underneath the simplified picture of weight matrices and non-linearities that we have gotten so used to. And underneath that level, we find further levels, all the way down to the genome itself.
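As a heavily simplified illustration of the dynamics hiding under a "weight", here is a sketch of short-term facilitation and depression in the spirit of the Tsodyks-Markram synapse model. The parameter values are illustrative assumptions, not fitted to any biological data.

```python
import math

def simulate(spike_times_ms, tau_facil=50.0, tau_rec=200.0, U=0.2):
    """Effective synaptic strength at each spike of a train.

    u tracks utilization (facilitation), r tracks available
    resources (depression); both relax between spikes.
    """
    u, r = U, 1.0
    last_t = None
    amplitudes = []
    for t in spike_times_ms:
        if last_t is not None:
            dt = t - last_t
            # between spikes: u decays back to U, resources recover to 1
            u = U + (u - U) * math.exp(-dt / tau_facil)
            r = 1.0 + (r - 1.0) * math.exp(-dt / tau_rec)
        amp = u * r                  # strength seen by this spike
        amplitudes.append(amp)
        r -= u * r                   # spike consumes resources
        u += U * (1.0 - u)           # and steps up utilization
        last_t = t
    return amplitudes

amps = simulate([0, 20, 40, 60])     # a 50 Hz burst
```

Note that the "weight" changes from one spike to the next within tens of milliseconds, exactly the kind of regulation that a static weight matrix abstracts away.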

Robin even pointed out that metabolism operates as a type of feedback loop across these levels that goes beyond mere energy regulation. It is thus critical to biological computation in a way that neuromorphic engineers might appreciate (given our endless concern about energy efficiency). To back this up with evidence, Robin cited a famous study showing that judges hand down significantly different legal verdicts right before lunch versus right after lunch. You can't separate the energy system from the thinking system.

A biologist’s Lala land chip

Finally, Bassem Hassan brought the day to a close by challenging the rigid perfection of Florian's specific models. While Florian highlighted precise symmetry, Bassem pointed out that at the electron microscopy level, actual neuronal connectivity can vary by up to 30% between individuals—even in genetically identical flies!

Bassem argued that the genome constrains the space of possibilities, favoring certain functional outcomes over others probabilistically instead of rigidly mapping to a specific pre-determined outcome. It’s a lot like protein folding: you could generate a massive number of random amino acid sequences, but most of them won't fold into anything useful. Biological development naturally evolved towards the tiny fraction of initial conditions and developmental rules that generate a self that can persist through time.

To synthesize his point, Bassem narrowed down the essence of brain building to three unavoidable, intertwined features: time, noise, and flux.


Time: Bassem emphasized that time is the hidden variable in development. Both the duration as well as the precise temporal order of developmental events fundamentally alter the final structure. In his words: "If you can change the order, you can change the outcome".

Noise: At the molecular level, biology is chaotic. Unlike most engineers, biology often doesn't even try to eliminate or reduce this noise; instead, it actively harnesses it to produce heterogeneity. It's a feature, not a bug! Individuality, and even the precision of axonal wiring in the brain, might actually be the product of highly noisy processes (Robin throws in a keyword: promiscuous synapses). Instead of one exact, rigid blueprint, biology produces a statistical distribution of viable brains, a sort of hedged bet against whatever may come.

Flux: Biological systems exist in a state of constant turnover. Molecules degrade, synapses reshape, states fluctuate, and yet the overarching system remains robust. This is dynamic stability (as opposed to the static stability of the things we usually build, which end up decaying at some point anyway). Biological entities persist through continuous change, regeneration, and self-replication, which lets them continuously adapt to non-stationary environments.

From these three observations about biological systems, heterogeneity, irreversibility (in practice), and "information" (in the sense of complexity at the final system level) naturally emerge.
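The "statistical distribution of viable brains" idea can be sketched with a toy Monte Carlo experiment (all probabilities and the viability criterion below are invented): the same noisy wiring rule, run many times, never produces the same network twice, yet almost every run still satisfies a simple functional constraint.

```python
import random

def grow_network(n=20, p_connect=0.25, rng=None):
    """A noisy developmental rule: each possible edge forms by chance."""
    rng = rng or random.Random()
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p_connect}

def is_viable(edges, n=20, min_degree=1):
    """Toy viability criterion: every neuron is wired to something."""
    degree = [0] * n
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return min(degree) >= min_degree

rng = random.Random(42)
runs = [grow_network(rng=rng) for _ in range(200)]
viable = sum(is_viable(e) for e in runs) / len(runs)
distinct = len({frozenset(e) for e in runs})
print(f"{distinct} distinct networks, {viable:.0%} viable")
```

No two runs produce the same wiring diagram, but the overwhelming majority land inside the viable region: a distribution of outcomes, hedged against failure, rather than one blueprint.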

Bassem ended by describing his "Lala land" dream chip, encapsulating the features observed in biology: dynamic, self-resembling across time, produced as a distribution of chips (post-tape-out), growing exponentially (somewhat controversial), and adapting as it grows. Concluding our morning, he left us with a question to ponder: what kind of intelligence would emerge on a chip with these properties?

H-trees

To help explain how this massive feat of navigation occurs without live practice, Stan Kerstjens brought his theoretical expertise to the floor. Expanding on the mystery of axon guidance, Stan discussed a fascinating model he designed. In his model, the process of cellular differentiation leaves lingering molecular traces that record the history of each cell's development. These chemical breadcrumbs effectively route axonal growth across long distances, solving a massive spatial navigation problem without the need for active trial-and-error firing.
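As a toy reconstruction of the breadcrumb idea (explicitly not Stan's actual model, just an invented illustration): suppose each cell carries a binary "lineage address" recording its differentiation history. A growth cone can then reach a distant target by greedily stepping toward cells whose traces share a longer developmental prefix with the target's, with no activity or trial-and-error involved.

```python
def lineage_address(index, depth):
    """Binary path from the root progenitor to this cell (toy encoding)."""
    return format(index, f"0{depth}b")

def similarity(a, b):
    """Length of the shared developmental prefix of two addresses."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(start, target, depth=6):
    """Greedy growth: at each step, move to the one-bit neighbor whose
    lineage trace best matches the target's trace."""
    here = lineage_address(start, depth)
    goal = lineage_address(target, depth)
    path = [here]
    while here != goal:
        neighbors = [here[:i] + ("1" if here[i] == "0" else "0") + here[i+1:]
                     for i in range(depth)]
        here = max(neighbors, key=lambda c: similarity(c, goal))
        path.append(here)
    return path

p = route(5, 48)
print(" -> ".join(p))
```

Because each greedy step extends the shared prefix, the growth cone converges on the target using only local comparisons of developmental history, which is the flavor of "navigation without firing" that the model captures.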

Grow don’t build?

Day 3 left the computer scientists and neuromorphic engineers in the room with a lot to think about. As Bassem provocatively suggested: maybe brain-like systems simply shouldn't be built—maybe they need to be grown instead, maintaining dynamic stability while they assemble themselves.

