Day 9 - Evolutionary Plasticity (Tony Zador, Esteban Real, Yigit Demirag)


The morning discussion began with a question: how much of intelligence is already encoded in the genome? Tony Zador from CSHL argued that neural networks cannot simply “do anything.” If we want artificial systems to become more intelligent, we need to understand what makes biological brains special. Tony’s early interest focused on increasing computational power within neurons themselves - for example through dendritic computations. Later, this shifted toward the need to understand neural circuits in much greater detail, motivating the development of large-scale connectivity mapping tools such as BARseq.

A central theme of the talk was a critique of the heavy focus on learning in modern machine learning. Tony argued that learning is often overrated, at least as a model for how organisms operate in the real world. This brought us back to the old nature-versus-nurture debate. 

Drawing on Moravec’s paradox, Tony argued that animals come equipped with billions of years of evolutionary “experience” embedded in the genome. Interacting effectively with the world is essentially the price of admission to the world. The key question is therefore not simply how animals learn, but how you get from a genome to a connectome that can already generate adaptive behaviour.

This naturally led to comparisons between biological and artificial neural networks. In machine learning, supervised learning often relies on huge datasets and repeated weight updates through backpropagation. Yet children can learn object categories from very little data. And many animals display sophisticated behaviors from birth. 

Examples came up quickly: spiders hunting immediately after hatching, beavers building dams (with a memorable side note about Justin the beaver), and Hopi Hoekstra’s work on innate tunnel-building behavior in mice. The point was not that behaviour is rigidly “hard-coded,” but that evolution strongly shapes what organisms can easily learn, perceive, or do.

The discussion repeatedly returned to the idea that a large amount of behavioral “knowledge” must somehow already be encoded in developmental programs. Many questions were raised about how this differs from selective breeding and evolutionary shaping over generations, and about how one should account for the enormous amount of developmental and embodied interaction data a baby receives before birth and during development. At some point the conversation crystallized into a funny but useful simplification: so… are computers bad and animals good? Probably not. The problem may simply be that the comparison is badly framed.

How do animals manage to do those things at all? This shifted the discussion toward embodiment.

Several participants emphasized the importance of embodiment and morphology. Bodies differ, and therefore the possible actions and learning dynamics differ too. This connects to the idea that there is no true tabula rasa: organisms begin life with strong inductive biases. A further question followed: if biological intelligence is deeply shaped by innate structure and evolution, does bio-inspired engineering mean copying biology? Not necessarily. From there the conversation moved to evolutionary strategies, evolvability, and robustness. Morphological variation enables organisms to occupy different ecological niches while preserving a common underlying body plan. Evolvability was framed as flexibility along constrained developmental axes.

The conversation also touched on “cortical chauvinism” - the tendency to over-focus on the cortex as the unique source of intelligence. One participant suggested that canonical cortical circuits may provide a substrate that allows animals to flexibly repurpose behaviors and tools - for example, preserving abstract “ideas” about how objects can be used. Questions of modularity naturally followed: does evolvability require modular organization? And can there be evolvability without robustness? Sara Solla brought the discussion back by suggesting that perhaps the only thing the genome needs to specify is the local circuits and the statistics of connectivity.

But in insects, some neurons really do connect to specific partners, and C. elegans has a highly stereotyped connection matrix. Is there enough information capacity in the genome to specify a wiring diagram? The numbers suggest not: the human genome contains roughly 3 billion base pairs, while the human brain has on the order of 10^14 synapses, so an explicit connection list simply cannot fit.

The proposed answer was that developmental rules act as compression algorithms. Rather than explicitly encoding every connection, the genome may encode generative rules for constructing circuits. One participant compared this to image compression: JPEG is not a stored image itself, but a set of rules for reconstructing one.
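To make the compression idea concrete, here is a minimal sketch (an illustration, not any actual developmental model from the talk; the cell types and probabilities are invented): the “genome” stores only a few connection-probability rules per cell-type pair, and a developmental program unpacks them into a full wiring diagram.

```python
import random

# Hypothetical "genome": four wiring-rule parameters, one per
# cell-type pair, instead of an explicit N x N connectome.
GENOME = {
    ("exc", "exc"): 0.10,   # connection probabilities (assumed values)
    ("exc", "inh"): 0.40,
    ("inh", "exc"): 0.30,
    ("inh", "inh"): 0.05,
}

def develop(genome, n_neurons, seed=0):
    """Unpack the compact rules into a full set of synapses."""
    rng = random.Random(seed)
    # Assume an 80/20 split of excitatory to inhibitory neurons.
    types = ["exc" if i < int(0.8 * n_neurons) else "inh"
             for i in range(n_neurons)]
    synapses = set()
    for i in range(n_neurons):
        for j in range(n_neurons):
            if i != j and rng.random() < genome[(types[i], types[j])]:
                synapses.add((i, j))
    return synapses

wiring = develop(GENOME, n_neurons=100)
# A handful of rules expands into thousands of specific connections.
print(len(GENOME), "rules ->", len(wiring), "synapses")
```

Like the JPEG analogy: what is stored is not the circuit itself but a generative recipe whose output is far larger than its description.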

To search over functional architectures, you still need data and interaction with the environment. Toward the end, the discussion shifted to developmental neuroscience itself. Zador argued that the field is deeply relevant to intelligence, but often appears impenetrable to outsiders because it becomes buried under lists of molecules and pathways without clearly stating: what computational problem is actually being solved? 

The session closed with broader questions about the relationship between genomes, bodies, brains, and environments:

  • How are muscles specified?
  • How much of body structure is genetically encoded?
  • How much intelligence resides in the body itself?
  • And how much emerges only through interaction with the environment?

The overall message of the discussion was not anti-learning. If anything, it was a reminder that learning is only one part of intelligence. Evolution, development, embodiment, and innate structure may do much more of the work than we usually admit.


Evolutionary algorithms, broken robots and AlphaFold spaghetti

After the coffee break we had a session on evolutionary algorithms with Yigit Demirag and Esteban Real.

The central idea was deceptively simple: instead of hand-designing intelligent systems, could we evolve them?

A lot of the discussion revolved around the connection between biological evolution and machine learning. Modern AI typically evaluates algorithms on datasets, adjusts weights, optimizes loss functions, and repeats until something works. Evolutionary approaches try to do something slightly weirder: mutate systems repeatedly and let selection figure things out.

Someone joked that standard weight adjustment in ANNs is just boring compared to evolving entire algorithms or architectures, not to mention the biological analogy, where evolution discovered useful solutions over millions of iterations without explicitly “knowing” the problem in advance. The interesting question is therefore not only what solution evolution finds, but also: what kinds of solutions tend to emerge at all?
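The mutate-and-select loop itself is tiny. Here is a toy sketch (not any specific system from the session): evolve a bit string toward a target using only random mutation and elitist selection, with no gradients and no explicit knowledge of the loss landscape.

```python
import random

rng = random.Random(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # arbitrary goal pattern

def fitness(genome):
    # Number of positions matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Variation: flip one random bit.
    child = genome[:]
    i = rng.randrange(len(child))
    child[i] = 1 - child[i]
    return child

population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                        # selection
    population = survivors + [mutate(rng.choice(survivors))
                              for _ in range(15)]     # variation
    if fitness(population[0]) == len(TARGET):
        break

print("best fitness:", fitness(max(population, key=fitness)))
```

Everything in real evolutionary search - programs instead of bit strings, richer mutation operators, neutral drift - is elaboration on this skeleton.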

A particularly interesting thread in the discussion with Esteban Real and the Google team revolved around neutral evolution - mutations that are not immediately useful, but may create space for future innovation.

This raised an important challenge for evolutionary algorithms: how do you preserve exploratory potential without optimizing it away too early?

Esteban showed surprisingly simple evolved programs - sometimes as short as ten lines of code - solving non-trivial tasks. The striking point was not just that evolution could find solutions, but that these solutions remained interpretable. Even in more complex setups, evolutionary search could produce compact algorithms whose logic could still be traced.

That immediately raised another question: are short programs favored by evolution? If deletion mutations happen more often than additions, does evolution naturally compress solutions?

The discussion then shifted to robustness. One example involved robots evolving locomotion strategies: if a robot “breaks a leg”, can evolution adapt and recover function? As Sara Solla pointed out, this mirrors biology - functions evolved for one purpose can become critical under unexpected conditions.

More broadly, we debated whether evolutionary search can remain understandable as it scales. Can evolved subroutines become building blocks for more complex functions? Can we preserve interpretability while increasing complexity?

At one point, Melika asked the obvious but important question: how do these systems actually start? This naturally connected back to inductive biases. Modern AI often succeeds because of carefully chosen architectural biases - convolutional networks, transformers - but the question remained whether some of these biases could themselves be discovered rather than imposed.

And behind all of this was a deeper tension: evolution is slow, exploratory, noisy, and wasteful - but incredibly robust. AI optimisation is fast, targeted, and efficient - but often brittle. Maybe the real lesson is not to replace one with the other, but to understand where each works best.

The discussion also drifted into the political reality of large-scale AI: compute is not infinite, energy is not cheap, and access to these systems is concentrated in very few hands. A useful reminder that technical choices are never fully separated from the material and social conditions in which they are made. At its core, the conversation came back to a recurring workshop theme: how do you design systems - biological or electronic - that can tolerate noise, exploit variability, and remain functional under imperfect conditions?

That discomfort, as several people noted, may be exactly where new ideas emerge.


