Day 2 - How parts make a whole (Dan Goodman, Bassem Hassan, Matthew Cook, Mariela Petkova)


Day 2 opened with wonderfully clear weather, light winds, and a “simple” question from Dan Goodman: can skills (tools) ever be uploaded into a brain? Or onto a neuromorphic device? This question opened up a bigger discussion of a fundamental problem in both neuroscience and AI: how is intelligence organized? What distinguishes humans? Is it the ability to quickly learn to use tools? And how does this relate to agentic AI, which increasingly includes LLMs that use added tools, e.g. calculators and IDE integrations, to carry out particular tasks?

We talked about modularity - the idea that large functions can be decomposed into smaller specialized parts. But what exactly counts as a module?

At one extreme lies strict modularity: isolated units communicating through narrow, well-defined interfaces. Efficient, reusable, and great for generalization. At the other extreme is the “blob”: densely interconnected systems, where function emerges from entanglement rather than separation. This tension runs through both neuroscience and AI.

In brains, lesion studies have long suggested modular organization: damage one area, lose one function. Yet plasticity complicates that picture: functions can shift, compensate, and re-emerge elsewhere. In AI, modularity can be imposed by design, but can it also emerge spontaneously? It seems hard to get modularity to emerge in ANNs; LLMs are a good example of the engineered route, with mixture-of-“experts” models. Instead of sending every input through the whole system, the signal is routed to different specialized experts. The structure is designed in advance, but not its content. Dan contrasted these two paths: engineered modularity versus emergent modularity.
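To make the routing idea concrete, here is a minimal toy sketch of mixture-of-experts routing in numpy. All the numbers and names (gate, experts, top-k) are illustrative assumptions, not any particular LLM's implementation; the point is only that the gate-plus-experts structure is fixed in advance while what each expert computes would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy mixture of experts: the structure (one gate plus n_experts experts)
# is designed in advance, but the contents of each expert are not.
n_experts, d_in, d_out = 4, 8, 3
gate_w = rng.normal(size=(d_in, n_experts))          # gating network
experts = rng.normal(size=(n_experts, d_in, d_out))  # one linear map per expert

def moe_forward(x, top_k=2):
    """Route the input to the top-k experts instead of through all of them."""
    scores = softmax(x @ gate_w)
    chosen = np.argsort(scores)[-top_k:]             # indices of the top-k experts
    weights = scores[chosen] / scores[chosen].sum()  # renormalize over the chosen
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.normal(size=d_in))  # only 2 of the 4 experts do any work
```

The design choice worth noticing: compute scales with `top_k`, not with the total number of experts, which is why this engineered form of modularity is attractive at scale.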

The conversation kept returning to development. Is modularity something the brain starts with, or something it grows into? Perhaps both: evolution provides constraints, development builds structure, and learning refines it. A funny anecdote from Sara Solla: some psychedelics like peyote can produce similar hallucinations across people (similar recurring patterns), suggesting that aspects of perception are constrained by built-in architecture. But we will talk about development tomorrow.

Sara also mentioned the concept of a “bag of hypotheses”: fixing an architecture restricts the set of functions that can be applied or learned. Biological brains, unlike most artificial systems, can expand or reshape that bag by adding, pruning, and rewiring components. Can learning still happen with a fixed architecture?

Dan brought up the example of tool use: a tool can be viewed as an external module. Humans are initially not expert at using a tool, but can learn quickly.

This naturally led to broader metaphors for brain organization, such as a society or a multi-agent system. How can you organize a society? Is the brain hierarchical, with top-down control? A democracy, where many weak opinions combine into strong decisions? Would each neuron itself be a society?

We also talked about the brain as a community of peers: not strictly hierarchical, not fully fused, but semi-independent modules communicating over low-bandwidth channels. This could work; it is iterative and allows flexibility. But as soon as you start training the modules, they become entangled with each other. Is there a better way to train such a system?

Bassem Hassan pushed back against overly rigid views of modularity by reframing plasticity as compensation. Lesion studies often suggest specialized functions because damage causes deficits, but we rarely hear about the negative results, where functions recover or are taken over elsewhere. This points to a kind of transferable modularity: modules may be specialized, but not irreplaceable. That flexibility is part of what makes brains robust. Biological systems tolerate damage, noise, and variability while maintaining function. Especially in sensory systems, some aspects of the input space are already structured by evolution and development, tuned to the kinds of problems the organism needs to solve. From fruit flies to humans, nervous systems seem to strike a similar balance: built-in structure for efficient niche-specific solutions, combined with enough plasticity to adapt when needed. If artificial systems want to match that efficiency, this trade-off may be worth taking seriously.

 

The recurring conclusion? We may still lack the vocabulary to describe these systems. As Dan put it, modularity may not be cleanly separable at all, perhaps what matters is entangled modularity, borrowing from Luiz Pessoa’s idea of the “entangled brain”.

Next, Matthew Cook grounded these ideas in cortical anatomy.

The cerebral cortex is often described hierarchically, but real connectivity tells a messier story: nearly two-thirds of cortical areas are directly connected, and these bidirectional connections carry different, asymmetric weights in each direction. Feedforward and feedback pathways differ structurally and functionally. Communication is bidirectional, but not equivalent in both directions.

Can we expect similarities in organization across different scales? Sara argued that we should not expect the same principles to hold across scales, since functions differ fundamentally. Bassem pointed out that we should look at fully mapped systems like C. elegans and Drosophila to see whether they reveal shared organizational principles, including how hierarchical processing can coexist with dense, all-to-all connectivity.

We finished this part with an important question: how much should theory speculate, and how much should it simply follow the data?

During the coffee break, this year's bloggers quickly met to plan how to split up the blogging.

Bloggers: Tobi, Anna Olympia, Kathrin; in back, Sam and Christian


In the third part of the morning session, Mariela Petkova brought the experimentalist’s perspective, talking about her connectomic work in larval zebrafish and electric fish.

Her central argument was provocative: brains are not messy. Not because they are deterministic, but because variability itself follows strict, stochastic developmental wiring rules. The debate quickly turned semantic and philosophical: what do we mean by “messy”? Random? Stochastic? Variable? And what does it mean for something not to be messy?

The consensus drifted toward something subtle: biological systems are not precise in outcome, but precise in distribution. Wiring rules harness stochasticity as a feature, not a bug. As Bassem Hassan pointed out, cell types may be better thought of as probability distributions in a multidimensional space, rather than fixed categories. Sara added that all we need is a brain that is statistically consistent for information transmission.
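The phrase “precise in distribution” can be illustrated with a tiny numpy sketch. The wiring rule, cell types, and probabilities below are hypothetical toy numbers, not data from the talk; the sketch only shows how two “individuals” grown from the same stochastic rule differ synapse by synapse yet share the same connection statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical wiring rule: connection probability depends only on the
# (presynaptic type, postsynaptic type) pair, not on individual cells.
p_rule = np.array([[0.6, 0.1],
                   [0.3, 0.5]])  # rows: pre type, cols: post type

def grow_brain(n_per_type=200):
    """Sample one 'individual': a binary connectivity matrix drawn from the rule."""
    types = np.repeat([0, 1], n_per_type)
    p = p_rule[types][:, types]     # per-pair connection probabilities
    return rng.random(p.shape) < p  # each potential synapse is a coin flip

# Two individuals: different wiring, same statistics.
a, b = grow_brain(), grow_brain()
print((a != b).mean())                             # nonzero: wiring differs synapse by synapse
print(a[:200, :200].mean(), b[:200, :200].mean())  # both close to 0.6: precise in distribution
```

This is the “feature, not a bug” point in miniature: checking the type-to-type block means recovers the rule even though no two sampled matrices agree.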

Brains meet chips: Mariela, Chenxi, and Lucas

The morning closed by comparing biological brains to chips.

Our neuromorphic engineers (Lucas and Chenxi) showed chips inspired by cortical principles: programmable neurons, plastic synapses, state-dependent computation. But compared to biological brains, today’s chips remain tiny and heavily scaffolded.

A larval zebrafish brain packs ~100,000 neurons and ~20 million synapses into a millimeter-scale structure. Current neuromorphic chips operate at a fraction of that scale, and without the rich developmental history that shapes biological computation.


This discussion exposed a contrast between neuromorphic hardware and biology: chips are programmable, but not neutral. Lucas pointed out that hardware itself has a “body”: its behavior depends on physical properties like temperature, resistance, and fabrication mismatch. In that sense, chips already carry their own form of variability and history-dependence.

This raised a deeper question: if variability is unavoidable, can it be harnessed rather than eliminated? Biological systems do this all the time, building robust function on top of noisy components. Neuromorphic systems may need the same strategy, not copying exact biological wiring, but borrowing the logic of statistical connectivity, plasticity, and state-dependent computation.

A recurring question throughout the morning was: can any of this be tested? Sara often brought the discussion back to experiments, while Bassem reminded us that science often starts with speculation. Not every idea needs an immediate answer; sometimes its role is simply to open new questions.

The afternoon continued with breakout discussions, where smaller workshop groups furiously discussed different project ideas and how to turn them into concrete projects. Kathrin, Simone, and Tobi went to check the Friday workshop banquet dinner location at Agriturismo Barbagia to make sure it has room to present this year's Mahowald-Mead awards.





Sports was tennis, with up to 12 people on one court. The evening brought demos in the disco lab, followed by more extended discussions.

 

