Day 4 - From Neural Populations to Societies (Sara Solla, Byron Yu)

 


After Day 3 left us questioning whether brains should be built or grown, we stepped away from “LALA LAND” (see Day 3) into something more grounded, but no less mysterious… Manifolds 🍥

  • What does the biological brain actually do at the population level?
  • How does collective neural activity give rise to behavior, and (how) can we use it?

These were the core questions guiding the morning.


Sara Solla opened by reminding us how neuroscience began: single neurons, single responses, tuning curves, and so on.

But the brain is not a collection of isolated units. It is a system where function emerges from interactions (and at multiple levels).

So the old single-neuron picture was incomplete, to the point of being misleading.

The key shift in modern neuroscience over the past decades has been technological: from recording a handful of neurons to hundreds, thousands, even tens of thousands with tools like Neuropixels.

And what emerges from these recordings? We see not just noise or chaos, but structure. 

The “computational part” mostly lives in the collective, and not in individual neurons. 

This collective property is an emergent behavior that allows hundreds of millions of neurons to move a single arm. 


The Ambient vs. The Empirical Space

When we record from, say, 100 neurons using MEAs (Multi-Electrode Arrays) or Neuropixels, we generate spike rasters (spikes across neurons and time). To analyze them, we bin the data (usually in 2 ms to 30 ms windows, depending on whether we care about physiology or muscle elasticity, for example). This creates a matrix where N (the number of neurons) is our ambient dimension.
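A minimal sketch of that binning step, assuming spike times are given per neuron in seconds (the function name and toy data are illustrative, not from the talk):

```python
import numpy as np

def bin_spikes(spike_times, t_end, bin_ms=20.0):
    """Bin per-neuron spike times (in seconds) into an N x K count matrix."""
    n_bins = int(round(t_end / (bin_ms / 1000.0)))
    edges = np.linspace(0.0, t_end, n_bins + 1)
    # One row of spike counts per neuron: shape (N, K)
    return np.stack([np.histogram(st, bins=edges)[0] for st in spike_times])

# Toy example: 3 neurons over 1 s, 20 ms bins -> X has shape (3, 50)
spikes = [np.array([0.01, 0.015, 0.5]), np.array([0.3]), np.array([])]
X = bin_spikes(spikes, t_end=1.0)
print(X.shape)  # (3, 50)
```

The bin width is the analysis knob mentioned above: a few milliseconds to resolve fine physiology, tens of milliseconds for slower behavioral readouts.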


But here comes the severe subsampling problem. Even with 10,000 neurons, we are looking at only a fraction of the population. Yet we consistently find that neural activity collapses onto a low-dimensional manifold.

Instead of exploring all $2^N$ possible states (hello, curse of dimensionality), the brain operates in a restricted subset: a structured region where the dynamics actually happen.


Why does it happen? We don't fully know. 

It might be low-rank connectivity, or it might be an evolutionary trick.

An answer could be that the world itself is structured. Images, movements, and sounds are all highly correlated. They don’t span the full theoretical space either. So the brain maybe doesn’t need to represent everything, only what actually occurs. 

The benefit is that it beats the curse of dimensionality. In a 100-neuron space there are $2^{100}$ possible states/activity patterns. The brain ignores almost all of them, staying within a "wiggly" surface of just about 5–10 dimensions (a tiny subset).

This is a Universal Feature. We find it in the motor cortex, cerebellum, and even the hippocampus.

Spike rasters can be transformed into population activity matrices. Once you bin time (2 ms? 30 ms? depends on the question), count spikes, and stack neurons, you get a high-dimensional object:

  • N: number of neurons (ambient dimension)
  • K: number of time bins
  • $X \in \mathbb{R}^{N \times K}$

A Striking Result from M1 - The “8-Blob” Experiment


Sara described the following experiment. A monkey performs a simple task with 8 targets arranged in a circle: it waits for the cue, then moves to the cued target.

What did they find? 

If you take neural activity around movement onset, reduce it to a few dimensions with linear PCA, and plot each trial, you get eight distinct clusters ("blobs") of neural data projected into a 3D “neural mode” space.


What’s even better? These blobs, colour-coded by target, are organised in a way that perfectly mirrors the real-world task (the geometry of the targets). Nearby targets in physical space → nearby in neural space. In other words, M1 is not just controlling movement by firing abstractly. It encodes (embeds) the task geometry directly in population activity!
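To make the idea concrete, here is a toy version of the analysis (not the actual study's pipeline): simulate trials whose 100-neuron activity is driven by a 2D latent that depends on target, then recover the "neural modes" with PCA via SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, trials_per_target, n_targets = 100, 20, 8

# Each target gets a latent on a circle; activity = latent lifted into
# neuron space plus noise (a toy stand-in for M1 population activity).
angles = 2 * np.pi * np.arange(n_targets) / n_targets
latents = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (8, 2)
lift = rng.normal(size=(2, n_neurons))                         # latent -> neurons
X = np.repeat(latents, trials_per_target, axis=0) @ lift       # (160, 100)
X += 0.1 * rng.normal(size=X.shape)
targets = np.repeat(np.arange(n_targets), trials_per_target)

# PCA via SVD on mean-centered data; keep the top 3 "neural modes".
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:3].T                                           # (160, 3)

# Trials from the same target cluster into blobs, and the blob centers
# inherit the circular geometry of the 8 targets.
centers = np.stack([proj[targets == t].mean(axis=0) for t in range(n_targets)])
print(centers.shape)  # (8, 3)
```

In this sketch, neighbouring targets end up with neighbouring blob centers in the projected space, which is exactly the "task geometry embedded in population activity" picture above.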


Sara summarised it beautifully. 

If the brain doesn’t visit all possible states, and only explores those on the manifold, then learning means shaping trajectories within or across manifolds. 

Then, crucially, dimensionality becomes a proxy for task complexity.


What can the brain actually learn?

And more importantly—what can’t it? 


This is where Byron Yu steps in and moves the conversation to BCIs (Brain Computer Interfaces). 

He walks up to the flipchart and writes:

“Brains can’t learn ANYTHING.”


Most people associate BCI studies and tools with applications such as controlling prosthetic limbs or restoring function in paralysis. 

But Byron emphasized something less obvious, and an equally powerful use:

BCIs are a discovery tool for basic science.

They let you probe the limits of learning in a way that is almost impossible otherwise.



A bit of background…

In short, a BCI works like this:

  • From neural activity ($u_t$) → to cursor movement/velocity ($X_t$)
  • The mapping matrices (the rules, $A$ and $B$) are chosen by the experimenter
  • The monkey learns to control the cursor via thought alone
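The decoder in such setups is typically a simple linear dynamical rule; a minimal sketch, where the specific $A$ and $B$ below are placeholders rather than the study's fitted values:

```python
import numpy as np

def decode_cursor(U, A, B, x0=None):
    """Linear BCI decoder: x_t = A @ x_{t-1} + B @ u_t.

    U: (T, N) neural activity (e.g., binned spike counts per time step).
    A: (D, D) cursor dynamics; B: (D, N) neural-to-cursor readout.
    Both are fixed by the experimenter; the monkey must adapt its activity.
    """
    x = np.zeros(A.shape[0]) if x0 is None else x0
    traj = []
    for u in U:
        x = A @ x + B @ u            # smooth previous state, add neural drive
        traj.append(x)
    return np.array(traj)            # (T, D) cursor states

# Toy run: 10 neurons driving a 2D cursor (placeholder mapping matrices).
rng = np.random.default_rng(1)
A = 0.9 * np.eye(2)                  # mild velocity smoothing
B = rng.normal(size=(2, 10)) / 10    # experimenter-chosen readout
U = rng.poisson(2.0, size=(50, 10))  # 50 time steps of spike counts
vel = decode_cursor(U, A, B)
print(vel.shape)  # (50, 2)
```

The key point is that the experimenter owns $A$ and $B$, so changing them is a precisely controlled perturbation of how neural space maps to behavior.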

This setup becomes a controlled perturbation of neural space. 

Now you can think about powerful experiments. 

In their study, they asked: 

What happens when we (experimenters) change the rules?

What happens when we make the task easier or harder?


Two cases, i.e., two types of learning:

  1. Within-manifold perturbation (same intrinsic neural space, different mapping)
  2. Outside-manifold perturbation (requires entirely new neural activity patterns)
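A sketch of the distinction, under the simplifying assumption that the decoder reads the population through a low-dimensional basis $M$ (all names and matrices here are illustrative): a within-manifold perturbation shuffles how the existing latent dimensions map to the cursor, while an outside-manifold perturbation shuffles the neurons themselves, so the readout demands activity patterns the population does not normally produce.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_factors = 40, 10

M = rng.normal(size=(n_factors, n_neurons))   # manifold basis (e.g., factor loadings)
W = rng.normal(size=(2, n_factors))           # latent factors -> 2D cursor velocity

def intuitive(u):
    return W @ (M @ u)                        # original, "intuitive" mapping

def within_manifold(u, perm):
    # Permute the latent dimensions: same manifold, new rules.
    return W @ (M @ u)[perm]

def outside_manifold(u, perm):
    # Permute neurons before projecting: success now requires patterns
    # outside the population's usual repertoire.
    return W @ (M @ u[perm])

u = rng.poisson(3.0, size=n_neurons).astype(float)
v_within = within_manifold(u, rng.permutation(n_factors))
v_outside = outside_manifold(u, rng.permutation(n_neurons))
```

Both perturbed mappings produce a valid 2D cursor command for the same activity, so both start out "equally hard"; the difference lies in which one the animal can learn to compensate for.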

At first, both are equally hard.

But then:

  • Within-manifold → fast learning (within 1-2 hours)
  • Outside-manifold → slow, limited learning

Even after two weeks (extended training), the “outside” case struggles to catch up.





The Takeaways: 3 Main Findings


Finding 1: Learning Is Easier Within the Intrinsic Manifold than Outside It


Immediately after switching mappings, both cases look equally hard.

But over time:

  • Within-manifold → fast learning
  • Outside-manifold → slow, limited learning

Even with extended training (~2 weeks), outside-manifold learning barely catches up.


Finding 2: A Neural Repertoire Constraint


What does the brain actually do during learning?

Here are the two hypotheses:

  • H1: Optimize performance → fully reshape activity (“Global re-optimization”)
  • H2: Reuse existing activity patterns (“Repertoire Reuse”)

Byron explained that the data favored H2. The monkey doesn’t find the optimal solution.

Instead, it finds a “quick and dirty” solution: quick, good enough and built from existing patterns (already “available”). A kind of:

“close enough, now give me the juice.” Efficient but lazy (very biological).

This solution might be behaviourally suboptimal (cursor moves slowly), but the monkey gets its juice. It recycles old neural modes rather than building new ones.


This echoes Sara’s earlier idea: the brain doesn’t start from scratch (tabula rasa). It works within a structured space, a “bag of available patterns”.


Finding 3: Temporal Constraints within the Neural Repertoire


Then came perhaps the most subtle constraint.

Even within the repertoire, can the brain rearrange activity arbitrarily?

In BCI tasks where the monkey moves between two targets (A ↔ B):

  • The forward trajectory (A → B) is consistent
  • But the return (B → A) does not retrace the same path

Instead, it takes a different route in neural space. You cannot move neural activity "backwards" in time. Byron showed that even when a BCI task could be solved by reversing a neural trajectory, the monkey cannot do it. The forward path is highly reliable, and it is irreversible.


This suggests intrinsic temporal structure:

  • Some transitions are natural
  • Others are hard or impossible

Even in a 10D neural space, the system follows preferred directions.

Another constraint. Another limit.


So What Is the Brain, Really?

By the end of the session, we had a better idea of what the brain is NOT:

  • a blank slate 
  • a universal learner (limited repertoire)
  • an unconstrained system (evolving along preferred directions in time)

The consensus? Manifolds are the physical signature of our constraints. Unlike ANNs, which we treat as tabula rasa (as Sara pointed out), BNNs come with a "factory setting" (as Florian mentioned at some point).

We aren't blank slates; we are "wiggly" surfaces shaped by evolution. We tend to learn faster when we stay within the lines, and we "recycle" our thoughts and skills to solve new problems.


Happy hour vibes and discussions 🍹


Nighttime snorkeling with Tobi's headlamps and the moon's light 🤿🌕



