Day 5 - Computing with Neurons (Giacomo Indiveri, Jamie Knight, Thomas Nowotny)





Giacomo Indiveri started the morning, under another day of perfect weather, with a clear argument for analog multi-neuron systems. He pointed out potential advantages of small excitatory-inhibitory (EI) populations: in theory at least, it has been shown that their precision can improve proportionally to N, rather than as sqrt(N) from law-of-large-numbers averaging. Populations can also encode time more precisely (with more bandwidth) than individual neurons, as long as they are unsynchronized. Matt Cook argued that variability is great. Others in the audience disagreed: from an engineering viewpoint, variability is generally very hard to deal with, and biology has complex molecular computation that is missing in electronics. Marian argued that precision is important too. Rodney said let's move on so Giacomo could discuss compute. Sara Solla argued that dynamics has more richness; identical neurons don't have identical redundant responses if they have different weights or inputs. But others argued this is not useful by itself; it is a necessary, not a sufficient, condition.
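The baseline that the EI-population argument contrasts against can be made concrete. The sketch below (illustrative, not Indiveri's model) simulates ordinary law-of-large-numbers averaging: reading out a constant signal from N independently noisy neurons by averaging reduces the readout error only as 1/sqrt(N), which is the scaling the population argument claims can be beaten.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_readout_error(n_neurons, n_trials=20000, signal=1.0, noise_sd=0.5):
    """RMS error of the population-average estimate of a constant signal
    encoded by N neurons with independent additive noise."""
    rates = signal + noise_sd * rng.standard_normal((n_trials, n_neurons))
    estimates = rates.mean(axis=1)          # law-of-large-numbers averaging
    return np.sqrt(np.mean((estimates - signal) ** 2))

for n in (1, 4, 16, 64):
    # error shrinks roughly as 1/sqrt(N): each 4x in N halves the error
    print(n, population_readout_error(n))
```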

Giacomo then went on to discuss EI architectures and the weights needed to build them. He showed that they can form a flip-flop latch attractor, and argued that this has additional properties, e.g. stochastic resonance, with potential advantages. It is also an amplifier with interesting properties: not just linear gain, but selective and digital behavior. The audience agreed, but wondered how to really exploit it practically.
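As a toy illustration of the latch idea, here is a generic mutual-inhibition rate model (not Indiveri's actual EI circuit; all parameters are made up): two units with self-excitation and cross-inhibition are bistable, so a brief input pulse sets which unit stays active long after the pulse ends.

```python
import numpy as np

def simulate_flipflop(pulse_to, T=200.0, dt=0.1, w_self=1.2, w_cross=2.0):
    """Two rate units with self-excitation and mutual inhibition form a latch:
    a brief input pulse sets which unit remains active after the pulse ends."""
    r = np.array([0.1, 0.1])                 # initial rates
    act = lambda x: np.clip(x, 0.0, 1.0)     # piecewise-linear activation
    for step in range(int(T / dt)):
        t = step * dt
        inp = np.zeros(2)
        if 10.0 < t < 20.0:                  # transient "set" pulse
            inp[pulse_to] = 1.0
        drive = w_self * r - w_cross * r[::-1] + inp
        r = r + dt * (-r + act(drive))       # Euler step, unit time constant
    return r

print(simulate_flipflop(0))   # unit 0 latched on: ~[1, 0]
print(simulate_flipflop(1))   # unit 1 latched on: ~[0, 1]
```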




Robin Heinemann pointed out that biology has to be robust in the presence of constant flux (a point from Bassem Hassan), not just minimize energy.





Next, Jamie Knight argued that the optic lobe connections in ants, bees, etc. are like extreme learning machines (ELMs) that can learn with simple STDP rules. But the insects cannot do some simple things, like distinguishing crosses from rotated crosses, no matter how much you reward them; i.e., bees have an XOR problem. The basic problem is that this bee analogy does not scale to arbitrary problems, so we need to train, not just learn. By "train" he means that architecture is important, since that is what evolution discovers; so do not stick too rigidly to particular biological architectures. Florent Le Moel pointed out that vision, and sensing in general, are active, not passive.
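The ELM analogy, and why architecture matters, can be sketched concretely. In the illustrative example below (the sizes and seed are arbitrary), a purely linear readout on the raw inputs fails on XOR, it can do no better than predicting 0.5 for every pattern, while the ELM recipe of a fixed random nonlinear hidden layer plus a trained linear readout solves it.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])           # XOR targets

# Linear readout directly on the inputs (with bias), via least squares:
Xb = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
linear_pred = Xb @ w                          # stuck at 0.5 on every pattern

# ELM: fixed random nonlinear hidden layer, train only the linear readout.
n_hidden = 20
W = rng.standard_normal((3, n_hidden))        # random weights, never trained
H = np.tanh(Xb @ W)                           # random nonlinear features
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
elm_pred = H @ beta                           # fits XOR targets

print("linear:", np.round(linear_pred, 2))
print("elm:   ", np.round(elm_pred, 2))
```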



After the coffee break, Thomas Nowotny very nicely reviewed the transition from hand-designed finite-state solutions to learned pattern recognition with spiking neural networks, focusing on EventProp and similar algorithms—event-based, exact-gradient methods for training SNNs. A speech recognition example (word classification) contextualized the learning setup. The group discussed algorithmic mechanics, hardware feasibility, performance scaling, and practical limitations, with emphasis on chip plausibility and training stability.

 

EventProp (and similar algorithms like DelGrad): method and scope

  • It does exact gradient descent for spiking neural networks via an adjoint/event-based backpropagation approach without surrogate gradients.
  • Operates with per-neuron adjoint variables (for voltage and current) that evolve backward in time; cross-neuron communication occurs only at recorded spike times from the forward pass.
  • Demonstrated on a simple feed-forward example (input → LIF hidden → non-spiking output integrator) using an area-under-voltage loss for classification decisions at word end.
  • So far it has only been demonstrated for offline, batch-based training, since computing the gradients is still expensive and the error is only available at the end of each sample.
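To make the forward pass of that toy architecture concrete, here is a numpy sketch (all constants and weights are invented for illustration; this is not Nowotny's code, and EventProp's backward adjoint pass is not shown). It records the hidden spike times that the backward pass would replay, and accumulates the area under the output voltages that the classification loss would compare.

```python
import numpy as np

def forward_pass(spike_times, w_ih, w_ho, T=0.5, dt=1e-3,
                 tau_mem=0.02, tau_syn=0.01, v_thresh=0.5):
    """Forward pass only: input spikes -> LIF hidden layer -> non-spiking
    leaky output integrators. Returns the area under each output voltage
    (one per class) and the hidden spike times EventProp would replay."""
    n_hidden, n_out = w_ho.shape
    i_h = np.zeros(n_hidden); v_h = np.zeros(n_hidden)   # hidden layer state
    i_o = np.zeros(n_out);    v_o = np.zeros(n_out)      # output layer state
    area = np.zeros(n_out)
    hidden_spikes = []
    # map time step -> input neurons spiking at that step
    events = {}
    for j, ts in enumerate(spike_times):
        for s in ts:
            events.setdefault(int(round(s / dt)), []).append(j)
    for step in range(int(T / dt)):
        for j in events.get(step, []):
            i_h += w_ih[j] / tau_syn         # each spike injects charge w_ih[j]
        v_h += dt * (i_h - v_h / tau_mem)    # Euler step of LIF dynamics
        i_h -= dt * i_h / tau_syn            # exponential synaptic decay
        fired = v_h >= v_thresh
        if fired.any():
            hidden_spikes.append((step * dt, np.flatnonzero(fired)))
            i_o += w_ho[fired].sum(axis=0) / tau_syn
            v_h[fired] = 0.0                 # reset to rest
        v_o += dt * (i_o - v_o / tau_mem)    # non-spiking output integrator
        i_o -= dt * i_o / tau_syn
        area += dt * v_o                     # area-under-voltage accumulator
    return area, hidden_spikes

w_ih = np.array([[1.8, 0.2, 1.6, 0.3],       # 2 inputs -> 4 hidden
                 [0.2, 2.0, 0.3, 1.7]])
w_ho = np.array([[1.0, 0.2], [0.3, 1.1],     # 4 hidden -> 2 outputs
                 [0.8, 0.4], [0.2, 0.9]])
area, spikes = forward_pass([[0.1, 0.3], [0.2]], w_ih, w_ho)
print("output-voltage areas:", area)
```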

The afternoon and evening brought lively workgroup action and sporting activities: football, volleyball, etc. Another great day!


Thank you to Marian Statache for this lovely tutorial (with beautiful sketches of all types of mechanoreceptors) on microneurography and intraneural microstimulation, and for sharing how you study these fascinating processes in your lab. It’s amazing how something as natural as touch reveals such rich, precise, and event-driven neural encoding when you look under the hood.



We attempted a pool test of the Argo robot sailboat... but then discovered the IMU was not working, so no heading :-(

