
The One Million Bit Question – Memory
Let’s improve the way we process information!
We live in the age of information. Computers enable us to transmit information over long distances and to process it at incredible speed, yet there are still so many more opportunities. Viewing the brain as an information-processing device, we all carry a fascinating informatics teacher within us.
“The One Million Bit Question” – as tempting as it might sound to a computer scientist’s ear to model memory in the brain like a hard drive, it probably doesn’t work like that. Current research suggests that the brain’s efficiency may stem from integrating both processing and storage within its network of neurons. This way it could bypass the CPU-RAM bottleneck of the von Neumann architecture.
Research Interests
What enables humans and animals to activate their motor systems the moment they become aware of a predator? How does a person give an inspiring speech about a potential future event? And how is it possible that humans invented mathematics to understand and manipulate their surroundings? Humans and animals have truly remarkable cognitive abilities which I’d love to understand better. As a computer scientist, it feels most natural to me to try to implement something similar to the brain in software, and maybe even in hardware. And that is a major aspect of what neural networks are about.
Currently, I am reviewing literature, datasets, and software in “Neuroinformatics” – the study of information processing in networks of neurons. Above all, a specific modeling approach called Spiking Neural Networks (SNNs) has caught my interest, because they strike me as the most biologically interpretable of AI models. Their paradigm of modeling actual spikes rather than mere activations makes me curious whether they can improve performance on common AI tasks and what insights into the brain they will allow. SNNs already show promising performance on a specific subset of AI tasks, and they power several neuromorphic hardware (NMH) solutions, such as Intel’s Loihi 2 chip or BrainChip’s Akida.
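To make the spiking paradigm concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard textbook spiking unit, in plain Python. All constants (time step, time constant, threshold, input current) are my own illustrative choices, not values from any particular SNN framework.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Integrate an input-current trace and emit a spike on each threshold crossing."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leak towards the resting potential, then integrate the input.
        v += (dt / tau) * (v_rest - v) + i_t
        if v >= v_thresh:   # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_rest      # hard reset of the membrane potential
    return spikes

current = np.full(100, 0.08)  # constant drive over 100 time steps
print(int(simulate_lif(current).sum()), "spikes")
```

Unlike a rate-based unit, the output here is a binary spike train over time, which is exactly the kind of signal neuromorphic chips are built to process.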
Artificial Intelligence vs. Neuroscience – an Overview
Figure 1 shows the spectrum of possible research between the extreme ends marked by Artificial Intelligence (AI) and Neuroscience. The goal of AI is to solve tasks with very high accuracy that would be hard to tackle with non-data-driven algorithms, for example image or speech recognition. On the other side, the goal of neuroscience is to understand the brain better on various levels, ranging from the molecular to the psychological. An improved understanding can in turn lead to better treatment of brain-related diseases or to better AI models.

Somewhere in the middle of this spectrum, there is research combining both fields. First, there is neuromorphic hardware engineering, where researchers try to match the power efficiency of the brain when solving AI tasks with SNNs (Strukov et al., 2019). Then, there are researchers who try to make existing AI algorithms more biologically plausible, most notably the backpropagation algorithm. Their primary goal is not to raise performance on AI tasks by another margin, but to solve them in a way in which the brain might solve them (e.g. Sacramento et al., 2018; Lillicrap et al., 2016). Understandably, this research relies heavily on neuroscientific data. Finally, researchers also make use of AI models in neuroscientific research itself. For example, AI can automate the detection of synapses in brain slices in order to speed up the mapping of an animal’s entire connectome (Buhmann et al., 2021).
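As a concrete example of one such biologically more plausible learning rule, here is a hedged NumPy sketch of feedback alignment in the spirit of Lillicrap et al. (2016): the error is propagated backwards through a fixed random matrix B instead of the transpose of the forward weights, as exact backpropagation would require. The toy data, network sizes, and learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem; shapes and learning rate are arbitrary choices.
X = rng.normal(size=(64, 10))
Y = rng.normal(size=(64, 2))

W1 = rng.normal(scale=0.1, size=(10, 20))
W2 = rng.normal(scale=0.1, size=(20, 2))
B = rng.normal(scale=0.1, size=(2, 20))   # fixed random feedback matrix
lr = 0.01

for step in range(200):
    # Forward pass with a tanh hidden layer.
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - Y                          # output error

    # Feedback alignment: route the error through the fixed matrix B,
    # not through W2.T as exact backpropagation would.
    delta_h = (e @ B) * (1 - h**2)

    W2 -= lr * (h.T @ e)
    W1 -= lr * (X.T @ delta_h)

print("final MSE:", float((e**2).mean()))
```

The appeal for neuroscience is that no neuron needs access to the exact synaptic weights of the forward pathway, sidestepping the so-called weight-transport problem.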
Focus on Spiking Neural Networks (SNNs)
Figure 2 summarizes what makes up current research on learning in SNNs. I have grouped different parts of the tree to contextualize it within a deep learning framework proposed for neuroscientific research, comprising Objective function (O), Learning algorithm (L), and Architecture (A), a triple I like to call (O, L, A) (Richards et al., 2019). Since SNNs work solely with spikes, there is a separate group dealing with the conversion of input/output information into and from spikes; a small encoding sketch follows below.
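To illustrate that input-conversion group, here is a small sketch of one common encoding scheme, Poisson (rate) coding, where each input intensity sets a unit’s spiking probability per time step. The function name, window length, and maximum rate are my own illustrative choices.

```python
import numpy as np

def poisson_encode(values, n_steps=100, max_rate=0.5, rng=None):
    """Rate-code values in [0, 1] as Bernoulli spike trains.

    At each time step, a unit spikes with probability value * max_rate,
    so higher intensities yield denser spike trains.
    """
    rng = rng or np.random.default_rng()
    probs = np.clip(values, 0.0, 1.0) * max_rate
    return (rng.random((n_steps,) + values.shape) < probs).astype(np.uint8)

pixels = np.array([0.1, 0.5, 0.9])        # toy "image" of three intensities
spikes = poisson_encode(pixels, n_steps=200)
print("spike counts per unit:", spikes.sum(axis=0))
```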

Current Work
I joined the Barcelona Artificial Intelligence in Medicine Lab at the Faculty of Mathematics and Computer Science, University of Barcelona, as an external collaborator in a private capacity. There we strive to enhance the AI-powered automatic analysis of mammography data. More specifically, we explore ways of sharing mammography data between hospitals without exposing any sensitive patient data. In our approaches, we apply Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Transformer models in PyTorch.
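To illustrate the core idea of sharing generative models instead of private data, here is a hypothetical PyTorch sketch, not our actual lab code: site A trains a generator on its private patches and shares only the weights; site B then samples synthetic patches to train its own classifier. All architectures, sizes, and file names are placeholders.

```python
import torch
import torch.nn as nn

LATENT = 64

# Site A trains this generator on its private data (training omitted here).
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32), nn.Tanh(),
)

# Site A shares only the weights, never the patient images.
torch.save(generator.state_dict(), "generator.pt")

# Site B reloads the generator and trains a classifier on synthetic patches.
generator.load_state_dict(torch.load("generator.pt"))
classifier = nn.Sequential(
    nn.Linear(32 * 32, 128), nn.ReLU(),
    nn.Linear(128, 2),
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    z = torch.randn(16, LATENT)
    with torch.no_grad():
        fake_patches = generator(z)        # synthetic patches, no real data
    # Placeholder labels; a conditional generator would supply class labels.
    labels = torch.randint(0, 2, (16,))
    loss = loss_fn(classifier(fake_patches), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The simulation study listed under Publications below evaluates how much classification performance such a synthetic-data setup sacrifices compared to training on real data.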
Past Work
The interpretability of trained neural network models is an important and active area of research. A Convolutional Neural Network (CNN) may classify your data into two classes with very high accuracy, say 95.6%. However, how do you know that it derives its classifications from relevant features in your data and not just from some spurious correlations which happen to be in there?
During my Master’s thesis at The University of Edinburgh, I applied CNNs to the binary classification of zebrafish swim bouts. Each data point was one short grayscale video of the tail of a zebrafish moving in one of two characteristic ways: either the fish performed a prey-approach bout or just a spontaneous swim. It turned out that my CNN classified swim bouts with higher accuracy than existing hand-crafted algorithms.
Nevertheless, I wanted to know what my CNN had learnt. Consequently, I applied recently developed AI explainability techniques to visualize the features learnt by the CNN, using a toolbox called “iNNvestigate”. For each data point, the analysis shows the importance of every pixel in the original image (or video). This disclosed a peculiar interest of the CNN in the upper-left corner: it was picking up on movement of the agarose, the gel mass used to keep the fish in place. Possibly, the experimenter knocked at the petri dish to induce the fish to show a spontaneous movement. After correcting for this misleading artefact in the videos, the CNN paid attention only to the tail while maintaining its classification accuracy.
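The simplest of these pixel-importance analyses is a plain gradient saliency map, one of several analyzer types iNNvestigate offers. Below is a framework-agnostic PyTorch sketch of that idea; the tiny CNN and the random input frame are stand-ins for illustration, not my thesis model or data.

```python
import torch
import torch.nn as nn

# Stand-in CNN; the actual thesis model was different.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

frame = torch.rand(1, 1, 64, 64, requires_grad=True)  # one grayscale frame
score = model(frame)[0].max()    # score of the highest-scoring class
score.backward()

# Gradient saliency: |d score / d pixel| marks the most influential pixels.
saliency = frame.grad.abs().squeeze()
print("most influential pixel (row, col):", divmod(int(saliency.argmax()), 64))
```

Applied to every frame of a video, such a map is exactly the kind of analysis that exposed the CNN’s fixation on the corner of the image.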
Publications
- Breier, B., & Onken, A. (2020). “Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification.” Poster at ICLR 2020.
- Szafranowska, Z., Osuala, R., Breier, B., Kushibar, K., Lekadir, K., & Diaz, O. (2022). “Sharing Generative Models Instead of Private Data: A Simulation Study on Mammography Patch Classification.” Accepted for oral presentation at IWBI 2022.