In this post, I return to what comes after LLMs. LLMs are genuinely useful tools, a fact often obscured by the Agent / AGI hype, but when considering the long term "the answer lies elsewhere."
So what progress, if any, has been made on the currently most promising alternative: Neuromorphic Computing?
Neuromorphic Computing & Spiking Neural Networks
As a reminder, Neuromorphic Computing tries to mimic the structure and functionality of biological neurons and synapses. The aim is to deliver an order-of-magnitude improvement in speed, efficiency, and power consumption compared to LLMs.
What level of improvement? It depends heavily on the application, but some suggest Neuromorphic Computing architectures could be up to 1000x faster than those based on LLMs. Quantifying power savings is even more problematic, but some have suggested up to a 100x improvement, approaching (not matching; see the clarification below) the efficiency of the human brain.
Unlike traditional artificial neural networks that use continuous values, Spiking Neural Networks (SNNs) use discrete events (spikes) to communicate and process information. These networks are integral to many neuromorphic architectures as they mimic the brain's architecture and operation. Not all neuromorphic computing uses SNNs, but SNNs are currently the most promising approach for realising the potential of neuromorphic computing.
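To make the contrast concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the building block of many SNNs. All parameters (threshold, leak factor) are illustrative assumptions, not values from any particular neuromorphic chip.

```python
def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over a sequence of inputs.

    Returns a list of 0/1 spike events: discrete spikes,
    not the continuous activations of a conventional ANN.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i        # membrane potential leaks, then integrates input
        if v >= v_thresh:       # crossing the threshold emits a spike
            spikes.append(1)
            v = v_reset         # potential resets after spiking
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still produces periodic spikes,
# because charge accumulates across time steps.
print(lif_neuron([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note that information is carried in the timing and rate of spikes rather than in the magnitude of an activation value, which is what makes event-driven hardware so power-efficient: the circuit only does work when a spike occurs.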
A Clarification
In a previous post, I compared the performance of LLMs with the human brain. I suggested that although the performance of LLMs is impressive, they are simply a step on the way and not the end game. I do not suggest Neuromorphic Computing can ever achieve the performance of the human brain—I simply suggest it is a better approach.
Neuromorphic computing currently enables recognition, classification, and reactive adaptation based on sensory input. It does not deliver true learning or problem-solving in the human sense. While neuromorphic computing shows promise, it currently captures only a fraction of the brain's complex functionality.
If we could perfectly replicate the brain's mechanisms in hardware, we could potentially achieve similar levels of performance and efficiency, but science does not fully understand how the brain works. Mimicking something you do not fully understand is a significant problem.
Mimicking in hardware what is already known about the operation of biological neurons and synapses with the same density, connectivity, and energy efficiency is a huge challenge. Then there’s the problem of making a comparison between a machine and the brain. What exactly are we comparing? Are those comparisons valid?
Ongoing Challenges
Specialised hardware, such as Neuromorphic chips and processors, is required to implement Neuromorphic computing systems. Routing spike events between chips requires specialised bus systems, custom routers, and (probably) optical interconnects.
Data movement between processing units and memory is a bottleneck in conventional computers. Neuromorphic systems aim to overcome this by tightly integrating memory and processing, but this presents a new set of challenges.
The overall memory in a Neuromorphic computer system is distributed (to mimic the brain), but each artificial neuron needs to store a large number of synaptic weights. Many Neuromorphic designs place storage close to the computing elements (artificial neurons) to maximise efficiency.
The memory at each computing element needs to be dense to store the information representing the connections to other neurons. As neuromorphic systems grow, denser memory is required at each computing element (neuron) to accommodate this growth.
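A back-of-the-envelope calculation shows why per-neuron memory density becomes the constraint as these systems scale. All figures below are illustrative assumptions, not the specification of any real chip.

```python
# Sketch of per-neuron synaptic storage requirements.
# Figures are illustrative assumptions only.

neurons = 1_000_000          # neurons in a hypothetical large-scale system
synapses_per_neuron = 1_000  # fan-in per neuron (biological neurons average far more)
bits_per_weight = 8          # an 8-bit quantised synaptic weight

bits_per_neuron = synapses_per_neuron * bits_per_weight
total_bytes = neurons * bits_per_neuron // 8

print(f"per-neuron storage: {bits_per_neuron // 8} bytes")
print(f"total weight memory: {total_bytes / 1e9:.0f} GB")  # → 1 GB
```

The total grows with neurons × fan-in, and because the weights must sit beside each computing element rather than in one central bank, the density pressure falls on every neuron's local memory, not just on the system aggregate.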
The memory must be non-volatile and fast (a combination that has always been a problem with traditional memory technologies). Plus, and this is key, it must be low power.
Many neuromorphic systems use analogue circuits to represent and process information. This can be more energy-efficient for certain computations, but analogue circuits are susceptible to drift and noise. Hence, synaptic weights can be stored in analogue memory, while the neuron's processing might be done digitally. Constant A/D and D/A conversion can be challenging.
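A toy model illustrates the problem. An analogue device storing a weight slowly drifts and picks up noise on each read, and every digital processing step adds an A/D conversion. The drift rate, noise level, and 8-bit converter below are illustrative assumptions, not measurements of any real device.

```python
import random

def read_analogue_weight(true_w, drift_per_read=0.002, noise_sd=0.01, reads=100):
    """Model repeated reads of a weight stored in an analogue device."""
    w = true_w
    observed = []
    for _ in range(reads):
        w -= drift_per_read * w                # conductance slowly drifts downward
        noisy = w + random.gauss(0, noise_sd)  # thermal / read noise per access
        adc = round(noisy * 255) / 255         # 8-bit A/D conversion for digital compute
        observed.append(adc)
    return observed

random.seed(0)  # fixed seed so the toy run is repeatable
vals = read_analogue_weight(0.8)
print(f"stored 0.800, read back {vals[-1]:.3f} after 100 accesses")
```

After a hundred reads the recovered value has wandered noticeably from what was stored, which is why hybrid designs must periodically refresh or recalibrate analogue weights, paying exactly the conversion overhead mentioned above.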
Neuromorphic Computing - A Progress Report
Significant advancements have been made in Neuromorphic computing, driven by research in materials, architectures, and algorithms. Key areas of progress include:
Neuromorphic Chip Designs: Researchers are developing novel Neuromorphic chips incorporating memristive devices that mimic synaptic connections in the brain. These chips enable complex computations with significantly lower power consumption than traditional architectures.
Neuromorphic Software and Algorithms: Advancements in Neuromorphic software and algorithms have led to more efficient training of neural networks.
Neuromorphic Systems: Researchers are focusing on building large-scale Neuromorphic systems.
Open-Source Platforms and Tools: Initiatives like THOR: The Neuromorphic Commons, launched by the University of Texas at San Antonio, provide access to open Neuromorphic computing hardware and tools, facilitating collaborative research.
Hardware Advancements
The development of dedicated Neuromorphic chips is a key focus in the field. These chips are designed to implement the principles of Neuromorphic computing. Some examples include:
Intel's Loihi 2 processor: Designed for low-power edge computing applications, forming the foundation of Intel's Hala Point Neuromorphic system.
IBM's TrueNorth chip: Features 1 million neurons and 256 million synapses.
BrainChip Holdings Ltd.: Developed event-based ultra-low-power neural processing units ("Akida").
Qualcomm: A major player in SNN development (Zeroth processors), focused on integrating them into telecommunication devices.
Innatera Nanosystems: A provider of efficient Neuromorphic processors that mimic the brain's mechanisms for sensory data processing.
Different types of memory are being explored for application in Neuromorphic computing, including:
Phase-change memory (PCM)
Resistive random-access memory (RRAM)
Ferroelectric RAM (FeRAM)
Magnetoresistive RAM (MRAM)
IBM is investigating PCM to store synaptic weights in analogue form and is also a leader in RRAM. Intel is focused on memristor technology.
Academic research groups are also making significant contributions to Neuromorphic hardware development. The European Union's Human Brain Project was an early example. Others include:
Intel Neuromorphic Research Community (INRC): A global collaborative effort led by Intel Labs (Loihi-based) that brings together researchers from academia and government.
Neuromorphic Computing Group at UC Santa Cruz: Focuses on understanding the computational principles of the brain.
TENNLab at the University of Tennessee.
Fraunhofer Institute for Integrated Circuits IIS: Focuses on applications in digital signal processing and embedded AI.
Photonic Spiking Neural Networks Group at Princeton University.
Research Group at UC Irvine: Develops CARLsim4, an open-source library for simulating large-scale SNNs.
Conclusion
Neuromorphic Computing combined with Spiking Neural Networks offers significant potential advantages over traditional LLM approaches. While major hardware challenges remain, ongoing research at Intel, IBM, and leading universities is making progress in critical areas. These developments suggest a promising, if complex, path forward.
For years, deep-learning research remained in the background, and few took much notice. The transformer architecture paper was published in 2017, and then things started to get interesting. Perhaps Neuromorphic Computing is following a similar path.
At some point, something way out on the edge (be it technical or geopolitical) will have a sudden impact that drives Neuromorphic Computing forward. Best be prepared.