Hector Gonzalez: Neuromorphic Computing at the Scale of the Human Brain With SpiNNcloud Systems

What if supercomputers were designed like the human brain, using millions of cores for highly parallel processing and solving tough computational problems in real time?

Since our last interview in the summer of 2022, SpiNNcloud Systems has made significant progress toward realizing this vision. Building on decades of research on the SpiNNaker2 chip as part of the Human Brain Project, they are not only building the world’s largest neuromorphic supercomputer at TU Dresden but also making it commercially available: customers can now pre-order the hardware.

We had the pleasure of speaking again with Co-CEO Hector Gonzalez about what their supercomputer will be capable of, how it will enable real-time AI, its applications, and one of his key learnings from being a deep tech entrepreneur.

What Makes SpiNNcloud Special to You?

SpiNNcloud is a startup building on decades of research. Not just a single PhD thesis exploring a new design for a microchip, but dozens of PhD theses and research by world-class scientists like Christian Mayr and semiconductor pioneers like Steve Furber, co-designer of the original ARM processor and initiator of the SpiNNaker project, which laid the foundation for what we’re doing now with SpiNNcloud Systems. It’s very exciting to take part in commercializing a technology made possible by so many people.

It’s fair to say that SpiNNcloud changed my life, taking me out of the lab and making me part of something much bigger than my own individual research. I encountered challenges I never imagined, which have made me grow as a person. And it has fulfilled my dream of founding my own business.

How Has SpiNNcloud Progressed Since Our Last Interview in 2022?

Within the scope of the SpiNNaker2 research project, the first supercomputer is being built at TU Dresden, and we have made good progress in setting the scene for what comes next. Because a lot of money and effort went into developing it, we’re doing the setup gradually and cautiously, connecting progressively more parts until we have built the full-scale supercomputer.

It will be the world’s largest neuromorphic supercomputer, comprising 34,560 SpiNNaker2 chips and a total of 5 million cores. Each SpiNNaker2 chip combines several different kinds of cores suited to different algorithms, allowing us to natively run three classes of AI workloads: mainstream deep neural networks, symbolic AI algorithms, and event-based machine learning methods such as spiking neural networks.

SpiNNaker2 board

We can even combine the best of these approaches, for example, using deep neural networks to extract features and symbolic AI to classify them or reason over them with high precision. And even beyond AI, the hardware supports a broad range of algorithms, such as solving tough optimization problems or running computational models, e.g., for fast drug discovery.
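To make the hybrid idea concrete, here is a minimal, purely illustrative sketch in plain Python: a stand-in “neural” feature extractor feeds named features into a small set of symbolic rules. The function names, rules, and thresholds are invented for illustration and are not part of SpiNNcloud’s software stack.

```python
# Illustrative hybrid pipeline: a neural network extracts features, and a
# symbolic rule system reasons over them. All names, rules, and thresholds
# below are hypothetical stand-ins, not SpiNNcloud APIs.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    """A symbolic rule: a human-readable condition over named features."""
    name: str
    condition: Callable[[dict], bool]


def neural_feature_extractor(image: list[list[float]]) -> dict:
    """Stand-in for a deep network that maps raw input to named features.
    In a real system this would be a trained DNN running on accelerator cores."""
    flat = [p for row in image for p in row]
    return {
        "mean_intensity": sum(flat) / len(flat),
        "max_intensity": max(flat),
    }


RULES = [
    Rule("bright_object", lambda f: f["max_intensity"] > 0.9),
    Rule("mostly_dark", lambda f: f["mean_intensity"] < 0.2),
]


def classify(image: list[list[float]]) -> list[str]:
    """Run the neural front end, then let the symbolic layer draw conclusions."""
    features = neural_feature_extractor(image)
    return [rule.name for rule in RULES if rule.condition(features)]


if __name__ == "__main__":
    toy_image = [[0.05, 0.1], [0.12, 0.95]]
    print(classify(toy_image))  # -> ['bright_object']
```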

The entire supercomputer is modeled after the human brain in a top-down way: Each core can be programmed and operated individually and on demand. Just as the human brain is only active when something interesting is happening, we can power up and use the cores exactly when needed. 
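As a toy illustration of that event-driven principle, the following Python sketch performs work only for the cores that actually receive an event, instead of stepping every core on every tick. The cores, event queue, and handler are purely hypothetical and do not use any SpiNNaker2 API.

```python
# Toy illustration of event-driven processing: a core's handler runs only when
# an event (e.g. a spike) is routed to it, rather than polling every core on
# every tick. Conceptual sketch only; no real hardware interfaces involved.

from collections import defaultdict, deque

NUM_CORES = 8                   # tiny stand-in for millions of cores
activity = defaultdict(int)     # work actually performed, per core


def handle_event(core_id: int, payload: str) -> None:
    """Stand-in for the computation a core performs when woken by an event."""
    activity[core_id] += 1


# Only three events arrive; cores without events stay idle (and on real
# event-driven hardware could stay powered down).
event_queue = deque([(2, "spike"), (5, "spike"), (2, "spike")])

while event_queue:
    core, payload = event_queue.popleft()
    handle_event(core, payload)

idle = [c for c in range(NUM_CORES) if c not in activity]
print(f"active cores: {dict(activity)}, idle cores: {idle}")
```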

Over the past few years, we have validated our supply chain and demonstrated that we can acquire and manage pilot projects end-to-end with international research and industry partners in countries such as the US, Canada, Switzerland, and Australia. We have shown that SpiNNaker can deliver value across a variety of domains.

We’re now going one step further: customers can pre-order the SpiNNcloud machine, the world’s first commercially available neuromorphic supercomputer. We’re excited to have Sandia National Labs among our first customers and look forward to working with many more customers and seeing how SpiNNcloud can help with their computational problems.

How Does SpiNNcloud’s Machine Compare to Intel’s Hala Point System?

From my perspective, SpiNNaker2 and Loihi 2 are designed at different levels of abstraction. Intel’s Loihi 2 chip is designed to implement spiking neural networks efficiently and provide fast, real-time responses at the chip level. Our SpiNNaker2 chip takes a more flexible approach at a higher level of abstraction, integrating many programmable ARM-based cores with dedicated accelerators that are not only designed for spiking neurons but also enable hybrid algorithms, even from non-neural disciplines.

The neuromorphic supercomputers we offer commercially also achieve larger scales, an area where the SpiNNaker project has excelled since its inception. For instance, the “Big Machine” in Manchester features 57,600 SpiNNaker1 chips, enabling the fabric to deploy 1 billion neurons. The SpiNNcloud supercomputer that we are bringing up in Dresden integrates 34,560 SpiNNaker2 chips with the capacity to implement more than 5 billion neurons. 

Furthermore, the largest system we commercially offer can accommodate up to 69,129 SpiNNaker2 chips capable of implementing more than 10 billion neurons. Our expertise lies in this domain, where we pursue real-time processing at all scales, from chip to system level. Since our chips are highly programmable, our entire system is optimized for flexibility, which makes it the perfect complement for more specialized, highly efficient AI chips.

We are actually happy to see systems from big players at this scale because it validates that we’re going in an interesting direction. At the same time, we think keeping flexibility is important as new AI algorithms and architectures are developed. The transformer architecture might not be the go-to choice in five years, and computers will need the flexibility to handle different kinds of AI models.

Rack with several SpiNNaker2 boards

What Applications Are You Most Passionate About?

Early results from our pilot projects have demonstrated significantly faster execution times when running computational models for drug discovery. Our machine can deploy a large number of small models that communicate with each other very quickly to achieve these speed-ups. This is crucial for enabling personalized medicine: at the moment, developing a drug or vaccine for an individual patient is too expensive and takes too long, but with our machine, it can become feasible and economical in the future.

There are also many QUBO-type (quadratic unconstrained binary optimization) problems, such as routing in logistics, for which we have seen significant reductions in the execution time needed to find good-enough solutions. The trade-off also works the other way: with slightly longer execution times, we can find solutions closer to the optimum.
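For readers unfamiliar with the term, a QUBO problem asks for a binary vector x that minimizes the quadratic objective x^T Q x. The tiny brute-force sketch below only illustrates that problem form; the matrix values are arbitrary, and it does not represent SpiNNcloud’s solvers or a real logistics instance.

```python
# Minimal illustration of a QUBO problem: minimize x^T Q x over binary
# vectors x. The 3x3 Q below is arbitrary and solved by brute force purely
# to show the problem form, not any real routing instance or solver.

from itertools import product

Q = [
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
]


def energy(x: tuple[int, ...]) -> float:
    """Evaluate the QUBO objective x^T Q x for a binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))


best = min(product((0, 1), repeat=len(Q)), key=energy)
print(best, energy(best))  # -> (1, 0, 1) -2.0
```

Real routing instances involve far more binary variables, which is where hardware that can evaluate many candidate solutions in parallel becomes relevant.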

Finally, we can combine different types of machine learning to achieve high accuracy on classification problems, using deep neural networks to extract features from data and symbolic AI algorithms to detect anomalies.

We can scale up rule-based systems with embedded knowledge, which can help AI systems make sense of the real world. And with the SpiNNcloud machine, we have the most programmable, most scalable neuromorphic supercomputer available today to run these models in real time.

What Is One of Your Key Learnings From the Last Two Years of Being a Deep Tech Entrepreneur?

I wish I had known how important it is to tell your story and have a media presence. Very technical teams often work in an environment where they are doing great work but rarely talk about what they’re doing and the milestones they’re achieving. There’s no merit in developing cool technology for its own sake; you need to tell people about it. That’s why we have put more effort into communications and just published a press release about our partnership with Sandia National Labs.
