Modern neuromorphic computing is a return to basic neural-networking principles formulated decades ago. The initial idea was to try to mimic how the brain functions.
The introduction of digital microprocessors in the early days of neural networking diverted the discipline, however. Processors became relatively cheap, and they doubled in power every two years, and that was as enticing to neural network researchers as it was to anyone else.
Using microprocessors meant diverging from the way brains actually work, however. For example, neural networks began relying a great deal on reinforcement, whereas in biological brains there’s much less reliance on feedback loops. Nonetheless, the approach taken got results. What neural network researchers didn’t – perhaps couldn’t – anticipate early on was that they were heading toward a technological dead end. Eventually, the problem spaces began scaling faster than processor technology could keep up with.
So in the last few decades, more and more researchers have been getting back to first principles – trying once again to use mostly digital technology to more closely mimic the way brains work. It’s one of the reasons the term “neuromorphic” was coined: to distinguish these later efforts from some of the neural network approaches that are dead-ending.
EE Times editor Sally Ward-Foxton recently wrote a story about Intel’s experiments with neuromorphic computing. The company has a neuromorphic chip called Loihi. Previously, the company put two of those chips together, which represented about the same number of artificial neurons as a fruit fly has real neurons. Sally’s story is about how Intel just scaled up that system to 768 of its Loihi chips and to about 100 million neurons, which she said is roughly the same number as a naked mole rat has. Here she is with international editor Junko Yoshida.
JUNKO YOSHIDA: Alright, so it’s 100 million neurons, as you described in your stories, roughly the same number of neurons as a mole rat or hamster has in its brain. So that got me thinking, Sally, how many neurons do you or I have in our brains? Do you have any guess?
SALLY WARD-FOXTON: So yeah, we think it’s about 86 billion (with a B), 86 billion neurons in the human brain. This actually was proved by a Brazilian scientist back in 2012. Up until then, we thought it was 100 billion. This is one of these figures that’s kind of widely quoted everywhere, but nobody could remember where this figure actually came from. So it was actually proved in 2012. It’s 86 billion.
JUNKO YOSHIDA: Wow. Well, that already makes me feel a little better or superior to a rat, I guess.
SALLY WARD-FOXTON: Better than a mole rat with a hundred million or a hamster at 90 million. Yeah. Or the poor old lobster with just 100,000, equivalent to one Loihi chip. Yeah.
JUNKO YOSHIDA: Wow. That’s interesting. All right. So first things first, Sally. I want you to step back and start from the very basics. You know, educate your ignorant editor here. What is neuromorphic computing? Let’s start from there.
SALLY WARD-FOXTON: So starting from page one, neuromorphic computing. It’s a branch of computing inspired directly by the structure of the brain. So the brain is a great computer. It does all kinds of complicated calculations. It does it very, very quickly. And it uses very little energy. The human brain only uses about 20 watts. So it’s very, very efficient. The idea behind neuromorphic computing is to combine our expertise in silicon with some structural concepts directly taken from the brain. Combine that with a neural network algorithm that’s also inspired by how the brain works to try and create a really efficient computer.
JUNKO YOSHIDA: Yeah, all right. So you summed that up very nicely. So now I understand. That’s part of the reason we now often hear terminology like “brain-inspired chips.” Right? And obviously Intel’s Loihi chip, the one used in the neuromorphic computing system in this announcement, is in fact also a brain-inspired chip. Correct?
SALLY WARD-FOXTON: Correct. Yeah.
JUNKO YOSHIDA: Yeah. All right, so tell us how that works.
SALLY WARD-FOXTON: Right. So basically the brain communicates using neurons, the nerve cells, and synapses. Neurons are the cells, synapses are the connections between the cells. And the connections can be electrical or chemical. A neuron only fires when its potential reaches a certain value. Neurons pass these signals on through the synapses to neurons further down the chain, and those are more or less likely to fire depending on the nature of the signal, let’s say, so that you basically get circuits made out of neurons and synapses that can process information.
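To make the threshold behavior Sally describes concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It illustrates the general spiking idea only; it is not Intel’s Loihi neuron model, and the weight, leak, and threshold values are arbitrary.

```python
# Minimal leaky integrate-and-fire neuron -- an illustration of the general
# spiking idea, not Intel's Loihi neuron model. All constants are arbitrary.

def simulate_lif(input_spikes, weight=0.6, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires.

    input_spikes: list of 0/1 values, one per time step, from an upstream neuron.
    """
    potential = 0.0
    fired_at = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike  # integrate input, with leak
        if potential >= threshold:                     # fire only past the threshold
            fired_at.append(t)
            potential = 0.0                            # reset after the spike
    return fired_at

print(simulate_lif([1, 0, 1, 1, 0, 1, 1]))  # prints [2, 5]
```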
And Loihi’s architecture mimics that structure by using techniques like extreme parallelism, many-to-many communication and asynchronous signals. So there are no multiply-accumulate units or anything like what you’d see in a traditional chip. It’s nothing like it. You basically get these electrical pulses that are called spikes, and it’s all about the timing of these spikes. The timing modulates the strength of the connections between the neurons, making them more or less likely to fire. So it’s kind of analogous to how weights affect the impact of the parameters in a more traditional kind of artificial neural network. Hopefully that made sense.
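As a rough illustration of how spike timing can modulate connection strength, here is a toy pair-based spike-timing-dependent plasticity (STDP) update. It is a generic textbook-style rule, not the learning rule Loihi actually implements, and the constants are made up.

```python
import math

# Toy pair-based STDP: the relative timing of a pre- and post-synaptic spike
# nudges the connection weight. A generic illustration, not Loihi's learning rule.

def stdp_update(weight, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    dt = t_post - t_pre  # positive: the pre-synaptic neuron fired first
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)   # causal pairing -> strengthen
    elif dt < 0:
        weight -= a_minus * math.exp(dt / tau)   # anti-causal pairing -> weaken
    return weight

w = 0.5
w = stdp_update(w, t_pre=10, t_post=12)  # pre fired just before post: w increases
w = stdp_update(w, t_pre=30, t_post=25)  # pre fired after post: w decreases
print(round(w, 3))
```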
JUNKO YOSHIDA: Yeah, no, it does. And you actually… it’s a perfect segue to my next question, because one of the things that a lot of people wonder is, what are the differences between a so-called brain-inspired chip and an AI chip? Right? I mean, both seem like, okay, they have something to do with intelligence, something to do with the brain. How are they different?
SALLY WARD-FOXTON: Right, so they’re both designed to run artificial neural networks or accelerate neural networks. But AI accelerators today are based on traditional computing, even if it’s some kind of really specialized ASIC. You know, they rely on many cores in parallel and lots of memory to hold all the weights and parameters inside the chip. Sometimes they have these clever flexible data paths. All the information is passed along from stage to stage every cycle. And if you’ve got a neural network where the activation is quite sparse, that is, you know, most of the data are zero, it still takes energy to multiply those weights by zero.
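A toy comparison may help here: a dense matrix-vector multiply touches every weight, zeros included, while an event-driven update only does work for the inputs that actually “spike.” Operation counts stand in for energy; this is a sketch of the idea under those assumptions, not a model of any real accelerator.

```python
import random

n_in, n_out = 256, 128
W = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
# Sparse activations: only ~5% of inputs are non-zero.
x = [1.0 if random.random() < 0.05 else 0.0 for _ in range(n_in)]

# Dense approach: every weight is multiplied every "cycle", zeros included.
dense_ops = 0
y_dense = [0.0] * n_out
for i in range(n_out):
    for j in range(n_in):
        y_dense[i] += W[i][j] * x[j]
        dense_ops += 1

# Event-driven approach: only active inputs ("spikes") trigger any work.
event_ops = 0
y_event = [0.0] * n_out
for j, xj in enumerate(x):
    if xj != 0.0:                      # silent inputs are skipped entirely
        for i in range(n_out):
            y_event[i] += W[i][j] * xj
            event_ops += 1

print(dense_ops, event_ops)  # event_ops is roughly 5% of dense_ops
```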
JUNKO YOSHIDA: Right.
SALLY WARD-FOXTON: So although both neuromorphic chips and AI accelerators run neural networks, neuromorphic computing only runs a very special, very niche type of neural network called spiking neural networks. As we said, it’s kind of inspired by these electrical pulses in the brain. But it’s asynchronous computing. The timing really matters. So not every neuron fires every cycle. Only the ones where the data isn’t zero. So you can basically save tons of power. So there’s quite an important distinction between traditional neural networks and neuromorphic algorithms, which is that traditionally, neural networks need tons of time and energy and reams and reams of well-labeled data to be trained before they can do the inference. But with neuromorphic techniques, you can use these kinds of one-shot training techniques.
So Intel did a project on an electronic nose based on Loihi that they talked about this week. They exposed it to just one sample of each smell, and that was it. Training done. You know, it effectively learns in real time. So it’s a totally different computing paradigm.
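For a sense of what “training done after one sample” can look like, here is a toy one-shot classifier that stores a single prototype per smell and classifies by nearest prototype. It is only an analogy for the one-sample idea; Intel’s electronic nose uses spiking representations and Loihi’s on-chip learning, not this kind of distance matching, and the labels and feature vectors below are invented.

```python
# Toy one-shot "training": store one prototype per class, classify by nearest prototype.
# An analogy only -- not how Intel's Loihi-based electronic nose actually works.

def train_one_shot(samples):
    # samples: {label: feature_vector}, exactly one example per class
    return dict(samples)

def classify(prototypes, x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda label: dist(prototypes[label], x))

# Hypothetical sensor readings for two smells (made-up numbers).
protos = train_one_shot({"ammonia": [0.9, 0.1, 0.2], "acetone": [0.1, 0.8, 0.3]})
print(classify(protos, [0.85, 0.15, 0.25]))  # -> ammonia
```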
JUNKO YOSHIDA: Right. That got me thinking that maybe it limits the applications of so-called neuromorphic chips compared to AI chips. I mean, talking to some experts in the field of neuromorphic engineering, I’ve always thought that those chips work best on the edge rather than in data centers. Am I wrong about that?
SALLY WARD-FOXTON: No, you’re not wrong. You’re not wrong. The first commercial applications will certainly be outside the cloud. The idea is to compute very efficiently and save power, like multiple orders of magnitude in terms of power. And also, you can do it very, very quickly, even in real time. So those two things are, you know, very useful for a lot of endpoint applications. The technology has been used so far to make sense of sensor data, as a great example. So for example, we’re seeing image sensors and cameras based on this technology come to the market that do image processing using these techniques.
JUNKO YOSHIDA: Okay, so then, reading from your story, I realized that Intel appears to believe neuromorphic chips are not just for the edge; they believe that scaling the system up is important. Why is that then?
SALLY WARD-FOXTON: So yeah, in the short term, Intel’s built this huge system to kind of enable researchers who are developing these spiking algorithms. It’s in the cloud purely to allow them remote access to it, and they’re not really using it for what you think of as cloud computing today.
Further into the future, I guess, yeah, there may be data center systems, just like we have AI accelerators in data centers today. If you can save power by a factor of 1000, of course, that’s going to be useful in the data center, too. That said, I think, you know, neuromorphic computing really does lend itself to real-world, real-time processing. So yeah, I mean, it’ll be interesting to see what happens. Intel have said they’ll build a big portfolio of systems of different sizes, perhaps even different chips of different sizes. Their eventual aim, I think, is to build bigger systems comparable to a mammal brain or even the human brain. I mean, this is where it gets really exciting/scary, right? I mean, even getting close to artificial intelligence.
JUNKO YOSHIDA: That was the ambition of IBM’s chip. I think it was called TrueNorth. IBM unveiled the TrueNorth chip six years ago, I think, and at that time the company claimed it was the first single self-contained chip to achieve one million individually programmable neurons. So tell us the challenges ahead, though. What sort of problems would neuromorphic computing engineers still have to solve?
SALLY WARD-FOXTON: So neuromorphic chips today are still in their infancy. I mean, they’ll develop, they’ll evolve as more work is done by companies like Intel. The algorithms, the spiking neural networks, those are essential to really taking advantage of this hardware. And they still have a long way to go. There’s a lot of research going on with the algorithms right now. Like Intel are working with brain scientists to try and understand more about how brains work in order to mimic the brain. That field still holds quite a few mysteries. So, you know, it’s not straightforward.
JUNKO YOSHIDA: Right.
SALLY WARD-FOXTON: In terms of commercialization, you know, it’s all about how engineers or developers will program this thing. I mean, the software has to be developed in parallel with the hardware. Engineers have to learn how to use it. And I guess they have to be convinced that it’s worth learning how to use it, so it becomes popular, just the same as for any new computing architecture. Right?
JUNKO YOSHIDA: Right. Right. So are we expecting different types of algorithms for different applications when it comes to neuromorphic computing?
SALLY WARD-FOXTON: Yes, absolutely. And that’s where a lot of the research is going right now. So some algorithms are really good for certain optimization problems, or there’s one called constrained parameter or something. There are all different types of problems that these neuromorphic algorithms are good at. But you need different algorithms; even though they’re all spiking neural networks, you need a different algorithm for each type of problem.
JUNKO YOSHIDA: All right. Okay. Well, thank you very much. We’ll talk to you soon then.
SALLY WARD-FOXTON: Thanks, Junko.