Machine Learning on the Edge with Zach Shelby, Episode 4 — BrainChip CEO Sean Hehir on Next-Gen AI Silicon IP

Join us for the latest edition of Machine Learning on the Edge, where Edge Impulse co-founder and CEO Zach Shelby talks with business and technology leaders about the latest developments in the world of edge AI. This month’s guest: Sean Hehir from BrainChip.

Watch here, listen to the podcast version, or enjoy the transcript below.

Zach Shelby: I’m Zach Shelby, and welcome back to Machine Learning on the Edge. We’re here at the Computer History Museum again to talk about what data and machine learning are doing for all industries: medical devices, automotive, logistics, consumer, you name it. We bring in leading executives and technologists from around the world to talk about what’s happening in this space. Today I’m really honored to have Sean Hehir, CEO of BrainChip, the company that’s bringing neuromorphic technology to the entire silicon industry. Sean, welcome.

Sean Hehir: Thanks Zach. Glad to be here.

Zach: Now, I want to dive into the history of this technology. Not a lot of people know what neuromorphic is. People are starting to understand AI, to think at a high level about what AI means and what the edge means. But neuromorphic has always fascinated me because of the connection with the brain. What’s the backstory of neuromorphic in the BrainChip journey?

Sean: Great way to start. I always like to start at the beginning. Neuromorphic thought, technology, principles, these are really not that new. The field was established in the mid-eighties, and the concept, if you think about the word neuromorphic, is really based on how the brain processes, and the brain is a very efficient way to do processing. Computer scientists started to think, "How can we mimic the brain to really take advantage of that kind of power of thinking and computation?" Once it came out, though, it was typically deployed as very custom, very analog, very one-off implementations, really not very practical in the world, and we even see that today with some of the bigger companies out doing very custom research. With BrainChip we decided to take the best of the neuromorphic principles, that very efficient computation engine, and apply it to mainstream technology. Take the best of both worlds. I like to talk about this a lot: we always accepted the traditional way of doing things, and neuromorphic says, why don’t we do it smarter, do it differently? So we have two founders at BrainChip. One is Peter van der Made; we refer to him as "the Brain." And Anil Mankar; we refer to him as "the Chip." Peter thought very deeply about neuromorphic principles for a long time, about how best to implement them, and he worked really hard to crack a digital implementation. As soon as he had a couple of proofs of concept, he quickly recruited Anil, a 30-plus-year veteran in the semiconductor space, and they started BrainChip. They worked for many, many years to deploy it, so it’s really the principles of the past brought into the modern architecture of today.

Zach: Digital was a major leap here, going from research-grade, super-analog implementations to the world of digital, which of course means we can fab it, we can produce it at volume at low cost. What does that move to digital do in practice? I guess it bridges a gap?

Sean: It brings it to mainstream technology, because you don’t want technology just for technology’s sake; you want it to serve mainstream use cases. To your point, you can produce it at scale, it’s stable, it’s predictable, all the things you need in mainstream technology. That’s the real key thing about us: it is a fully digital implementation of neuromorphic principles.

Zach: And what does neuromorphic do for us from a technology perspective? Think about AI developers, device OEMs... How should they think about neuromorphic at a very basic level? More efficient AI operator processing? But I’m guessing there are other things that come to mind for you.

Sean: Sure, a lot of it is efficiency, because the brain is so efficient. What we do in our implementation is really event-driven: we take advantage of sparsity and only compute on the changed state of things. So it’s always incredibly efficient, which translates to low power, and we’ll talk a lot about power later in our conversation. That low power makes it incredibly powerful. Likewise, because it’s so efficient you can get tremendous performance. A lot of times people talk about the power elements of our offering, but the performance attributes are incredibly strong on the high end as well. It also allows some very unique capabilities around localized learning: instead of learning and training up in the cloud, you can actually learn locally in a deployed use case, adding a new class or a new object to something you’ve already put in the field. Some very unique benefits from this technology.
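For readers curious what "only compute on the changed state of things" means in practice, here is a minimal, hypothetical sketch of event-driven, sparsity-aware processing. This is not BrainChip's implementation; the function and parameter names are illustrative. The idea is that only inputs that changed since the last time step ("events") trigger computation, so a mostly static input costs almost nothing:

```python
# Illustrative sketch of event-driven processing: only inputs whose
# value changed since the previous time step cause any work to be done.
# Not BrainChip's actual implementation; names are hypothetical.

def process_events(prev, curr, weights, accum, threshold=0.0):
    """Incrementally update an output accumulator using only changed inputs."""
    events = 0
    for i, (p, c) in enumerate(zip(prev, curr)):
        delta = c - p
        if abs(delta) > threshold:       # an "event": this input changed
            events += 1
            for j, w in enumerate(weights[i]):
                accum[j] += delta * w    # incremental update, no full recompute
    return events

# Two time steps where only one of four inputs changes:
weights = [[1.0, 0.5], [0.2, 0.1], [0.3, 0.3], [0.4, 0.2]]
accum = [0.0, 0.0]
frame0 = [0.0, 0.0, 0.0, 0.0]
frame1 = [0.0, 2.0, 0.0, 0.0]            # only input 1 changed
fired = process_events(frame0, frame1, weights, accum)
print(fired, accum)                      # 1 event fired; accum updated in place
```

With dense computation, every input contributes every time step; here, the three unchanged inputs cost only a comparison, which is the source of the efficiency (and therefore power) advantage described above.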

Zach: And now we’re making this technology easy to use, and I think that’s the holy grail of all of this: let’s put it in the hands of engineers. So for anybody who wants to try this out, we recently announced full Edge Impulse support, with multiple types of models running out of the box directly on BrainChip hardware. Go check that out, get yourself a kit, and see for yourself what this neuromorphic technology can do.

Sean, thinking about both of our backgrounds: I come from Arm, where we believed in ecosystem, and you ran ecosystem at HP. What does ecosystem mean for this edge AI industry?

Sean: How much time do we have? We could talk for a long time on this topic. You said it earlier: I ran all the ecosystem worldwide for Hewlett Packard, and I believe very strongly in the power of ecosystem, meaning no technology is an island unto itself. You’ve got to work with other partners, what I call to the left and the right, above and below you, at all times, for the benefit of our customers. In the end that’s really what it’s about: not for your benefit, not for my benefit, for the customers. They want to know, simply, does it work in the total solution, and they want confidence that it will work. Only through partnerships (and we can talk about those types of partnerships in a moment) where you’ve tested, you’ve configured, you’ve documented, you’ve proven these things, do they have the confidence to adopt your solutions. The other part that is really important is the ease part. With any technology a business buys, they want to ensure they can deploy it easily. The more you work with ecosystem partners, like our relationship, and they can see the ease of deployment, the ease of adoption, the ease of the models, et cetera, the more confident they are in that technology. So I think it’s fundamental to our strategy. One of the first things I did when I became CEO was to establish that ecosystem strategy and resource it. We’ve got some great relationships, and we’re going to accelerate that effort this year.

Zach: I think this is spot-on. When we think about Edge Impulse and the journey we’ve been on, it’s very much the same thing. Our job is to make the use of data and the deployment of custom models easy, accessible, and commercially viable for the end customers. And to do that we need system integrators, silicon vendors, silicon IP vendors, and device makers who can open up their hardware. We have people making gateways, cameras, industrial sensors. They need to open up their hardware for AI. There’s a whole world of companies, hundreds of companies, that we need to enable.

Sean: I’ll add a few more: consultants, application providers, all these companies.

Zach: So for people who are interested in checking out the ecosystem in this space: Edge Impulse Imagine, at the end of September, right here at the Computer History Museum where we’re sitting, a wonderful place to shoot and hold any event. People are welcome to attend and follow that show, because that’s where we try to bring the whole ecosystem together. But thinking about the neuromorphic BrainChip ecosystem, what about players like foundries, or the big architecture vendors like Arm? How do you fit into that ecosystem?

Sean: Absolutely. Well, let’s take foundries to start. When you’re an IP provider, people want confidence that your IP is portable, that it can be deployed in any kind of technology fabric. And even though we are an IP provider, we take our IP to silicon to prove it out, because that’s another thing: customers want to know that it actually works. So we have chips, we have kits, we have boards. Customers can take them, try them, see that it works incredibly well, take a license, and then design their own customized chip in a fashion optimized for them. We’ve made a conscious point so far to work with two major foundries, we’re also in a third foundry program that we announced recently, and we’re going to continue to expand these relationships so our customers can have confidence that they’ll be able to deploy in any kind of technology they want, because that’s their choice to make. Around processor companies and architecture companies, it’s critical, again to the comment I made a moment ago, that customers know it works well, whether it’s Arm, which you mentioned, or RISC-V. They want confidence that we work with them. So of course we work with them, we test, we configure, we document, for ease of adoption, because particularly on the edge this technology will always be deployed alongside those architectures.

Zach: Yeah, it’s really key from an architectural standpoint for people to understand that acceleration can be mixed and matched with so many different processor architectures. Neuromorphic acceleration plus Arm MCUs, neuromorphic acceleration plus Arm CPUs, neuromorphic acceleration plus RISC-V, on and on, right? There’s an unlimited number of combinations, and we have to give silicon vendors that design-space freedom to solve different industry problems, going from automotive to wearables, for example, which have totally different types of goals.

Sean: Absolutely.

Zach: Let’s talk about industries for a moment, because that’s where the rubber hits the road. We can dream of all the amazing acceleration and compute architectures, AI model architectures, but we really need to make industries and industry executives successful in improving their business. What are the top three industries for you that are driving edge AI business right now?

Sean: I’ll give you those three, but I would simply start by saying I’m always surprised, pleasantly surprised, by the inbound calls from multiple industries. Now I’ll give you our three leaders right now, because in the end, as you and I have talked about in the past, it’s got to make a practical business impact. The industries we see the most are usually highly competitive, usually with a leader in that market forcing change. The rest of the players in that market need to compete, so they want to do things that are breakthrough in nature to allow them to do that. The ones we see most right now are: auto, for sure. There are two major trends in auto: every car is getting smarter, and there’s a big push toward electric. If you’re electric, obviously the low power of our offering is critical. If you’re trying to be smarter, that usually means many sensors, and it also means distributed compute; the days of central ECUs are yesterday’s thinking. Distributed sensors, distributed compute, uploading the metadata, managing it centrally: that’s the model cars are going for. So we’re seeing a lot of interest in the auto vertical.

The second one is industrial, and industrial is a broad category in its use cases, but a lot is happening there. Everyone is looking for that competitive advantage, and in industrial it’s typically around edge deployment: vibration analysis, or sound, or something local that you can react to, and a lot of the time the driver is preventive maintenance. What can we do? Look at a signature pattern from sensor input and say, oh, that will keep this train on the track longer, it’ll keep our machinery running that much longer, we can get more output. So you’re increasing the top line and the bottom line at the same time.

And then of course the home. I put home and medical in that same general bucket, meaning anything we wear, perhaps, or a small appliance or small device in your home. The things on your wrist, the things in your ear: all very power-sensitive, yet they need very sophisticated models, and perhaps they’re untethered from the internet. So those are the ones we see the most, but I’m always pleasantly surprised by the inbound calls.

I’ll give you a fourth one: communication devices, and I’m not talking mobile. Very interesting, a lot of innovation in communications. Mainstream stuff, video conferencing as an example. Things like that, where people ask, how do I make my end product different, with a better user experience for the end users?

Zach: Well, that jibes with what we’re seeing in the market. Medical and health wearable devices: so much going on. We have amazing customers, and it’s really driven by the clinical data they can scale up with and the algorithm value they’re able to generate from that data. For them, that is the big value of their companies; why they’re able to raise money, why they’re able to go public, is based on that data and algorithm generation. So those customers have a big driver to go use edge AI technology. And in industrial, in addition to predictive maintenance and sensors, we see a ton going on in productivity and manufacturing. How can we make manufacturing more efficient? A great case study we did was with Advantech in Taiwan, for their own manufacturing facilities. We put in a worker safety monitoring system with computer vision, just an overhead camera, not looking at the workers themselves and their productivity, but at their safety. Because if there’s a safety violation that shuts down a line, it’s a disaster, and a lot of management’s time is spent watching over the safety of the workers. By automating that process, they found very quickly that there was a 15% improvement in productivity for the whole facility, because it freed up so much of the facility management’s time to concentrate on other things. That’s a lot of money, right? Think about what a manufacturing facility can produce in one day; 10 to 15% of that makes it worth investing in data and AI.

Sean: Absolutely, we’re just scratching the surface of a great opportunity right now. It’s interesting you mention productivity. Look at what’s going on here in the US: are we going into a recession or not? People wonder how the economy is doing so well with such low unemployment, and it’s because the productivity gains in this country are massive, and that’s going to continue with this kind of technology.

Zach: We talked a little bit about value proposition already, but I want to come back to it, because as technologists we kind of overlook this sometimes, right? We think, "Wow, it’d be so cool to put an AI camera in this thing," or "It’d be so cool to detect sleep on this wearable device," but that’s not really how executive business decision-makers think. So what do we need to do to really have this take off and become mainstream with device OEMs and the end users of the technology? What’s the thing that’s driving value, especially in a market that’s more challenging this year? We’re having to justify value a lot more carefully.

Sean: Well, it’s almost related to the last question. All businesses that approach us, and I’m sure it’s the same with you, are looking for a couple of things. What is going to increase their top line, give them greater market penetration, or perhaps raise the price of their end product? Is it the kind of technology, a feature or function, where they’d say, "Oh, I could charge 20% more if I had this"? Those are the conversations I have a lot with business leaders, not the technology conversations; I have great people in our organization for those. I talk to business leaders saying, yes, if we could do this, you could charge more or gain some market share. They like that conversation. Or, conversely: what can you do to help us on the bottom line? Some of the things you just mentioned. So, as technologists, we often neglect that importance, which is translating the technology into value for the enterprise, and we are spending a lot of time on those conversations I just outlined, which is: yes, this will help you; it’s not just technology for technology’s sake.

Zach: And one data point that I have: we work with a lot of teams trying to get into this technology space across these industries. They know they need to start making use of their data, and they have access to valuable data; whatever industry you’re in, you’re sitting on gold, which is your unique data from your business. But applying that data to algorithms is a big jump: the amount of skills you need, the ability to deploy this on real accelerated edge compute so you actually get the benefits. Power, right? Being independent on edge compute without having to go to the cloud. It’s very hard to make that jump. We’re finding that the teams that need to be built to take advantage of edge ML can be five to ten people, with 12 to 18 months of infrastructure building, just to have the tooling and deployment capabilities for one piece of edge compute, let alone several generations of different silicon targets. That can cost on the order of three million dollars a year just in building up your own infrastructure, which is crazy when you consider that the cost savings we’re going after are sometimes in the one to three million dollars a year range. That’s already a big win. If you’re spending that much just building up base infrastructure, it’s not really worth it. So there’s a lot to learn about how we apply this, how we make it easy, and I think the key is making it easy for the teams that already exist, that are already skilled in the problems they’re solving in their own business, and getting their time to market down as low as we can, literally months rather than years.

Sean: I think that’s a great way to put it. In fact, when I gave you the answer to your last question, I wanted to talk about the ease of adoption and, quite frankly, why the relationship with Edge Impulse is so important to us. When I think of your company, the things you just described are exactly what you’re doing: making it easier, speeding up deployment. These are critical needs in the market to help move it along that much quicker.

Zach: Awesome. Sean, thank you for joining me on ML on the Edge. I hope everyone enjoyed the conversation, really diving into what’s happening with neuromorphic technology, a fascinating set of technologies. I think we’ve only scratched the surface of what’s possible with this. I encourage everybody to check out what BrainChip is doing, and check out the latest support that we have, to put it into real deployment and see for yourself what you can do with the latest in AI acceleration.

Sean: Thanks Zach, let’s do it again.
