
So Many Ways to Export Your Impulse

By Edge Impulse Team

Everyone loves a system that keeps improving, with new interfaces, more ways to engage, and upgraded features, and Edge Impulse is happy to deliver these improvements to users of our tools. One area of regular expansion is the range of ways Edge Impulse Studio can deploy edge ML models. Impulses built on our platform can already run on almost any processor, but to take full advantage of the advanced features specific boards offer, we regularly add deployment options that natively leverage each board's architecture and IP, giving the best, fastest deployments possible.

Here are some of the methods our users can currently access to push their creations to the edge. This list is just a starting point, as we're regularly adding new output options and other features, so keep your eyes peeled for announcements.

C++ library — Trained models can be deployed as a C++ library. This bundles all of your signal processing blocks, configuration, and learning blocks into a single package. You can include it in your own application to run the impulse locally, on embedded targets and desktops alike.
docs.edgeimpulse.com/docs/deployment/running-your-impulse-locally

Arduino library — All-in-one package for Arduino boards. You can include this package in your own Arduino sketches to run the impulse locally. docs.edgeimpulse.com/docs/deployment/running-your-impulse-arduino

WebAssembly — Impulses can be deployed as a WebAssembly library. You can include this package in web pages, or as part of your Node.js application. This allows you to run your impulse without any compilation. docs.edgeimpulse.com/docs/deployment/webassembly/through-webassembly

TensorRT library — You can also use Edge Impulse to create models optimized for inference on NVIDIA GPUs, and run them on embedded boards such as the NVIDIA Jetson Nano. docs.edgeimpulse.com/docs/edge-impulse-for-linux/linux-cpp-sdk#tensorrt

Ethos-U — A C++ library with your model converted to custom ops targeting an attached Arm Ethos-U55 NPU for inference acceleration.
docs.edgeimpulse.com/docs/deployment/running-your-impulse-locally/running-your-impulse-alif-ensemble

Tensai Flow library — For those using the Synaptics Katana EVK, Edge Impulse has an option to create and deploy an impulse that runs natively on its KA10000 board. The impulse takes advantage of the KA10000's AI neural network processor via the Tensai Flow software platform.
docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/synaptics-katana

Simplicity Studio components — Impulses can be deployed as components ready for use with Simplicity Studio and pushed to virtually any Silicon Labs embedded microcontroller. docs.edgeimpulse.com/docs/deployment/running-your-impulse-locally/on-your-thunderboard-sense-2

Have a specific platform that you’d like to see us support natively? Send us a note.
