Making things smarter

Edge Impulse is the leading development platform for machine learning on edge devices, free for developers and trusted by enterprises.


Example live inferences: Crane Operational · Elephant Activity · Motion Detected · Crane Operation Healthy · Human Proximity Confirmed · Heart Rate Variability

Trusted by thousands of embedded developers running critical machine learning projects across millions of data samples.


Build a model in 5 minutes.

Want to see Edge Impulse in action? Build a model in real time: collect data with your phone's accelerometer, microphone, or camera, train a machine learning algorithm, and watch the results live on the platform. No signup required — just scan the QR code on the right to get started!

Try it now


Use your phone’s camera or QR reader app to scan this code, and start building your tinyML model using your phone.



Start using your device data

Put to work the 99% of sensor data that is discarded today due to cost, bandwidth, or power constraints. From getting started to MLOps in production, Edge Impulse delivers maximum efficiency on hardware ranging from MCUs to CPUs, thanks to Edge Optimized Neural (EON™) technology.

Acquire valuable training data securely. Enrich data and train ML algorithms. Test impulses with real-time data flows. Deploy to embedded and edge compute targets.

Embedded TinyML for beginner and advanced developers

Edge Impulse was designed for software developers, engineers, and domain experts to solve real problems using machine learning on edge devices — no PhD in machine learning required. Check out the cloud-based UX, thorough documentation, and open-source SDKs.


The most innovative individuals and organizations use Edge Impulse

Meet some of the leaders who use Edge Impulse to power their embedded machine learning.


Meet the new industry standard:
Edge Optimized Neural (EON™) by Edge Impulse

This new compiler will kick your #TinyML code into overdrive: it runs a neural network in 25–55% less RAM and up to 35% less flash than TensorFlow Lite for Microcontrollers, while retaining the same accuracy. EON achieves this by compiling your neural network directly to C++ rather than shipping a generic interpreter alongside it, eliminating interpreter overhead and saving code space, device power, and precious time.

See it in action
Arm · STMicroelectronics · Hackster · TensorFlow · The Things Network · TinyML · Eta Compute · Arduino · Microchip · Himax

Get in touch.

Are you interested in bringing machine learning intelligence to your devices? We're happy to help.

Contact us