Build a Keyword Spotting Model with Your Own Voice in 30 KB of RAM

Always wanted to build your own “Hey, Siri”-type device? Want to control your lights with your voice? Want to send a message to the cloud when you growl? You can do that with Edge Impulse! Over the last few months we have added a bunch of new features to make it easier to build machine learning models that can deal with human speech. We’ve added new processing blocks, new tools to automatically find keywords in long audio files, and new visualizations that show you whether your dataset is in good health. All of this comes together in our latest tutorial: Make your device respond to your voice.

This tutorial guides you through every step required to build a real TinyML model that responds to your voice: no pretrained models, no ready-made datasets, and no fixed keywords. Want your device to listen for your own name? Go for it! You’ll learn how to collect data from one of our fully supported development boards or your mobile phone, how to train an ML model, and finally how to deploy the model back to your device, where it classifies audio in real time. The final model runs in under 30 KB (yes, 30 KB) of RAM, and can classify five times a second even on a 40 MHz microcontroller.
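Curious what the deployed model looks like in code? Below is a minimal sketch of an on-device inference loop written against the C++ library you can export from Edge Impulse. The `fill_audio_buffer()` function is a hypothetical stand-in for your board's microphone driver, and the 0.8 confidence threshold is an arbitrary example; the tutorial walks through the real deployment steps for each supported board.

```cpp
// A minimal on-device inference loop, sketched against the C++ library
// that Edge Impulse exports. fill_audio_buffer() is a hypothetical
// stand-in for your board's microphone driver.
#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float audio_buffer[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

// Hypothetical: fill the buffer with one window of audio samples,
// converted to float, from the microphone.
extern void fill_audio_buffer(float *buf, size_t len);

// Callback through which the classifier pulls slices of the raw signal
static int get_audio_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, audio_buffer + offset, length * sizeof(float));
    return 0;
}

int main() {
    while (true) {
        fill_audio_buffer(audio_buffer, EI_CLASSIFIER_RAW_SAMPLE_COUNT);

        // Wrap the raw buffer so run_classifier() can stream from it
        signal_t signal;
        signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
        signal.get_data = &get_audio_data;

        ei_impulse_result_t result = { 0 };
        if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
            continue; // classification failed; try the next window
        }

        // Report any label whose confidence crosses the example threshold
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            if (result.classification[ix].value > 0.8f) {
                printf("Heard: %s (%.2f)\n",
                       result.classification[ix].label,
                       result.classification[ix].value);
            }
        }
    }
}
```

In practice you would run this continuously over a sliding window of audio, which is what lets the model classify several times per second on a small microcontroller.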

Get started by clicking “Play” on the video above, or go here for the written tutorial: Make your device respond to your voice.

Looking for something else? You can also build models that recognize motions, detect non-voice audio, or visually recognize objects!

-

Jan Jongboom is the CTO and cofounder of Edge Impulse. He’s glad that at least something listens to him at home.
