Sensor data is typically preprocessed with DSP in tinyML applications. As engineers deploy neural networks on ever smaller processors, it is becoming necessary to tune DSP algorithms to fit within RAM and real-time processing constraints. But not all steps in a DSP pipeline are created equal! Knowing how to find sections to slim down can mean the difference between giving up a few percent of accuracy and ending up with a model that’s no longer usable.
Join our DSP Online Conference workshop on October 6th as Alex Elium demonstrates experimenting with DSP parameter choices (number of cepstral coefficients, spectrogram frame size, etc.) for an example keyword-spotting classifier and analyzes the RAM, latency, and accuracy impacts of various scenarios. Attendees will leave with ideas on where to find elusive kilobytes of RAM and milliseconds of latency the next time they need to optimize a DSP pipeline.
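To give a flavor of the kind of back-of-the-envelope analysis involved, here is a minimal sketch of how parameter choices translate into feature-matrix RAM. The function name and the specific window sizes, strides, and coefficient counts below are illustrative assumptions, not the workshop's actual configuration:

```python
# Hypothetical sketch: estimate the RAM needed to hold the cepstral-coefficient
# matrix of a keyword-spotting front end under different DSP parameter choices.
# All parameter values are illustrative assumptions.

def feature_ram_bytes(window_ms, stride_ms, num_coeffs,
                      clip_ms=1000, bytes_per_value=4):
    """RAM for one audio clip's feature matrix (frames x coefficients)."""
    # Number of analysis frames that fit in the clip at the given stride.
    num_frames = 1 + (clip_ms - window_ms) // stride_ms
    return num_frames * num_coeffs * bytes_per_value

# Two scenarios: a dense baseline vs. a slimmed-down pipeline.
baseline = feature_ram_bytes(window_ms=25, stride_ms=10, num_coeffs=40)
slimmed = feature_ram_bytes(window_ms=40, stride_ms=20, num_coeffs=13)

print(f"baseline: {baseline} bytes")  # 98 frames x 40 coeffs x 4 B = 15680
print(f"slimmed:  {slimmed} bytes")   # 49 frames x 13 coeffs x 4 B = 2548
print(f"saved:    {(baseline - slimmed) / 1024:.1f} kB")
```

Halving the frame rate and trimming the coefficient count saves over 12 kB here before the model has even run, which is exactly the kind of trade-off against accuracy the workshop explores.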
Space for the session is limited, so save your spot today! Registration is free.