Imagimob announced that the new release of its tinyML platform Imagimob AI supports end-to-end development of deep learning anomaly detection. A key strength of deep learning anomaly detection is that it delivers high performance while eliminating the need for feature engineering, thereby reducing costs and time-to-market.
Beyond removing the need for feature engineering, deep learning anomaly detection can also take full advantage of the new generation of powerful neural network processors now reaching the market, delivering excellent performance. This means that customers deploying at the edge can make the most of their hardware.
Feature engineering, in simple terms, is the act of converting raw observations into desired features using statistical or mathematical functions. It normally requires domain expertise and is generally very time-consuming.
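For illustration only, the kind of hand-crafted features that deep learning makes unnecessary might look like the sketch below. The window length, sampling rate and feature choices are assumptions, and the function is not part of Imagimob AI:

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Hand-crafted statistical features for one window of raw sensor samples.

    Generic illustration of feature engineering: each feature has to be
    chosen and tuned by a domain expert for the machine being monitored.
    """
    return np.array([
        window.mean(),                               # average level
        window.std(),                                # overall variability
        np.sqrt(np.mean(window ** 2)),               # RMS energy
        window.max() - window.min(),                 # peak-to-peak amplitude
        np.abs(np.diff(np.sign(window))).sum() / 2,  # approximate zero-crossing count
    ])

# Example: one second of vibration data sampled at 1 kHz (placeholder values)
raw_window = np.random.randn(1000)
features = extract_features(raw_window)
```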
With the added support for autoencoder networks in Imagimob AI, developers can now build anomaly detection models in less time and with better performance, allowing customers to reduce development costs and shorten time to market.
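As background, the general idea behind autoencoder-based anomaly detection is to train a network to reconstruct normal data only, and to flag windows whose reconstruction error is high. The Keras sketch below illustrates that idea under assumed window size, channel count and layer sizes; it is not Imagimob's implementation, which is handled end-to-end inside the tool:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 256    # samples per window (assumed)
CHANNELS = 3    # e.g. a 3-axis accelerometer (assumed)

def build_conv_autoencoder() -> tf.keras.Model:
    """1-D convolutional autoencoder: compress a sensor window, then reconstruct it."""
    inputs = layers.Input(shape=(WINDOW, CHANNELS))
    # Encoder: progressively downsample the window
    x = layers.Conv1D(16, 7, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(8, 7, strides=2, padding="same", activation="relu")(x)
    # Decoder: mirror the encoder to reconstruct the original window
    x = layers.Conv1DTranspose(8, 7, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv1DTranspose(16, 7, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv1D(CHANNELS, 7, padding="same")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train only on windows of normal machine behaviour (placeholder data)
normal_windows = np.random.randn(1000, WINDOW, CHANNELS).astype("float32")
autoencoder = build_conv_autoencoder()
autoencoder.fit(normal_windows, normal_windows, epochs=10, batch_size=32, verbose=0)

# A window whose reconstruction error exceeds a chosen threshold is flagged as anomalous
def is_anomaly(window: np.ndarray, threshold: float) -> bool:
    reconstruction = autoencoder.predict(window[np.newaxis, ...], verbose=0)[0]
    return float(np.mean((window - reconstruction) ** 2)) > threshold
```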
The anomaly detection solution from Imagimob has been verified on real-world machine and sensor data.
New anomaly detection features:
- End-to-end training and deployment of convolutional autoencoder networks for anomaly detection/predictive maintenance
- Anomaly detection starter-project for rotating machinery to get you up and running in minutes
Other improvements:
- Support for model quantization in the graphical user interface. Quantized models reduce model size and decrease inference time on MCUs without an FPU (see the quantization sketch at the end of this section)
- Improved model prediction – track how models perform at millisecond resolution and under different confidence thresholds before deploying
- Faster training and model evaluation
- Increased support for large data sets
- 8 starter projects in total
- Starter project for Renesas RA2L1 – Capacitive Touch Sensing Unit
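Imagimob AI performs quantization directly in its GUI. Purely as an illustration of what post-training integer quantization does in general, the sketch below uses TensorFlow Lite (not Imagimob's tooling) with a placeholder model and placeholder calibration data:

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for a trained anomaly detection network
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 3)),
    tf.keras.layers.Conv1D(8, 7, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(3, 7, padding="same"),
])

# Representative samples let the converter calibrate int8 ranges (placeholder data)
def representative_data():
    for _ in range(100):
        yield [np.random.randn(1, 256, 3).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer kernels so the model can run on an MCU without an FPU
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)  # typically about 4x smaller than the float32 model
```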