TinyRaptor is a fully programmable accelerator designed to execute deep neural networks (DNNs) energy-efficiently, reducing the inference time of machine learning (ML) workloads. TinyRaptor is particularly well suited to edge computing applications on embedded platforms with both high-performance and low-power requirements.
Key Benefits
- Effortlessly deploy sub-mW Vision AI applications in days, not months
- Hardware flexibility to cover various NN model architectures
- Native compatibility with standard AI frameworks (Keras, TensorFlow, PyTorch, ...)
- Robust and easy-to-use SDK for seamless programming of the hardware
- Easy and swift evaluation of model performance using the TinyRaptor model and virtual platform
Key Performance Figures
- High energy efficiency: >5 TOPS/W
- More than 90% reduction in energy consumption compared to a traditional MCU for AI/ML workloads
- Scalable from 32 to 128 MACs/cycle
- Extremely small area: <0.1 mm²
- Configurable amount of tightly-coupled memory