TinyRaptor is a fully programmable AI accelerator designed to execute deep neural networks (DNNs) in an energy-efficient way.
TinyRaptor reduces the inference time and power consumption needed to run Machine Learning (ML) Neural Networks (NN), while remaining scalable and providing a seamless way to deploy AI/ML in any SoC.
TinyRaptor is particularly well suited for edge computing applications on embedded platforms with both high-performance and low-power requirements.
Read our news "Dolphin Design wins an Embedded Award for Tiny Raptor, its Energy-Efficient Neural Network AI Accelerator"
Key Benefits
- Near-memory computing technology to improve energy efficiency and decrease memory bandwidth requirements
- Hardware flexibility to cover various NN model architectures
- Native compatibility with standard AI frameworks (Keras, TensorFlow, PyTorch, etc.)
- Robust and easy-to-use SDK for seamless programming of the hardware
- Easy and swift evaluation of model performance using the TinyRaptor model and virtual platform

Key Performances
- High energy efficiency: >5 TOPS/W
- More than 90% energy reduction compared to a traditional MCU for AI/ML workloads
- Scalable from 32 to 128 MAC/cycle
- Extremely small area: <0.1 mm²
- Configurable amount of tightly-coupled memory
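To give a feel for what the 32 to 128 MAC/cycle scalability range means in raw compute terms, the sketch below converts MACs per cycle into peak TOPS using the usual convention that one MAC counts as two operations (a multiply plus an add). The clock frequency used here is a hypothetical example value for illustration only, not a TinyRaptor specification.

```python
# Illustrative peak-throughput arithmetic for a MAC-array accelerator.
# The 32-128 MAC/cycle range comes from the text above; the clock
# frequency is an assumed example value, not a TinyRaptor figure.

def peak_ops_per_second(macs_per_cycle: int, clock_hz: float) -> float:
    """One MAC counts as two operations (multiply + add)."""
    return 2 * macs_per_cycle * clock_hz

ASSUMED_CLOCK_HZ = 500e6  # hypothetical 500 MHz clock

for macs in (32, 64, 128):
    tops = peak_ops_per_second(macs, ASSUMED_CLOCK_HZ) / 1e12
    print(f"{macs:3d} MAC/cycle -> {tops:.3f} TOPS peak")
```

Note that the headline >5 TOPS/W figure is an energy-efficiency metric (operations per joule), so it scales with both the MAC array size and the power drawn at the chosen operating point, not with throughput alone.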