STMicroelectronics STM32Cube.AI Development Tool Adds Deep Quantized Neural Network Support

Post Date: 2022-08-09, STMicroelectronics

STMicroelectronics (ST) has released STM32Cube.AI version 7.2.0, the first artificial-intelligence (AI) development tool from a microcontroller manufacturer to support ultra-efficient deeply quantized neural networks.

STM32Cube.AI converts pre-trained neural networks into C code that STM32 microcontrollers (MCUs) can run. It is an important tool for developing cutting-edge AI solutions that make full use of the limited memory and computing power of embedded products. Moving AI from the cloud down to edge devices can bring significant benefits to applications, including inherent privacy protection, deterministic real-time response, higher reliability, and lower power consumption. Edge AI can also help optimize cloud-computing usage.
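
As an illustration of that workflow, the minimal sketch below (an assumption for illustration, not taken from ST's documentation) builds a small Keras model and saves it in the .h5 and quantized TensorFlow Lite formats that STM32Cube.AI can import before generating C code; the model architecture and file names are purely illustrative.

```python
import tensorflow as tf

# Stand-in for a pre-trained model: a tiny Keras classifier (illustrative only).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Save in Keras .h5 format, one of the model formats STM32Cube.AI imports
# before generating equivalent C code for an STM32 MCU.
model.save("network.h5")

# Optionally convert to a quantized TensorFlow Lite flatbuffer to reduce
# flash and RAM footprint on the target device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("network_quant.tflite", "wb") as f:
    f.write(converter.convert())
```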

Now, by supporting deeply quantized input formats such as QKeras or Larq, developers can further reduce neural-network code size, memory footprint, and response latency. These advantages open up more possibilities for edge AI, including cost-sensitive applications. As a result, developers can create edge devices such as self-powered IoT endpoints that deliver advanced functionality and performance with longer battery runtime. From ultra-low-power Arm Cortex-M0 microcontrollers to high-performance products built on Cortex-M7, Cortex-M33, and Cortex-A7 cores, ST's STM32 family gives developers a wide choice of suitable hardware platforms.
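
For context, deeply quantized models of this kind are typically trained with quantization-aware layers. The sketch below is a minimal QKeras example (the layer sizes and 4-bit settings are assumptions, not ST recommendations); the resulting Keras model could then be trained and handed to STM32Cube.AI like any other network.

```python
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

# Toy 4-bit quantized classifier; shapes and bit widths are illustrative only.
inputs = Input(shape=(28, 28, 1))
x = Flatten()(inputs)
x = QDense(64,
           kernel_quantizer=quantized_bits(4, 0, 1),
           bias_quantizer=quantized_bits(4, 0, 1))(x)
x = QActivation(quantized_relu(4))(x)
outputs = QDense(10,
                 kernel_quantizer=quantized_bits(4, 0, 1),
                 bias_quantizer=quantized_bits(4, 0, 1))(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```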

STM32Cube.AI version 7.2.0 also adds support for TensorFlow 2.9 models, improves kernel performance, and adds new scikit-learn machine-learning algorithms and new Open Neural Network Exchange (ONNX) operators.
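
By way of example, a classical scikit-learn model is commonly brought into an ONNX-based flow before deployment; the short sketch below uses the skl2onnx converter (a common conversion path, not one named in this announcement) to export a random-forest classifier as an .onnx file.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a small illustrative classifier on the Iris dataset.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Convert to ONNX; the input has 4 float features, batch size left dynamic.
onnx_model = convert_sklearn(
    clf, initial_types=[("float_input", FloatTensorType([None, 4]))]
)
with open("rf_iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```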
