Artificial Intelligence (AI) and Machine Learning (ML) have been hot topics since OpenAI released ChatGPT to a wide audience for free. While mainstream media focus on large language models, some people wonder whether AI can also be applied to resource-limited hardware, like microcontrollers and embedded devices in general. This may sound ridiculous at first, since ML requires tons of data and a lot of computing power to train a neural network. That is true, but once you have trained your model, you can deploy it to a really tiny and energy-efficient device. Read on to learn about possible platforms for edge computing.
Nvidia Jetson Nano
Nvidia has its own line of ML-dedicated chips – the Jetson series. The numbers alone are impressive: the most powerful Jetson AGX Orin offers up to 275 TOPS, but the Jetson Nano is a better match for a comparison with the Coral board described below. There is also a Jetson Nano Developer Kit to get started with the modules quickly. While both vendors are easy to work with, Nvidia is about double the price at $150 USD for a development board. That said, it is also more versatile – it runs Linux with PyTorch and CUDA support onboard, so you can run models directly, without any platform-specific deployment procedure. You can still connect external sensors via serial protocols like SPI or I2C, but keep in mind that everything goes through Linux, so access to peripherals is less direct than on a bare-metal MCU. The Jetson can also be used for tasks beyond edge computing, as it has a powerful general-purpose GPU, so other GPU-accelerated workloads benefit too.
Google Coral Dev Board Micro
Though not the first of its kind, the Google Coral Dev Board Micro is one of the smallest and cheapest ($80 USD) standalone boards dedicated to AI. It consists of several major components – the heart is the NXP i.MX RT1176, an Arm Cortex-based host controller. The Coral Edge TPU (tensor processing unit) coprocessor is connected to the host processor via a USB interface. The board also carries a microphone and a camera for collecting data, along with a secure element and some extra flash and RAM. Let's take a closer look at the Coral TPU module, which is the star attraction here. Google claims a peak computing power of 4 TOPS (4 trillion operations per second) for this ML accelerator. This is an edge device, meaning all the AI magic happens on the device itself, with no need to send data to the cloud for processing and back. If your use case needs more connectivity, there are simple click-on extension boards for this module, including BLE, WiFi, or PoE (Power over Ethernet).
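To get a feel for what 4 TOPS means in practice, here is a back-of-the-envelope calculation. The per-inference operation count below is an illustrative figure for a MobileNet-class image classifier, not an official Coral benchmark:

```python
# Back-of-the-envelope: how many inferences per second could a 4 TOPS
# accelerator sustain for a model of a given size? Figures are illustrative.

TOPS = 4                       # claimed peak: 4 trillion ops per second
PEAK_OPS = TOPS * 10**12

# A MobileNet-class image classifier needs on the order of ~1 billion
# multiply-accumulates (~2 billion ops) per inference.
OPS_PER_INFERENCE = 2 * 10**9

theoretical_fps = PEAK_OPS / OPS_PER_INFERENCE
print(f"Theoretical upper bound: {theoretical_fps:.0f} inferences/s")
# Real throughput is much lower: memory bandwidth, the USB link to the
# host MCU, and pre/post-processing all eat into the peak figure.
```

Even with a generous margin for those overheads, this is ample headroom for real-time video or audio pipelines on such a small board.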
You can use a library of pre-trained models from Google, or build your own, train it, and deploy it to the Coral Dev Board Micro. Starting with a pre-trained model prepared by Google lets you get going quickly and see what the hardware is capable of. Later, when building your own project, you can write applications in Python or C/C++ (if you are not planning to use Linux on the host system, C/C++ is the only option). This board is clearly designed for low-level embedded developers: there is an API for FreeRTOS and drivers for communication protocols like SPI and I2C (covering the camera and microphone as well), which makes it easy to adapt to custom input data.
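Whether you run a pre-trained Google model or your own, your application code ultimately receives a vector of raw scores and has to turn it into a decision. A minimal, framework-free sketch of that post-processing step (the labels and logit values are made up for illustration):

```python
import math

def top_k(scores, labels, k=3):
    """Return the k highest-scoring (label, probability) pairs.

    `scores` are raw model outputs (logits); a softmax turns them
    into probabilities before ranking.
    """
    exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Made-up output of a hypothetical 4-class classifier:
labels = ["cat", "dog", "bird", "background"]
logits = [2.1, 4.0, 0.3, 1.2]
for label, prob in top_k(logits, labels, k=2):
    print(f"{label}: {prob:.2f}")
```

The same logic applies regardless of whether the scores come from the Edge TPU, a Jetson, or a bare-metal MCU; only the inference call that produces them differs.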
STM32
It may be surprising, but you don't need to buy an expensive, fancy development board from a leading AI company to start playing with edge computing. Using TensorFlow Lite or STM32Cube.AI, you can generate platform-optimized code that runs on hardware as old as 10-15 years (e.g. the STM32F4 family). This approach is of course less powerful than the more recent platforms mentioned above, but it is far cheaper and may be enough for strictly embedded applications, where you simply want to replace hand-written logic with an ML-based equivalent without touching the rest of the features. It is an attractive option for vendors that already ship such microcontroller units (MCUs) in their products and could benefit from a firmware update. Real-life use cases demonstrate how predictive maintenance can be applied effectively in factories and how machine learning can classify motor faults.
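The trick that lets a decade-old Cortex-M run a neural network at all is quantization: weights and activations are stored as 8-bit integers, multiply-accumulated in integer arithmetic, and only rescaled to real values at the end. Here is a deliberately simplified, framework-free illustration of one quantized dense computation (all numbers are made up; TensorFlow Lite's actual int8 scheme additionally uses zero-points and per-channel scales):

```python
def quantize(values, scale):
    """Map floats to int8 as round(v / scale), clamped to [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dense_int8(x_q, w_q, x_scale, w_scale):
    """One neuron: int8 multiply-accumulate into a wide accumulator,
    then a single rescale back to float.

    On an MCU the inner loop is pure integer arithmetic, which is why
    this runs fine without an FPU or a dedicated accelerator.
    """
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q))  # fits in int32
    return acc * x_scale * w_scale                  # back to real units

# Made-up input activations and weights:
x = [0.5, -1.0, 0.25]
w = [0.8, 0.1, -0.4]
x_scale, w_scale = 0.01, 0.01

x_q = quantize(x, x_scale)
w_q = quantize(w, w_scale)

exact = sum(a * b for a, b in zip(x, w))            # float reference
approx = dense_int8(x_q, w_q, x_scale, w_scale)     # quantized result
print(exact, approx)
```

The quantized result closely tracks the float reference while the weights shrink to a quarter of their float32 size, which is what makes models fit into the flash and RAM budgets of an STM32F4-class part.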
Raspberry Pi and other mini-computers
You can tackle edge AI from many angles, and another is to simply start with a small single-board computer like the Raspberry Pi, or one of its many alternatives. Some of them have a built-in graphics processing unit (GPU), and some (like the new Raspberry Pi 5) expose a PCIe port for connecting TPU accelerators (e.g. the Coral PCIe Accelerator). The advantage of this approach is versatility: a Raspberry Pi can be used in many ways, so even if edge computing turns out not to be your thing, you can still build awesome stuff with the platform.
Conclusion
AI is here to stay, and for companies trying not to be left behind, both the Google and Nvidia solutions are impressive options. While Google focuses more on processing video and sound, Nvidia seems more versatile and shares open-source projects built on its modules so people can get to know the ecosystem. You can also still use an old-fashioned bare-metal MCU as a base for edge computing. In the coming years, we will likely see dynamic development in the edge computing field.
To find out more about integrating AI into your embedded software projects, get in touch with our embedded specialists by filling out this form.
Sources
https://coral.ai/products/pcie-accelerator/
https://coral.ai/products/dev-board-micro#tech-specs
https://coral.ai/docs/dev-board-micro/datasheet/
https://theblue.ai/blog-pl/czym-jest-edge-ai-coral/
https://www.nvidia.com/pl-pl/autonomous-machines/embedded-systems/
About the author

Adam Bodurka
Embedded Software Specialist
An Embedded Software Specialist with 9 years of experience, Adam has been involved in building solutions for companies across industries. A programming background has given Adam practical experience with various wired and wireless protocols, technology stacks and work methodologies. Open to new challenges, Adam is passionate about staying up to date with the newest technologies.