Consentium Edge Machine Learning
Overview
EdgeNeuron is an Arduino-style wrapper for TensorFlow Lite Micro, making it easier to deploy machine learning models on microcontrollers. It abstracts TensorFlow Lite Micro functionalities into intuitive functions that resemble standard Arduino API usage. This library aims to simplify the process of setting up and running machine learning inferences on embedded devices such as the ESP32, Nano 33 BLE, and Portenta, without requiring deep knowledge of TensorFlow Lite's internal workings.
Features
Easy integration with TensorFlow Lite Micro models.
Simple function-based interface for model initialization, input handling, and inference execution.
Avoids advanced C++ constructs (e.g., direct pointer manipulation) that are generally discouraged in Arduino sketches.
Efficient memory management using a user-defined tensor arena buffer.
Supports a variety of microcontroller architectures such as mbed_nano, esp32, mbed_nicla, mbed_portenta, and mbed_giga.
Architecture
EdgeNeuron is built on top of the TensorFlow Lite Micro library and offers a simplified interface for interacting with TensorFlow Lite models. The core components of the library include:
Model Initialization: Handles loading the TensorFlow Lite model into memory and allocating the required tensors.
Input Data Handling: Provides an interface to set input data to the model's input tensor.
Inference Execution: Executes the inference cycle and retrieves the output predictions.
Tensor Memory Management: Defines a user-controlled memory area for storing tensors and running inferences efficiently.
Supported Platforms
EdgeNeuron supports the following microcontroller architectures:
mbed_nano
esp32
esp8266
mbed_nicla
mbed_portenta
mbed_giga
Installation
Using Arduino Library Manager
Open the Arduino IDE.
Go to Sketch -> Include Library -> Manage Libraries.
Search for EdgeNeuron.
Click Install to add the library to your project.
Manual Installation
Download the latest release from the GitHub repository.
Extract the contents into the libraries folder inside your Arduino sketchbook directory.
Getting Started
This section provides a step-by-step guide to using the EdgeNeuron library for running machine learning models on an Arduino-compatible board.
1. Model Conversion
Before using EdgeNeuron, you must convert your TensorFlow model into the TensorFlow Lite format (.tflite). This can be done with the TensorFlow Lite Converter in TensorFlow's Python API.
Once you have your model.tflite file, you can convert it to a static C byte array for embedding into the Arduino project using xxd or an online tool.
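For example, running xxd -i model.tflite > model_data.h produces a C array similar to the one below. The byte values shown here are illustrative placeholders, not a real model; xxd derives the array name from the input file name.

```cpp
// model_data.h -- generated with: xxd -i model.tflite > model_data.h
// The byte values below are illustrative placeholders, not a real model.
const unsigned char model_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 'T', 'F', 'L', '3',  // "TFL3" FlatBuffer identifier
  /* ... remaining model bytes ... */
};
const unsigned int model_tflite_len = sizeof(model_tflite);
```

This header is then included from the sketch so the model lives in flash alongside the program.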
2. Basic Example
Below is a simple example that uses EdgeNeuron to perform inference with a TensorFlow Lite Micro model that predicts sine wave values:
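The following is a minimal sketch of what such a sine-prediction program might look like, using the functions documented in the API Reference. The model header name, arena size (2 KB), and single-input/single-output tensor layout are assumptions and may need adjusting for your model.

```cpp
// Sketch: predict sin(x) with EdgeNeuron (names and sizes are assumptions).
#include <EdgeNeuron.h>
#include "model_data.h"  // hypothetical header holding model_tflite[]

// Tensor arena: working memory for the interpreter. 2 KB is a guess;
// increase it if initializeModel() fails.
constexpr int kArenaSize = 2 * 1024;
byte tensorArena[kArenaSize];

void setup() {
  Serial.begin(9600);
  if (!initializeModel(model_tflite, tensorArena, kArenaSize)) {
    Serial.println("Model initialization failed!");
    while (true) {}  // halt
  }
}

void loop() {
  // Sweep x over [0, 2*pi) and print the model's prediction for sin(x).
  for (float x = 0.0f; x < 6.28f; x += 0.1f) {
    setModelInput(x, 0);          // single input at index 0
    if (!runModelInference()) {
      Serial.println("Inference failed!");
      return;
    }
    float y = getModelOutput(0);  // single output at index 0
    Serial.print(x);
    Serial.print(" -> ");
    Serial.println(y);
    delay(100);
  }
}
```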
API Reference
1. bool initializeModel(const unsigned char* model, byte* tensorArena, int tensorArenaSize)
This function initializes the TensorFlow Lite model and prepares the interpreter for inference.
Parameters:
model: Pointer to the model data in memory (typically a const unsigned char*).
tensorArena: Pointer to the memory buffer that will be used to store tensors.
tensorArenaSize: The size of the tensor memory buffer.
Returns:
true if the initialization was successful, otherwise false.
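As a sketch of typical usage (the model_tflite name and the 2 KB arena size are assumptions; actual memory requirements depend on your model):

```cpp
// Hypothetical setup: reserve a tensor arena and initialize the model.
// model_tflite would come from your converted model header.
byte tensorArena[2048];  // arena size is a guess; enlarge it if init fails

void setup() {
  Serial.begin(9600);
  if (!initializeModel(model_tflite, tensorArena, sizeof(tensorArena))) {
    Serial.println("initializeModel() failed -- try a larger tensor arena");
  }
}
```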
2. bool setModelInput(float inputValue, int index)
Sets the input value at the specified index in the input tensor.
Parameters:
inputValue: The value to set in the input tensor.
index: The index of the input tensor where the value should be placed.
Returns:
true if the input was successfully set, otherwise false.
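For models with multi-element input tensors, values are written one index at a time. A sketch (the three-element input and its values are illustrative):

```cpp
// Copy a small buffer of sensor readings into the input tensor,
// one value per index.
float readings[3] = { 0.12f, -0.98f, 0.05f };  // illustrative values

for (int i = 0; i < 3; i++) {
  if (!setModelInput(readings[i], i)) {
    Serial.println("Failed to set input tensor value");
    break;
  }
}
```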
3. bool runModelInference()
Runs the inference cycle of the model.
Returns:
true if the inference was successful, otherwise false.
4. float getModelOutput(int index)
Retrieves the output value from the specified index of the output tensor.
Parameters:
index: The index of the output tensor to retrieve the value from.
Returns:
The output value, or -1 if an error occurred.
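Putting the two calls together, one inference cycle might look like the following sketch (the two-class output layout is an assumption for illustration):

```cpp
// Run one inference, then read both output scores.
if (runModelInference()) {
  float score0 = getModelOutput(0);
  float score1 = getModelOutput(1);
  Serial.println(score0 > score1 ? "class 0" : "class 1");
} else {
  Serial.println("Inference failed");
}
```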
5. void cleanupModel()
Optionally cleans up the interpreter to free memory after inference.
Example Use Cases
1. Gesture Recognition
Using an IMU sensor like MPU6050 to recognize hand gestures. The sensor data (acceleration and gyroscope) is fed into a pre-trained model to classify gestures such as "punch" or "flex."
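As a rough sketch of that data flow (the readAccel() helper, the window length, and the output layout are hypothetical placeholders, not part of EdgeNeuron):

```cpp
// Collect a window of accelerometer samples, feed it to the model,
// and report the recognized gesture. readAccel() is a placeholder
// for whatever IMU driver you use (e.g. an MPU6050 library).
const int kSamples = 40;  // illustrative window length
const int kAxes = 3;      // x, y, z acceleration

void classifyGesture() {
  for (int s = 0; s < kSamples; s++) {
    float axis[kAxes];
    readAccel(axis);  // hypothetical: fills x, y, z readings
    for (int a = 0; a < kAxes; a++) {
      setModelInput(axis[a], s * kAxes + a);
    }
  }
  if (runModelInference()) {
    // Output index 0 = "punch", 1 = "flex" (assumed model layout).
    Serial.println(getModelOutput(0) > getModelOutput(1) ? "punch" : "flex");
  }
}
```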
2. Voice Command Recognition
You can run a simple audio classification model on the microcontroller to recognize voice commands, such as "yes" or "no," by converting raw audio signals into features that are input into the model.
Known Limitations
Limited Memory: EdgeNeuron operates on devices with limited RAM, which restricts the complexity of models you can deploy.
Inference Speed: The performance of the model inference can be slow depending on the complexity of the model and the microcontroller's processing power.
Limited Model Support: Only TensorFlow Lite Micro models are supported. Complex models like RNNs and CNNs with many layers may require optimization or pruning to fit within microcontroller memory constraints.
Conclusion
EdgeNeuron is a powerful, easy-to-use library for integrating TensorFlow Lite Micro models into Arduino-based projects. It simplifies running machine learning inferences on microcontrollers, making it suitable for applications such as gesture recognition, speech recognition, and simple sensor-based predictions.
For advanced use cases, users may need to delve into TensorFlow Lite Micro documentation to understand the limitations of running deep learning models on constrained devices.
For more examples and detailed guides, visit the GitHub repository.