Things used in this project:

Hardware:
- Wio Terminal (Seeed Studio) x 1
- uKit Explore (UBTECH Robotics) x 1

Software:
- Edge Impulse Studio
- Arduino IDE (Arduino)
Gesture Recognition on Microcontroller with TinyML
Alright, let's get started with our Gesture Recognition on Microcontroller with TinyML project. Let's go! Using gesture recognition with the built-in light sensor, the Wio Terminal will be able to recognize the rock, paper, and scissors gestures and display the corresponding image on the screen.
This project is inspired by the classic game, and I wanted to find out how well a device such as the Wio Terminal from Seeed Studio can perform edge classification using TinyML technology powered by Edge Impulse.
In this detailed tutorial, we will cover the following:
- What's TinyML?
- Why TinyML Is Such A Big Deal?
- Create and select models
- Data Acquisition (rock, paper, scissors)
- Training and Deployment
- Programming & Model Usage
This model uses the Wio Terminal to recognize the hand gestures rock, paper, and scissors with the built-in light sensor. This is quite hard to accomplish with rule-based programming because gestures are not performed the same way every time. Solving this problem with traditional programming would require hundreds of different rules for each mode of action. Even in an idealized situation, we would still miss many variations in speed, angle, or direction; a slight change in any of these factors requires a new set of rules, whereas machine learning handles such variations very easily. A well-known approach is to combine a camera sensor with machine learning to recognize gestures. Using a light sensor is like using only a single pixel of that camera, which is a completely different level of challenge. Let's meet the challenge with this project.
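To make that concrete, here is a minimal sketch (not part of the Codecraft-generated code) that streams the Wio Terminal's built-in light sensor over serial, using the WIO_LIGHT pin definition from the Wio Terminal Arduino core. The sampling rate and window length are illustrative assumptions, not the values Codecraft uses internally; the point is that a gesture is just a short, one-dimensional time series of light readings.

```cpp
// Stream the Wio Terminal's built-in light sensor as a time series.
// This one-dimensional signal is all the TinyML model gets to classify
// rock, paper, and scissors.
#include <Arduino.h>

const int SAMPLE_INTERVAL_MS = 10;  // ~100 Hz sampling, illustrative value
const int WINDOW_SAMPLES = 100;     // ~1 s window per gesture, illustrative

void setup() {
  Serial.begin(115200);
  pinMode(WIO_LIGHT, INPUT);  // built-in light sensor pin (Wio Terminal core)
}

void loop() {
  // One gesture window: waving a hand over the sensor produces a
  // characteristic dip-and-rise pattern in these readings.
  for (int i = 0; i < WINDOW_SAMPLES; i++) {
    Serial.println(analogRead(WIO_LIGHT));
    delay(SAMPLE_INTERVAL_MS);
  }
  Serial.println("---");  // window boundary marker
}
```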
Project Video
What's TinyML?
TinyML is a type of machine learning that shrinks deep learning networks to fit on tiny hardware. It brings together Artificial Intelligence and intelligent devices.
It is 45x18mm of Artificial Intelligence in your pocket. Suddenly, the do-it-yourself weekend project on your Arduino board has a miniature machine learning model embedded in it. Ultra-low-power embedded devices are invading our world, and with new embedded machine learning frameworks, they will further enable the proliferation of AI-powered IoT devices. Source
Why TinyML Is Such A Big Deal
While machine-learning (ML) development activity most visibly focuses on high-power solutions in the cloud or medium-powered solutions at the edge, there is another collection of activity aimed at implementing machine learning on severely resource-constrained systems.
Known as TinyML, it’s both a concept and an organization — and it has acquired significant momentum over the last year or two.
“TinyML deployments are powering a huge growth in ML deployment, greatly accelerating the use of ML in all manner of devices and making those devices better, smarter, and more responsive to human interaction,” said Steve Roddy, vice president of product marketing for Arm‘s Machine Learning Group.
Source
So, TinyML is the trend and a BIG OPPORTUNITY right now. Many of you probably think it's very complicated to do projects that involve TinyML. Well, thanks to tools such as Codecraft by Seeed Studio, you can now get started easily: by spending literally just one hour with this article, you will successfully deploy your first TinyML project! Don't believe me? Try it out and you will be surprised!
Seeed Studio's graphical programming platform, Codecraft, makes it easy for everyone to get started creating TinyML projects! The TinyML engine is powered by Edge Impulse! Even better, it automatically converts the block-based code into text-based code (C++) so you can expand on your project ideas. It's incredibly awesome! You should definitely check it out!
Block-based coding
Text-based coding
Step 1: Create and select models
Go to https://ide.tinkergen.com/. Select "(for TinyML) Wio Terminal".
1.1 Create the "Gesture Recognition (Built-in Light Sensor)" model
Click on "Model Creation" on the embedded machine learning box on the middle left. Then select the "Gesture Recognition (Built-in Light Sensor)" as shown below.
Enter the name for the model according to the requirements.
Click Ok and the window will automatically switch to the "Data Acquisition" interface.
Step 2: Data Acquisition (on-board)
2.1 Default Labels
There are 3 default labels (rock, paper, scissors) that are automatically created for you. You can use them as-is, unless you want different names for your labels or want to add extra labels, such as an "idle" label for when no gesture is presented.
Default labels
2.2 Collecting data from custom labels and Data Acquisition Program Modification
Sampling data for custom labels is similar to the steps for capturing default labels.
Add or modify the labels.
Upload the data acquisition program.
Collect data.
2.2.1 Adding labels
In the label screen, click the " + " sign as shown below:
Enter the label name and click "OK". In this case, I am going to add a label named "idle" to represent the result of no gesture.
The new label "idle" is added to the labels bar after successful addition.
"idle" label is added
I have modified the default data acquisition program to also collect data for the "idle" label when the 5-way switch is pressed up! A sketch of the idea is shown below.
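For reference, here is a hedged sketch of what that modification looks like. It assumes the default program maps buttons A, B, and C to the three default labels (as the button-location figure suggests), and collectWindow() is a hypothetical stand-in for Codecraft's actual sampling-and-upload routine.

```cpp
// Illustrative button mapping for the modified data acquisition program.
// The real Codecraft-generated code differs; collectWindow() is a
// hypothetical placeholder for its sampling-and-upload routine.
#include <Arduino.h>

void collectWindow(const char* label) {
  // Placeholder: the generated code samples the light sensor for one
  // window here and tags the captured data with `label`.
  Serial.print("Collecting: ");
  Serial.println(label);
  delay(500);  // crude debounce between captures
}

void setup() {
  Serial.begin(115200);
  pinMode(WIO_KEY_A, INPUT_PULLUP);  // Wio Terminal buttons are active LOW
  pinMode(WIO_KEY_B, INPUT_PULLUP);
  pinMode(WIO_KEY_C, INPUT_PULLUP);
  pinMode(WIO_5S_UP, INPUT_PULLUP);  // 5-way switch pressed up
}

void loop() {
  if (digitalRead(WIO_KEY_A) == LOW) collectWindow("rock");
  if (digitalRead(WIO_KEY_B) == LOW) collectWindow("paper");
  if (digitalRead(WIO_KEY_C) == LOW) collectWindow("scissors");
  if (digitalRead(WIO_5S_UP) == LOW) collectWindow("idle");  // my added label
}
```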
2.3 Connect Wio Terminal and Upload Data Acquisition Program
Connect the Wio Terminal to your laptop using the USB-C cable. Click the Upload button; this uploads the default data acquisition program. Typically it takes around 10 seconds. Once the upload succeeds, a pop-up window will appear indicating "Upload successfully".
Click “Roger” to close the window and return to the data acquisition interface.
Note: You need to download the "Codecraft Assistant" to be able to connect and upload code from the Codecraft online IDE.
Caution: For the web version of Codecraft, if you don't install or run the Device Assistant, you may get the message shown in the image below saying that you haven't opened the Device Assistant yet. In that case, check this page for further information: Download, Installation and "Device Assistant" Usage.
Prompt shown if the Codecraft Device Assistant is not downloaded/running
2.4 Data Acquisition
In the upper-right hyperlink, you will find a step-by-step introduction to data acquisition.
Follow the instructions to collect data.
Pay attention to the following:
Wio Terminal button locations (A, B, C, and the 5-way switch)
The animated GIF has been sped up; the actual action can be slightly slower.
Please note the red tips.
Point the cursor over the description texts for more detailed content.
Step by step data collection steps for rock, paper and scissors
Position of 5-way switch on Wio Terminal
The Wio Terminal will display the following information during the data collection process.
Start and end collecting data according to the Wio Terminal screen:
Data is being collected
Indicates the data is being collected
Data collection is completed
Indicates the Data collection is completed
Now, the data acquisition step is completed.
Step 3: Training and Deployment
Click on “Training & Deployment”, and you will be seeing the model training interface as shown below.
Waveforms of the rock, paper, scissors, and idle raw data are shown below for reference (they can be viewed from the "Sample data" tab).
rock
paper
scissors
idle
3.1 Select neural network and parameters
Select a suitable neural network size: small, medium, or large.
Set the parameters:
- number of training cycles (a positive integer)
- learning rate (a number from 0 to 1)
- minimum confidence rating (a number from 0 to 1)
The interface provides default parameter values.
In this case we are using medium. It will take quite a long time, so be patient!
Neural network parameters optimization
3.2 Start training the model
Click "Start training". When you do, the window will display "Loading..". Wait for the training to finish!
Model training in progress
The duration of "Loading.." varies with the size of the selected neural network (small, medium, or large) and the number of training cycles: the larger the network and the more training cycles, the longer it takes.
You can also estimate the remaining time by watching the "Log". In the figure below, "Epoch: 68/500" indicates that training is on round 68 out of a total of 500 rounds.
After loading, you will see "TrainModel Job Completed" in the "Log", and the "Model Training Report" tab will appear on the interface.
3.3 Observe the model performance to select the ideal model
In the "Model Training Report" window, you can observe the training results, including the accuracy, loss, and performance of the model.
If the training results are not satisfactory, go back to the first step of training: select another neural network size or adjust the parameters, and train again until you get a model with satisfactory results. If changing the configuration does not help, you may want to go back and collect the data again.
Model Training Report (for my case, not bad huh?)
3.4 Deploy the ideal model
In the "Model Training Report" window, click on "Model Deployment".
Once the deployment is completed, click "Ok" to go to the "Programming" window, which is the last step before we deploy the model to the Wio Terminal.
Step 4. Programming & Model Usage
4.1 Write the program for using the model
In the “Programming” interface, click on “Use Model” to use the deployed model.
I have created the sample program below to display the rock, paper, or scissors image on the Wio Terminal screen when the prediction result is rock, paper, or scissors, respectively!
Sample code
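If you prefer the text-based view, the sketch below captures the same idea. It assumes images prepared per Seeed's image-loading guide are on the SD card under the file names listed below; classifyGesture() is a hypothetical stand-in for the prediction block Codecraft generates, and RawImage.h is the helper header from that Seeed guide, copied into the sketch folder.

```cpp
// Sketch of the program logic: run the deployed model, then draw the
// matching image on the Wio Terminal screen.
#include <TFT_eSPI.h>       // Wio Terminal display driver (Seeed fork)
#include <Seeed_FS.h>
#include "SD/Seeed_SD.h"
#include "RawImage.h"       // helper from Seeed's image-loading guide

TFT_eSPI tft;

String classifyGesture() {
  // Hypothetical stand-in for the "use model" prediction block that
  // Codecraft generates; it would return "rock", "paper", "scissors",
  // or "idle" based on the light-sensor window.
  return "idle";
}

void setup() {
  tft.begin();
  tft.setRotation(3);
  SD.begin(SDCARD_SS_PIN, SDCARD_SPI);  // images are stored on the SD card
}

void loop() {
  String result = classifyGesture();
  if (result == "rock") {
    drawImage<uint8_t>("rock.bmp", 0, 0);  // 8-bit raw image from SD card
  } else if (result == "paper") {
    drawImage<uint8_t>("paper.bmp", 0, 0);
  } else if (result == "scissors") {
    drawImage<uint8_t>("scissors.bmp", 0, 0);
  }
  // "idle" leaves the screen unchanged
}
```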
Check out Seeed Studio's guide on how to display custom images on the Wio Terminal:
rps.bmp
rock.bmp
paper.bmp
scissors.bmp
4.2 Upload the program to Wio Terminal
Click the “Upload” button. You will see the “Just chill while it is uploading” window.
The first upload usually takes longer, and the upload time increases with the complexity of the model.
Uploading a smaller model takes about 4 minutes or even longer (depending on the performance of your laptop).
Once the upload is done, the "Upload successfully" window will be shown.
4.3 Testing
Make a "scissors" gesture and see if the Wio Terminal's screen shows the scissors image. Try the other gestures and see whether the Wio Terminal recognizes them and shows the corresponding image on the screen.
Congratulations! You have completed your TinyML model!
Well, there is still room for improvement: the ML model can be trained to achieve higher accuracy. I challenge you to make that happen, and leave a comment below if you ever build this!
Demo Time!
Thank you!
Vincent Kok
www.facebook.com/VKElectronics