Introduction

This project was done as part of the Coursera course Introduction to Embedded Machine Learning by Shawn Hymel and Alexander Fred-Ojala.

The objective of the project is to demonstrate how machine learning on embedded devices can be used to monitor motion and vibration in machines with the help of Edge Impulse. I picked a toy tower crane as the machine to be monitored and an Arduino Nano 33 BLE Sense as the development board (it comes with a built-in 3-axis accelerometer, which we use here). The steps followed in this project broadly mirror those described in the Continuous Motion Recognition tutorial.

Data Acquisition

The Arduino board is installed on the crane jib (the crane's working arm) as shown below:

The toy crane is operated by remote control. We collect data in three modes:

  • stop: with the crane still
  • motion: with the crane arm moving clockwise or counter-clockwise
  • vibrations: with the crane arm still, but vibrating

For clarity, the following video illustrates the motion state:

while the following video displays the vibrations state:

In the stop state the crane is still and the acceleration measurements are flat lines.

I collected accelerometer data across the three modes in order to create a training set and a test set. In total, the training set contained around 3 minutes of recorded data while the test set contained around 1 minute, giving a train/test split of roughly 70% / 30%. The accelerometer data was collected for all three axes and sampled at a frequency of 62.5 Hz.
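The recordings themselves went through the Edge Impulse tooling. As a rough sketch of what the acquisition side involves, the snippet below streams raw accelerometer samples over serial at 62.5 Hz in the comma-separated format that the edge-impulse-data-forwarder CLI accepts; it uses the Arduino_LSM9DS1 library that drives the IMU on the Nano 33 BLE Sense. The pacing logic is an illustrative choice, not the exact firmware used in the project:

    #include <Arduino_LSM9DS1.h>

    // 62.5 Hz sampling rate -> one sample every 16 ms (16000 us).
    const unsigned long SAMPLE_INTERVAL_US = 16000UL;

    unsigned long next_sample_us;

    void setup() {
      Serial.begin(115200);
      while (!Serial) {}
      if (!IMU.begin()) {
        Serial.println("Failed to initialise the IMU!");
        while (1) {}
      }
      next_sample_us = micros();
    }

    void loop() {
      float x, y, z;

      // Wait for the next 16 ms slot to keep a steady 62.5 Hz rate.
      while ((long)(micros() - next_sample_us) < 0) {}
      next_sample_us += SAMPLE_INTERVAL_US;

      // Block until a fresh accelerometer reading is available.
      while (!IMU.accelerationAvailable()) {}
      IMU.readAcceleration(x, y, z);

      // One comma-separated line per sample, the format the
      // edge-impulse-data-forwarder expects.
      Serial.print(x, 4); Serial.print(',');
      Serial.print(y, 4); Serial.print(',');
      Serial.println(z, 4);
    }

With the board streaming, the data forwarder detects the sampling frequency, lets you name the three axes, and uploads the samples into the Edge Impulse project under the stop, motion and vibrations labels.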

Impulse Design

The Impulse configuration used is displayed below:

It is based on features generated by the Edge Impulse Spectral Analysis component, which are then passed to the Classification (Keras) component. The results show that the performance on the training set is good overall, with the main issue being the misclassification of vibrations states as stop states.

The figure below shows the performance on the training set:

The total accuracy on the training set is around 89%, while the misclassification of vibrations as stop is close to 19%. We can examine the feature space in some detail to better understand how the three states differ from one another. The plot below displays the vibrations state along one of the features generated by the Spectral Analysis feature engineering component:

The same feature for the stop and motion states is also displayed here:
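To build some intuition for these plots: one of the per-axis statistics the Spectral Analysis block produces is the RMS of the filtered signal, which stays near zero when the crane is still and grows with vibration amplitude. The sketch below computes RMS over two made-up toy windows (zero-mean, i.e. assuming the constant gravity component has already been filtered out); the values are illustrative only:

    #include <cmath>
    #include <cstddef>
    #include <cstdio>

    // Root-mean-square of one axis over a window of samples.
    float rms(const float *samples, size_t n) {
        float sum_sq = 0.0f;
        for (size_t i = 0; i < n; i++) {
            sum_sq += samples[i] * samples[i];
        }
        return std::sqrt(sum_sq / n);
    }

    int main() {
        // Toy windows: a flat "stop" signal and a small oscillation,
        // standing in for filtered accelerometer data on one axis.
        float stop_window[8]       = {0, 0, 0, 0, 0, 0, 0, 0};
        float vibrations_window[8] = {0.10f, -0.10f, 0.12f, -0.09f,
                                      0.11f, -0.10f, 0.10f, -0.11f};
        std::printf("stop RMS:       %.3f\n", rms(stop_window, 8));
        std::printf("vibrations RMS: %.3f\n", rms(vibrations_window, 8));
        return 0;
    }

When the crane's vibrations are weak, this kind of feature ends up close to the stop cluster, which is consistent with the confusion between those two states seen above.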

Model Testing

The performance of the model on the test set mainly shows a drop in the classification of vibrations states:

But overall the performance is good, considering that we haven't spent much time optimising the model or exploring alternative feature engineering settings. As a further improvement, we could also collect better data for the vibrations states to reduce their overlap with the stop states.

Deployment

I proceeded to deploy the trained machine learning model to the Arduino Nano 33 BLE Sense. Flashing the board results in the following output:

Finally, here is a video of the machine learning model deployed on the device, performing online classification:
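For reference, the deployed sketch follows the structure of the standard Edge Impulse accelerometer example for this board: fill one window of raw samples at the impulse's interval, wrap the buffer in a signal, and call the classifier from the generated Arduino library. The header name below is a placeholder (Edge Impulse generates it per project); signal_t, ei_impulse_result_t, run_classifier and the EI_CLASSIFIER_* macros come from the Edge Impulse SDK:

    #include <Arduino_LSM9DS1.h>
    // Placeholder name: Edge Impulse generates this header per project.
    #include <crane_monitoring_inferencing.h>

    // One full window of raw x/y/z samples for the impulse.
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    void setup() {
      Serial.begin(115200);
      if (!IMU.begin()) {
        Serial.println("Failed to initialise the IMU!");
        while (1) {}
      }
    }

    void loop() {
      // Fill the buffer at the interval the impulse was trained with.
      // Units must match those used during data collection.
      for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
        unsigned long next_us = micros()
            + (unsigned long)(EI_CLASSIFIER_INTERVAL_MS * 1000);
        while (!IMU.accelerationAvailable()) {}
        IMU.readAcceleration(features[i], features[i + 1], features[i + 2]);
        while ((long)(micros() - next_us) < 0) {}
      }

      // Wrap the raw buffer in a signal and run the impulse.
      signal_t signal;
      if (numpy::signal_from_buffer(features,
                                    EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE,
                                    &signal) != 0) {
        return;
      }

      ei_impulse_result_t result = { 0 };
      if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;
      }

      // Print the score for each of the three states.
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        Serial.print(result.classification[ix].label);
        Serial.print(": ");
        Serial.println(result.classification[ix].value, 4);
      }
    }

With the sketch flashed, the serial monitor shows the scores for the stop, motion and vibrations classes updating in real time as the crane operates.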