Category: Environmental Controls

Accelerometer based gesture recognition for controlling a LED

AbleData does not produce, distribute or sell any of the products listed on this website, but we provide you with information on how to contact manufacturers or distributors of these products. If you are interested in purchasing a product, you can find companies who sell it below.

Accelerometer based gesture recognition for controlling a LED is designed to enable a wider range of people to use the application, such as elderly, deaf, and hard-of-hearing individuals, as well as people with limited upper- and lower-extremity strength and coordination.

This project shows how to develop a low-cost gesture recognition application based only on the 3-axis accelerometer embedded in a $30 microcontroller that also has Bluetooth built in.

The material and the information contained in this Instructable are provided by students enrolled at Software of Places (www.softwareofplaces.com) Class at PUC-Rio University. The content represented here is the student’s final project for class evaluation presented for publication by the student and is solely the responsibility of the page author. Statements made and opinions expressed are strictly those of the author and not of PUC-Rio University.

Technical Specifications: 

Step 1: Devices Worn
To prototype this project the author used a LightBlue Bean (http://legacy.punchthrough.com/bean/). It was placed into a band and worn as a bracelet on the wrist. The LightBlue Bean already has Bluetooth and a 3-axis accelerometer embedded in it; these two components, along with the band, make it possible to recognize gestures and send the information to another device. To receive the recognized gestures and control the LED, the author used an Intel Galileo microcontroller with a shield for Grove connectors.

Step 2: Machine Learning Algorithm to Recognize Gestures
The first thing to decide is the Machine Learning (ML) algorithm. The most commonly used algorithms for problems with time series data, such as audio and gesture recognition, are Hidden Markov Models (HMM) and Dynamic Time Warping (DTW). In this prototype, the processor that handles the accelerometer data and runs the ML model to classify a gesture is an ATmega328P. This processor has 32 KB of flash memory for code and 2 KB of RAM, so it would be very difficult to implement a robust ML algorithm such as HMM or DTW on the LightBlue Bean processor. Thus, the author used a J48 decision tree model built by Weka (http://www.cs.waikato.ac.nz/ml/weka/) to recognize gestures.

Step 3: Creating a Dataset
Before recording the data I chose three gestures to interact with the LED. I used the same gesture to turn the LED on and off (toggle). The gesture chosen for it is a movement resembling the Brazilian Sign Language (LIBRAS) sign for "turning on" a device (http://www.acessobrasil.org.br/libras/). The other two gestures were used to accelerate and decelerate the LED blinking. In a study by Kühnel et al. (2011), most participants moved either their arm or the iPhone downwards to reduce lighting brightness. Based on this, I chose an upward slap to accelerate and a downward slap to decelerate the LED blinking. A fourth "gesture" that needs to be trained is a non-gesture: since the application classifies time series data all the time, the ML model must know when the measurements from the accelerometer do not indicate a valid gesture.

To record the dataset I developed two applications. One was written in C and runs on the LightBlue Bean, capturing the accelerometer data while the gesture is being executed. The other runs in Processing; it receives the gesture data via the serial port and writes it to a text file.
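For illustration, here is a minimal sketch of the Bean-side recording loop, assuming the LightBlue Bean Arduino library's Bean.getAcceleration() and Bean.setLed() calls; the sample count and sampling rate are arbitrary choices for this sketch, not the author's exact values:

    // Records one gesture per capture window and streams it over the Bean's
    // virtual serial port (Bluetooth). A sketch, not the author's exact code.
    void setup() {
      Serial.begin(57600);               // Bean virtual serial over Bluetooth
    }

    void loop() {
      Bean.setLed(255, 0, 0);            // red LED on: perform the gesture now
      for (int i = 0; i < 50; i++) {     // fixed-length capture window
        AccelerationReading a = Bean.getAcceleration();
        Serial.print(a.xAxis); Serial.print(',');
        Serial.print(a.yAxis); Serial.print(',');
        Serial.println(a.zAxis);
        delay(20);                       // roughly 50 Hz sampling
      }
      Serial.println("END");             // marks the end of one recording
      Bean.setLed(0, 0, 0);              // LED off: rest between gestures
      delay(2000);
    }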

Step 4: Preprocessing Data for Decision Tree Use
Unlike the HMM and DTW algorithms, a decision tree is not designed to handle classification problems on time series data. A time series is a sequence of measured data, and each gesture produces its own time series. Each time series can have a different number of measurements, even when the data comes from the same gesture. To use Weka's decision tree in this case, we need to extract features from the time series and build a file with the .arff extension. Waleed Kadous investigated this in his PhD thesis and proposed two approaches for extracting features from a time series. I used the approach that, according to him, worked surprisingly well despite being a simple algorithm. It consists of dividing each time series (regardless of length) into a fixed number of windows. Thus, if an example is 45 samples long and there are five windows, the first window will hold the first through ninth measurements, the second window the tenth through eighteenth, and so on. We then compute statistics for each window. An R application was developed to preprocess the time series, based on the text file generated by the Processing application, and to generate the data section of the Weka .arff training file. This application calculates the mean and standard deviation of each window of each axis (x, y, and z) as the features.
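The author's preprocessing was written in R; purely to make the feature layout concrete, the same windowing idea is sketched below in C++ (the function and parameter names are my own):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Split one axis of a time series into a fixed number of windows and
    // compute (mean, standard deviation) per window, as in Kadous's approach.
    // Assumes series.size() >= numWindows so every window is non-empty.
    std::vector<double> windowFeatures(const std::vector<double>& series,
                                       std::size_t numWindows) {
        std::vector<double> features;
        for (std::size_t w = 0; w < numWindows; ++w) {
            // Window boundaries scale with the series length, so gestures of
            // different durations always yield the same number of features.
            std::size_t begin = w * series.size() / numWindows;
            std::size_t end = (w + 1) * series.size() / numWindows;
            double sum = 0.0;
            for (std::size_t i = begin; i < end; ++i) sum += series[i];
            double mean = sum / static_cast<double>(end - begin);
            double var = 0.0;
            for (std::size_t i = begin; i < end; ++i) {
                var += (series[i] - mean) * (series[i] - mean);
            }
            var /= static_cast<double>(end - begin);
            features.push_back(mean);
            features.push_back(std::sqrt(var));
        }
        return features;  // 2 * numWindows values for this axis
    }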

Step 5: Generating the Machine Learning model
For this part, Weka does almost everything. I only needed to put all the preprocessed data into an .arff file and load it into Weka. Then you select the ML algorithm (J48) and run the training with the default settings by pressing the Start button. The results appear on the right side of the application. On the left, right-click the generated model and select "Visualize tree"; translate this tree into if/else code, and you have a model to recognize gestures in your application.
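The end result of that translation looks like nested conditionals. The branch structure, feature names, and thresholds below are invented for illustration only; a real tree depends entirely on your training data:

    // Hypothetical translation of a Weka J48 tree into nested if/else.
    // Every threshold here is made up; the real ones come from Weka's
    // "Visualize tree" output for your own dataset.
    enum Gesture { NON_GESTURE, TOGGLE, SPEED_UP, SLOW_DOWN };

    Gesture classify(float meanX_w1, float stdY_w1, float meanZ_w2) {
      if (stdY_w1 <= 0.12f) {
        return NON_GESTURE;               // little movement on the y axis
      }
      if (meanZ_w2 > 0.80f) {
        return TOGGLE;                    // hypothetical LIBRAS "turn on" branch
      }
      return (meanX_w1 > 0.0f) ? SPEED_UP : SLOW_DOWN;  // slap up vs. slap down
    }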

Step 6: Developing the Gesture Recognizer
Now we develop an Arduino application that recognizes gestures using the created model and sends the gesture information. This application also needs to preprocess the accelerometer data and extract the ML model's features, then use them to classify the time series as a gesture or a non-gesture. The application does not implement continuous gesture recognition: the gesture to be recognized must occur while the red LED on the LightBlue Bean is on, the same way the gesture dataset was created. The recognized gesture information is sent via Bluetooth to a computer running a Processing application, which reads the gesture from the serial port.
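A sketch of what that recognition loop can look like on the Bean, reusing the hypothetical Gesture enum and classify() from the Step 5 sketch. The sample count, sampling period, and the simplified features passed to the tree are illustrative assumptions, not the author's values:

    const int SAMPLES = 50;
    static int16_t ax[SAMPLES], ay[SAMPLES], az[SAMPLES];

    // Mean of a buffer; stands in for the full windowed features of Step 4.
    float mean16(const int16_t* v, int n) {
      long sum = 0;
      for (int i = 0; i < n; i++) sum += v[i];
      return (float)sum / n;
    }

    void setup() {
      Serial.begin(57600);
    }

    void loop() {
      Bean.setLed(255, 0, 0);                  // gesture must happen now
      for (int i = 0; i < SAMPLES; i++) {
        AccelerationReading a = Bean.getAcceleration();
        ax[i] = a.xAxis; ay[i] = a.yAxis; az[i] = a.zAxis;
        delay(20);
      }
      Bean.setLed(0, 0, 0);

      // The real application computes the windowed mean/std features of
      // Step 4 here; this sketch passes simple means to the tree instead.
      Gesture g = classify(mean16(ax, SAMPLES), mean16(ay, SAMPLES),
                           mean16(az, SAMPLES));
      if (g != NON_GESTURE) Serial.println((int)g);  // over Bluetooth serial
      delay(1000);                             // pause before the next window
    }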

Step 7: Processing the Recognized Gesture
Another application was developed in Processing. Its goal is to read the recognized gesture and send a corresponding HTTP GET request to the Intel Galileo's local IP address. In this prototype, both the computer running the application and the Intel Galileo must be connected to the same local network.
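The author's client is a Processing sketch; to make the request itself concrete, here is a minimal HTTP GET over POSIX sockets in C++. The host, port, path, and query parameter name are placeholders, not the author's actual API:

    #include <arpa/inet.h>
    #include <cstdio>
    #include <cstring>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Sends GET /led?gesture=<n> to the Galileo (placeholder route/param).
    int sendGesture(const char* host, int port, int gesture) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, host, &addr.sin_addr);

        if (connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }

        char request[128];
        std::snprintf(request, sizeof(request),
                      "GET /led?gesture=%d HTTP/1.1\r\n"
                      "Host: %s\r\nConnection: close\r\n\r\n",
                      gesture, host);
        write(fd, request, std::strlen(request));
        close(fd);
        return 0;
    }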

Step 8: Controlling the LED
Finally, the author developed the Node JS application that runs on the Intel Galileo microcontroller and controls an LED. With a few lines of Node JS and the help of the Express framework, you can create an API that receives an HTTP GET request. Based on the request parameter received (the recognized gesture), this prototype can turn an LED on and off as well as control its blinking speed, with the help of the Cylon framework.
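The gesture-to-LED mapping behind that API is a small state machine. It is sketched here in C++ rather than the author's Node JS, with assumed gesture codes and speed steps:

    // Sketch of the control logic the request handler implements: one code
    // toggles the LED, the others adjust the blink period within limits.
    // The gesture codes and 100 ms step size are assumptions.
    struct LedController {
        bool on = false;
        long blinkMs = 500;                    // current blink period

        void apply(int gesture) {
            switch (gesture) {
                case 1: on = !on; break;                           // toggle
                case 2: if (blinkMs > 100)  blinkMs -= 100; break; // faster
                case 3: if (blinkMs < 2000) blinkMs += 100; break; // slower
                default: break;                                    // ignore
            }
        }
    };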

Author: BrunoPontes

Available

Price Check
Price: 
0.00
as of: 
12/07/2015
Additional Pricing Notes: 
Cost of supplies and materials may vary.
Seller(s): 
Accelerometer based gesture recognition for controlling a LED