The rise of machine interactions
We are living in a world where Human-Machine Interactions (HMIs) are of crucial importance in our everyday lives. As a matter of fact, the growing complexity of machine tasks and the need for flexibility have driven the community to develop multi-modal interfaces that facilitate open-ended dialogue between humans and machines. Interfaces have evolved rapidly from mouse and keyboard to natural language processing (NLP), catering more and more to the needs of humans as technology users.
Thus, since the transmission of a message between two humans relies on non-verbal communication too (i.e. posture, gestures and facial expressions), it should not be surprising that a large part of the scientific community is nowadays busy teaching machines how to identify and interpret human body language.
Gesture Recognition, intended as the set of methodologies meant to interpret facial movements and body gestures, has the final goal of creating a richer communication bridge between machines and humans, removing mechanical barriers and allowing a more natural (human-to-human-like) interaction.
Among all the applications, most of which rely on image- or video-based recognition, a wide variety of tools and environmental conditions can define an interface.
Of course, depending on the application, a specific type of interface may be more convenient. However, most users would agree that a wearable, wireless approach suits the most challenging applications in terms of system flexibility and usability. Unfortunately, image- or video-based systems relying on cameras are not made to meet such requirements. Systems based on magnetic or inertial tracking technology, like wired gloves (also called datagloves or cyber gloves), lend themselves well to such an approach instead. They are handy and precise, and the most expensive ones can even provide haptic feedback. Still, one might find them bulky for daily use. And that is where surface Electromyography-based (sEMG-based) Gesture Recognition comes into the picture.
sEMG-based interfaces are non-invasive systems relying on a number of EMG sensors capable of detecting the electrical activity of the muscles they are placed on. Once a gesture is performed, an algorithm classifies the sensor signals, and the resulting gesture label is used as an input to a machine.
These systems are considered the best trade-off between reliability, flexibility and usability.
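As an illustration, the detect-then-classify pipeline described above can be sketched as follows. This is a minimal, hypothetical example: the window sizes, the choice of time-domain features and the classifier interface are assumptions for the sketch, not the design of any specific system.

```python
import numpy as np

def extract_features(window):
    """Compute three common time-domain sEMG features for one analysis window."""
    mav = np.mean(np.abs(window))           # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))     # root mean square
    wl = np.sum(np.abs(np.diff(window)))    # waveform length
    return np.array([mav, rms, wl])

def classify_stream(samples, model, window_size=200, step=100):
    """Slide a window over the raw signal and classify each segment.

    `model` is any object with a scikit-learn-style predict() method.
    """
    gestures = []
    for start in range(0, len(samples) - window_size + 1, step):
        feats = extract_features(samples[start:start + window_size])
        gestures.append(model.predict(feats.reshape(1, -1))[0])
    return gestures
```

In a real system the per-window prediction would then be forwarded to the target machine as a command.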
It is not a coincidence that this is the technique of choice when it comes to Hand Gesture Recognition (HGR) for the control of upper-limb prostheses.
The electronics are compact, the sEMG sensors are thin enough to be embedded in the prosthesis socket, and the environmental constraints are minimal. Last but not least, a classification of hand gestures is possible even in cases of partial amputation, as long as there are working muscles to place the sensors on.
More recently, Hand Gesture Recognition gained popularity thanks to the launch of the Myo armband from Thalmic Labs.
Their lateral, visionary thinking led to multi-purpose approaches to this methodology.
Nevertheless, while promising, Hand Gesture Recognition does not address the problem when hands-free machine control is required.
Think of full upper-limb amputees who want to control complex prosthetic arms, or simply of someone working in a collaborative work cell who wants to control a cooperative robot while keeping their hands busy.
Foot Gesture Recognition as a solution?
One of the possible solutions for hands-free control is Foot Gesture Recognition (FGR), albeit the least known, and least used, branch of Human-Machine Interaction. What, then, has prevented the spread of Foot Gesture Recognition as a methodology for hands-free, general-purpose Human-Machine Interaction?
During my stay at ISR (Institute of Systems and Robotics, Coimbra, Portugal), I had the opportunity to investigate the Foot Gesture Recognition field in depth and to identify what I consider the main challenges.
- The first challenge is related to system wearability and long-term usability. Even though based on sEMG technology, previous attempts to build Foot Gesture Recognition systems failed to provide comfortable solutions.
What I have learned is that finding a compromise among different aspects can be a solution.
System dimensions are of course linked to the number of EMG sensors and, in turn, this affects the classification rate, the number of discernible gestures and the response time.
As a rule of thumb, considering an equal number of gestures, the higher the number of sensors, the higher the classification rate, but also the longer the calibration and classification response times.
- The second challenge is linked to gesture discernibility. Everyday lower-limb movements (e.g. walking, jumping, climbing stairs) introduce high variance into the signals, which is difficult to interpret and prone to being misclassified as one of the defined gestures. This kind of misclassification becomes a safety problem when it comes to using this technology for critical applications.
An innovative alternative
The Institute of Systems and Robotics recently proposed its own solution to overcome the challenges described above: the ISR band. The system uses a Support Vector Machine (SVM) to classify five foot gestures with an accuracy higher than 90%.
It consists of only two surface EMG sensors connected by an elastic band and, thus, can be comfortably worn under the trousers all day long. Furthermore, gesture discernibility is addressed by using one of the five gestures as a lock command, in combination with an ad-hoc adaptation of the SVM kernel thresholds.
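To make the idea of SVM classification with a confidence threshold concrete, here is a minimal sketch, not the authors' implementation: it trains a scikit-learn SVM on synthetic stand-in data (five gestures, six features, as if three time-domain features had been extracted per sEMG channel) and rejects predictions whose probability falls below a threshold. All data and parameter values here are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in data: 5 gestures, 40 feature vectors each, 6 features
# (as if 3 time-domain features were extracted from each of 2 sEMG channels).
X = np.vstack([rng.normal(loc=3.0 * g, scale=0.2, size=(40, 6)) for g in range(5)])
y = np.repeat(np.arange(5), 40)

# probability=True enables predict_proba (Platt scaling) for thresholding.
clf = SVC(kernel="rbf", probability=True).fit(X, y)

def predict_with_rejection(clf, features, threshold=0.6):
    """Return the predicted gesture index, or None if the classifier is unsure."""
    probs = clf.predict_proba(features.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None
```

Rejecting low-confidence predictions is one simple way to reduce the risk of stray movements being mapped to a command; the article's kernel-threshold adaptation is a more tailored mechanism with the same intent.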
ISR band for Foot Gesture Recognition
The applied methodology is explained in depth in a recent article from ISR published in the IEEE Sensors Journal. With the help of three volunteers, and by means of specific tests, the authors demonstrate good inter-session independence and total disturbance rejection.
They report that after a short calibration, less than 10 minutes including database population and training, even beginner users demonstrate fast adaptation and a high level of confidence in using the system, suggesting that the ISR band is an intuitive interface.
Only one initial calibration is needed per user. As long as the user does not change, re-positioning the sensors on the same spots on the legs is enough to take advantage of a previously saved calibration. Moreover, the experimental results show that this innovative technology provides the system with total protection against unwanted gesture recognition. The user is free to move while the system is locked, and this broadens the usage range towards hands-free control applications with higher levels of criticality.
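The lock-command behaviour described above can be sketched as a tiny state machine: one reserved gesture toggles between a locked and an active state, and all other recognised gestures are forwarded only while the system is active. This is a hypothetical illustration; the gesture name is invented and the real system's logic may differ.

```python
# Hypothetical name for the reserved lock/unlock gesture.
LOCK_GESTURE = "lock_toggle"

class GestureGate:
    """Forward recognised gestures only while the system is unlocked."""

    def __init__(self):
        self.locked = True  # start locked, so stray movements do nothing

    def process(self, gesture):
        """Return the gesture to forward to the machine, or None."""
        if gesture == LOCK_GESTURE:
            self.locked = not self.locked  # toggle lock state, forward nothing
            return None
        return None if self.locked else gesture
```

While locked, every classification result is discarded, which is what allows the user to walk, jump or climb stairs without triggering commands.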
How to wear the ISR band
This ISR investigation is one of the early studies on sEMG Foot Gesture Recognition showing that it is possible to safely adopt a completely hands-free approach. Several aspects of this work may certainly be improved further, starting, for instance, from the hardware: even though optimized, the setup can still be miniaturized. In fact, the ISR Lab has already announced its intention to use recently developed ultra-thin stretchable patches, which can be stuck directly to the skin. However, the adaptations will not end there. The learning and classification methods will be updated with newer, more efficient deep learning techniques, benefiting both the gesture recognition rate and the response time.
Without a doubt, this work has already paved the way for Foot Gesture Recognition to be conceived as a real alternative in a world where, more and more, people and machines need to collaborate in an integrated manner within flexible environments.