New algorithm improves the accuracy and reduces the complexity of optical hand gesture recognition



In the 2002 hit sci-fi movie Minority Report, Tom Cruise's character John Anderton uses his hands, sheathed in special gloves, to interface with his transparent, wall-sized computer screen. The computer recognizes his gestures to magnify, zoom, and scan. Although this futuristic vision of human-computer interaction is now 20 years old, people today still interface with computers using a mouse, keyboard, remote control, or small touch screen. However, researchers have devoted much effort to unlocking more natural forms of communication that do not require contact between the user and the device. Voice commands are a vivid example that has found its way into modern smartphones and virtual assistants, letting us interact with and control devices through speech.

Another important mode of human communication that could be adopted for human-machine interactions is hand gestures. Recent advances in camera systems, image analysis, and machine learning have made optical gesture recognition a more attractive option in most settings than approaches relying on wearable sensors or data gloves, like those used by Anderton in Minority Report. However, current methods are hampered by various limitations, including high computational complexity, low speed, poor accuracy, or a small number of recognizable gestures. To tackle these issues, a team led by Zhiyi Yu of Sun Yat-sen University in China recently developed a new hand gesture recognition algorithm that strikes a good balance between complexity, accuracy, and applicability. As detailed in their article, published in the Journal of Electronic Imaging, the team adopted innovative strategies to overcome key challenges and realize an algorithm that can be readily applied to consumer devices.

One of the main features of the algorithm is its adaptability to different types of hands. The algorithm first tries to classify the user's hand type as thin, normal, or wide based on three measurements that account for the relationships between palm width, palm length, and finger length. If this classification succeeds, subsequent steps in the recognition process only compare the input gesture with stored samples of the same hand type. "Traditional simple algorithms tend to suffer from low recognition rates because they cannot cope with different hand types. By first classifying the input gesture by hand type and then using sample libraries that match this type, we can improve the overall recognition rate with almost negligible resource consumption," explains Yu.
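The hand-type classification step could be sketched roughly as follows. This is a minimal illustration only: the article does not give the actual measurements or thresholds, so the ratios, the scoring formula, and the cutoff values below are all invented for the example.

```python
def classify_hand_type(palm_width, palm_length, finger_length):
    """Classify a hand as 'thin', 'normal', or 'wide' from three shape
    ratios derived from palm width, palm length, and finger length.
    The ratios and thresholds here are illustrative assumptions, not
    the values used in the paper."""
    aspect = palm_width / palm_length            # wider palms -> larger value
    finger_ratio = finger_length / palm_length   # longer fingers -> larger value
    span = palm_width / (palm_length + finger_length)

    # Toy combined score: wide hands score high, thin hands score low.
    score = aspect + span - 0.3 * finger_ratio
    if score < 0.95:
        return "thin"
    elif score < 1.15:
        return "normal"
    return "wide"
```

With a classifier like this in front, the recognizer only needs to search the sample library built from hands of the matching type, which is what keeps the extra cost of this step negligible.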

Another key aspect of the team's method is the use of a "shortcut feature" to perform a pre-recognition step. While the recognition algorithm can identify an input gesture out of nine possible gestures, comparing all the characteristics of the input gesture with those of the stored samples for all possible gestures would be very time-consuming. To solve this problem, the pre-recognition step calculates a ratio of the area of the hand to select the three most likely gestures out of the nine possible ones. This simple feature is enough to reduce the number of candidate gestures to three, among which the final gesture is decided using a much more complex and high-precision feature extraction based on "Hu invariant moments." Yu says, "The gesture pre-recognition step not only reduces the number of calculations and hardware resources required, but also improves recognition speed without compromising accuracy."
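The two-stage idea described above can be sketched in a few lines. In this toy version, the gesture names, the stored area ratios, and the feature vectors are all made up, and a plain Euclidean distance stands in for the paper's Hu-invariant-moment matching; only the structure (cheap shortlist, then expensive comparison on three candidates) reflects the described method.

```python
import math

# Hypothetical sample library: gesture name -> (hand area ratio,
# precomputed feature vector). All values are invented for illustration.
SAMPLES = {
    "fist":  (0.55, [0.9, 0.1, 0.0]),
    "palm":  (0.85, [0.2, 0.8, 0.3]),
    "point": (0.60, [0.5, 0.4, 0.7]),
    "peace": (0.65, [0.4, 0.6, 0.6]),
}

def recognize(area_ratio, features, samples, shortlist=3):
    """Two-stage recognition: a cheap area-ratio comparison narrows the
    field to `shortlist` candidates, then a more expensive feature
    comparison (a stand-in for Hu-moment matching) picks the winner."""
    # Stage 1: pre-recognition -- keep the gestures whose stored area
    # ratio is closest to the input's.
    candidates = sorted(
        samples, key=lambda g: abs(samples[g][0] - area_ratio)
    )[:shortlist]
    # Stage 2: full comparison over the shortlisted candidates only.
    return min(candidates,
               key=lambda g: math.dist(features, samples[g][1]))
```

Because stage 2 only ever runs on three candidates instead of all nine gestures, the expensive feature comparison contributes a fixed, small cost per frame, which is the speed-up Yu describes.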

The team tested their algorithm both on a commercial PC processor and on an FPGA platform using a USB camera. They had 40 volunteers perform the nine hand gestures multiple times to build the sample library, and another 40 volunteers to determine the accuracy of the system. Overall, the results showed that the proposed approach could recognize hand gestures in real time with an accuracy exceeding 93%, even if the input gesture images were rotated, translated, or scaled. According to the researchers, future work will focus on improving the algorithm's performance under poor lighting conditions and increasing the number of possible gestures.

Gesture recognition has many promising areas of application and could pave the way for new ways to control electronic devices. A revolution in human-machine interaction may be just around the corner!

Source of the story:

Materials provided by SPIE – International Society for Optics and Photonics. Note: Content may be edited for style and length.

