As part of its three-part series on the future of human-computer interaction (HCI), Facebook Reality Labs recently published a blog post describing a wrist-based wearable device that uses electromyography (EMG) to translate the electrical motor nerve signals traveling through the wrist to the hand into digital commands for controlling a device. Initially, the EMG wristband will provide a “click” — the equivalent of tapping a button — and will eventually progress to richer controls suited to Augmented Reality (AR) settings. For example, in an AR application, users will be able to touch and move virtual user interfaces and objects, and control virtual objects at a distance like a superhero. The wristband may also leverage haptic feedback to approximate certain sensations, such as pulling back the string of a bow to shoot an arrow in an AR environment.
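To make the decoding idea concrete, here is a toy sketch of how a stream of muscle-activity samples might be turned into discrete “click” events. The rectify-smooth-threshold pipeline, the sample values, and all parameters below are invented for illustration; Facebook Reality Labs has not published its actual decoding algorithm, which reportedly relies on machine learning rather than a fixed threshold.

```python
# Illustrative sketch only: a toy pipeline turning a simulated EMG signal
# into "click" events via rectification, smoothing, and thresholding.
# All values and parameters are hypothetical, not FRL's actual method.

def detect_clicks(samples, window=3, threshold=0.5):
    """Return sample indices where the smoothed EMG envelope first
    crosses the threshold (rising edge); each crossing is one click."""
    rectified = [abs(s) for s in samples]              # full-wave rectification
    envelope = [                                       # moving-average smoothing
        sum(rectified[max(0, i - window + 1): i + 1]) / min(window, i + 1)
        for i in range(len(rectified))
    ]
    clicks, above = [], False
    for i, value in enumerate(envelope):
        if value >= threshold and not above:           # rising edge -> click
            clicks.append(i)
        above = value >= threshold
    return clicks

# A quiet baseline with one burst of simulated muscle activity:
signal = [0.05, -0.04, 0.06, 0.9, -0.8, 0.85, 0.07, -0.05, 0.04]
print(detect_clicks(signal))  # one click detected during the burst
```

A real system would operate on multi-channel sensor data and learned models, but the shape of the problem is the same: continuous physiological signals in, discrete input events out.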
One general promise of neural interfaces is to give humans more control over machines. When coupled with AR glasses and a dynamic artificial intelligence system that learns to interpret input, the EMG wristband has the potential to become part of a solution that brings users to the center of an AR experience and frees them from the confines of more traditional input devices like a mouse, keyboard, and screen. The research team further identifies privacy, security, and safety as fundamental research questions, arguing that HCI researchers “must ask how we can help people make informed decisions about their AR interaction experience,” i.e., “how do we enable people to create meaningful boundaries between themselves and their devices?”
For those of you wondering, the research team did confirm that the EMG interface is not “akin to mind reading”:
“Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and you choose to act on only some of them. When that happens, your brain sends signals to your hands and fingers telling them to move in specific ways in order to perform actions like typing and swiping. This is about decoding those signals at the wrist — the actions you’ve already decided to perform — and translating them into digital commands for your device. It’s a much faster way to act on the instructions that you already send to your device when you tap to select a song on your phone, click a mouse, or type on a keyboard today.”