Technology that can interpret mood and visualize it using augmented reality is just the first step in a promising new area of brain science, says Nathan Semertzidis of Melbourne’s Monash University, in the third part of a series on brain-machine interfaces. Read part one here, and part two here.
In an apartment in Melbourne, Australia, Charlie* can clearly see his roommate Robbie*’s emotions. It isn’t Robbie’s facial expressions or movements that help Charlie understand – his face is obscured by augmented reality goggles. Instead, swarms of colorful patterns swirl around Robbie, moving with him and changing color and shape. Charlie can see the swarms through his own augmented reality headset: both of them are using Neo-Noumena, a brain-sensing system.
Brain-machine interfaces have recently been used as assistive devices, restoring mobility and communication to people with various forms of paralysis. They have allowed people to control robotic limbs, wheelchairs and computer keyboards with the power of thought alone.
But newer generations of brain-computer interfaces are leveraging AI pattern recognition to decode more complex brain information, such as emotions.
Neo-Noumena, a headset developed by researchers at Monash University, reads a person’s emotions by detecting electrical activity on their scalp. When users see themselves or other users through the headset’s glasses, they appear surrounded by colorful repeating patterns called fractals. These move and change with the user’s emotions, like an aura around them, allowing others to read their emotional state in real time.
When study participants took the system home to use in pairs, Monash researchers had no idea how they would use it. Participants immediately began exploring the possibilities, using the system in emotionally charged activities to see how it would change their fractal swarms, discovering new things about themselves and their emotions in the process.
The technology could one day help people with autism navigate social situations and treat brain conditions that are resistant to other forms of therapy. It might even enable a networked human consciousness.
Charlie and Robbie became better at regulating their emotions as a pair as they grew more aware of their emotional patterns and reactions over time. Interestingly, the transparency and constant availability of each user’s emotional state gave rise to unexpected social phenomena. Participants began to extrapolate information about their environment by interpreting their partner’s emotions. One pair discovered they could infer the quality of each other’s hand in a card game without saying a word.
Brain-machine interfaces capable of reading emotions are closer than some think. Some commercial products are already available for purchase, focused on meditation training, sleep tracking, productivity, and gaming. A growing open source community has also emerged, sharing tools and lessons for enthusiasts to make the technology more accessible.
But the legal and ethical risks of “mind-reading” technology must be considered before it becomes widespread. Only a handful of jurisdictions have so far passed “neurological rights” laws to protect a person’s right to keep their thoughts and emotions private.
The law already struggles to keep up with the privacy risks of current technologies, but even more egregious privacy breaches may yet become possible. The ability to read the mind as if it were speech or text is still far beyond our reach – at least without invasive surgery – but recent studies have shown that identifying information can be extracted from brain-machine interface signals.
Emerging “two-way” interfaces, which promise to alter brain activity using electrical or magnetic impulses, magnify these risks further. This research is still in its infancy, limited to producing perceptions of flashes of light, sudden limb movements and mild stimulation of large swaths of the brain’s outer layer, but it may one day be possible to send and receive targeted, precise messages, which would completely change our legal conceptions of autonomy and personhood.
Despite these risks, the potential benefits run even deeper. In the near future, devices like Neo-Noumena could help people with conditions that disrupt the interpretation of their own and others’ emotions, such as autism. Such conditions can make social interaction extremely difficult.
Likewise, technologies that “write” to the brain, such as deep brain stimulation and transcranial magnetic stimulation, already show great promise in treating severely debilitating mental health conditions such as treatment-resistant depression.
The promise of this technology hints at possibilities that even science fiction writers would struggle to imagine. With brain-machine interfaces, we may very well be able to link our minds into vast webs of deep empathy, challenging any preconceived notions of what makes us “human.”
*Names have been changed to comply with study ethics
Originally published under Creative Commons by 360info™.