Robotic Body Augmentation


Can the human brain handle the control of an additional robotic arm or finger added to the body?

There are a variety of reasons for increasing the physical capabilities of humans with robotic systems controlled by the brain. For example, a doctor with an additional robotic arm could perform surgery without the need for an assistant. A person with an extra robotic finger or thumb could hold and manipulate objects in new ways.

Credit: Ilona Koeleman / Alamy Stock Photo

What holds back such abilities? In the sci-fi movie Westworld (1973), guests at a futuristic theme park interact with androids. When one character asks how to tell whether a person is a robot or not, the other replies that it is the hands, which the robot designers had not yet perfected. This answer was far-sighted, as it remains a huge challenge to create robotic hands that can make skillful movements like human ones. Even a seemingly simple action like picking up a pen to write is incredibly complex and involves several cognitive and motor control processes: the desire to use a pen and the decision to pick it up; a memory or representation that indicates where the pen is; a head movement towards that location, followed by eye movements and fixation on the pen; arm movements generated to reach the pen, with motor variables such as direction and speed; and finger control with precise levels of force, at specific times, to grip the pen according to its pose and expected center of gravity. Only then can the act of writing itself begin, coordinating eye movements, reasoning and decision-making (about what to write), as well as movement planning and execution at every moment.
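To make the scale of this pipeline concrete from a robotics standpoint, the sketch below lays out those stages in schematic Python. It is only an illustrative outline under strong simplifications: every class, function and parameter here is a hypothetical placeholder, not taken from any real system, and an actual controller would run these stages concurrently and in closed loop rather than one after another.

```python
# Schematic sketch of the stages involved in picking up a pen, mirroring the
# sequence described above. All names and numbers are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float


def recall_location(obj: str) -> Pose:
    """A memory or representation indicates roughly where the object is."""
    return Pose(0.40, 0.10, 0.75)  # assumed position on the desk, in metres


def fixate(target: Pose) -> Pose:
    """Head and eye movements fixate the target and refine its estimated pose."""
    return Pose(target.x + 0.01, target.y, target.z)  # small visual correction


def plan_reach(hand: Pose, target: Pose, speed: float = 0.3) -> list[Pose]:
    """Generate an arm trajectory with motor variables such as direction and speed."""
    distance = ((target.x - hand.x) ** 2 +
                (target.y - hand.y) ** 2 +
                (target.z - hand.z) ** 2) ** 0.5
    steps = max(2, int(distance / speed * 20))  # more waypoints for longer, slower reaches
    return [
        Pose(
            hand.x + (target.x - hand.x) * i / steps,
            hand.y + (target.y - hand.y) * i / steps,
            hand.z + (target.z - hand.z) * i / steps,
        )
        for i in range(1, steps + 1)
    ]


def grip(target: Pose, expected_mass_kg: float = 0.01) -> float:
    """Choose fingertip force appropriate to the object's pose and expected weight."""
    return max(0.5, expected_mass_kg * 9.81 * 4)  # crude safety-margin heuristic, in newtons


if __name__ == "__main__":
    pen = fixate(recall_location("pen"))
    trajectory = plan_reach(Pose(0.0, -0.30, 0.90), pen)
    force = grip(pen)
    print(f"{len(trajectory)} reach waypoints, grip force {force:.2f} N")
```

Even this toy outline omits the hardest parts, such as visual and tactile feedback during the grasp, which is precisely where human dexterity still outstrips robotic hands.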

In other words, movement control involves many areas of the brain, the spinal cord, and different parts of the body, all working in unison in real time, often without much conscious effort on the part of the person who is moving. Movement is thus embodied, and the moving person has evolved to interact with the physical environment. Moravec’s paradox is relevant for appreciating the nuances of motor control: sensorimotor skills are highly evolved, often unconscious, and require more computational resources than high-level intelligence such as reasoning, which emerged later in evolution and requires relatively little computation. This complexity of movement may be one reason why OpenAI recently disbanded its robotics team after years of working on motor tasks such as solving a Rubik’s cube.

In a Review article in this issue of Nature Machine Intelligence, Giulia Dominijanni et al. describe an approach to robotic body augmentation that attempts to combine neuroscience, engineering, human-machine interaction, and wearable electronics. The authors discuss how the human brain can take on the control of additional robotic limbs, and they introduce the “neural resource allocation problem”: how to achieve voluntary control of augmentation devices without compromising control of the biological body. The latter problem is crucial because the brain will need to accommodate and control additional robotic limbs in a variety of behavioral contexts, which can include somatosensation (the sense of touch) and proprioception (awareness of position and movement) of both the additional robotic limbs and the biological limbs, for example. The authors point out that many technical and conceptual challenges remain unresolved, for example whether the representation of a biological limb in the brain could be modulated or reconfigured through the use of additional robotic limbs, and what sensing technologies are needed to interface additional robotic limbs with the user’s brain. The authors call for a new field of robotic body augmentation with its own challenges and scientific and technological foundations. A recent development in this direction is the creation of the Yang Center for Bionics at the Massachusetts Institute of Technology, where one of the priorities is to “restore the natural movements controlled by the brain, as well as the sensation of touch and proprioception … of the bionic limbs”.

Researchers studying augmented limbs and brain interfaces face challenges similar to those of robotics researchers. Robots, augmented limbs, and brain interfaces are fundamentally embedded in the physical world, learn from imperfect and uncertain information, and potentially have to adapt all the time. Interaction with the physical world limits the amount of data that can be collected across environments and conditions, a drawback compared with applications in computer vision and language modeling. Although approaches have been proposed to use simulations and non-robotic data sources, for example through very large neural network models trained on general multimodal data (see the recent white paper by Bommasani et al. on “foundation models”), even small differences between simulation assumptions and physical reality (the so-called reality gap) still pose a real problem. In the near future, the research vision described by Dominijanni and colleagues, combining perspectives from engineering, neuroscience and ethics, will help us understand how additional robotic limbs are represented in the brain and how to implement them effectively so that they can be readily controlled by a person’s intentions.
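As an aside on the reality gap mentioned above, one widely used mitigation is domain randomization: simulation parameters are resampled during training so that a learned controller cannot overfit to a single set of assumptions. The snippet below is a minimal illustrative sketch of that idea only; it is not drawn from the Review, and the simulator stand-in, function names and parameter ranges are all hypothetical.

```python
# Minimal sketch of domain randomization for narrowing the reality gap:
# physical parameters of the simulated world are resampled every episode so the
# learned controller cannot rely on one fixed set of assumptions.
# The simulator and training loop here are hypothetical stand-ins.

import random


def sample_sim_parameters() -> dict:
    """Draw a random variant of the simulated world for one training episode."""
    return {
        "object_mass_kg": random.uniform(0.005, 0.05),    # pens of different weights
        "friction_coeff": random.uniform(0.3, 1.2),       # slippery to grippy surfaces
        "sensor_noise_std": random.uniform(0.0, 0.01),    # imperfect proprioception
        "actuation_delay_ms": random.uniform(0.0, 30.0),  # control latency
    }


def run_episode(params: dict) -> float:
    """Placeholder for one simulated grasping episode; returns a success score."""
    # A real implementation would step a physics engine and a learned policy here.
    return random.random()


def train(num_episodes: int = 1000) -> float:
    """Train across many randomized worlds and report the average success score."""
    scores = [run_episode(sample_sim_parameters()) for _ in range(num_episodes)]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    print(f"mean simulated success: {train():.3f}")
```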

About this article

Cite this article

Robotic body augmentation.
Nat Mach Intell 3, 837 (2021). https://doi.org/10.1038/s42256-021-00406-y
