Humans and robots working together toward a common goal

28 February 2017

In recent decades the studies of human physiology and robotics have become intrinsically intertwined, particularly when it comes to motor behavior, i.e. the ability to move about and to manipulate objects within our environment.

Whereas early studies of how humans move concentrated on the physiology of muscles and nerves, it quickly became apparent that a mastery of control principles, information theory and basic Newtonian physics, in addition to biochemistry and molecular biology, is crucial to understanding why muscles, sensory organs and neural circuits evolved the way they did.

Starting with Merton in the 1950s, engineering concepts such as “servo control mechanisms” have been applied to understand the complex mix of sensory and motor nerves that connect muscles to the brain. Modern scientific advances in this area rely heavily on computational models to illustrate and comprehend the fundamental properties of the human sensorimotor system. Thus human physiology and the health sciences have benefited greatly from overlaps with the engineering arts.

Conversely, engineers have been inspired by observations of human motor behavior and the properties of the human motor apparatus to build adaptable, robust mechatronic robots. As a case in point, consider the advances made by the close collaboration between Prof. Emilio Bizzi, M.D., and Prof. Neville Hogan, Ph.D., both of MIT, albeit from two distinct departments and disciplines (Brain and Cognitive Sciences and Mechanical Engineering, respectively).

In experiments performed with monkeys, these investigators observed that the animals could successfully point to targets with their arms even in the total absence of sensory feedback from the limb. They posited that the elastic properties of the “actuators” (i.e. the muscles) allowed the animal to establish stable equilibrium postures of the limb corresponding to each target in a feed-forward manner.

Thus the concept of “impedance control”, currently at the forefront of research activities in robotics, was born. Interdisciplinary research that spans both human movement science and robotics continues, therefore, to provide a promising avenue for advances in both fields.
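The equilibrium-point idea described above can be illustrated with a minimal sketch (not taken from the work of Bizzi and Hogan; all parameters are made up for illustration): a one-degree-of-freedom limb whose muscles behave like a tunable spring and damper. The controller commands only a desired equilibrium posture x_d, and the spring-like impedance pulls the limb there with no trajectory feedback.

```python
# Minimal sketch of a 1-DoF "equilibrium point" impedance controller.
# Illustrative parameters only: K (stiffness), B (damping), m (limb mass).
def simulate(x0, v0, x_d, K=50.0, B=10.0, m=1.0, dt=0.001, T=2.0):
    """Integrate limb dynamics under a spring-damper impedance law."""
    x, v = x0, v0
    for _ in range(int(T / dt)):
        f = K * (x_d - x) - B * v   # muscle-like restoring force
        a = f / m                   # Newton: acceleration from net force
        v += a * dt                 # simple Euler integration
        x += v * dt
    return x

# The limb settles at the commanded posture without sensing the target:
final = simulate(x0=0.0, v0=0.0, x_d=0.3)
```

With these (hypothetical) gains the system is well damped and converges to x_d regardless of the starting posture, which is the essence of the feed-forward equilibrium-posture argument.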

A few pertinent examples of possible lines of research include:

  • Admittance tuning for optimal human-robot interactions

New generations of robots are currently being developed that are considered safe enough to allow direct, physical interactions between a human and the robotic device. Thus, rather than remaining isolated behind a safety fence, future robots may physically share assembly tasks with humans on the shop floor in a synergistic fashion, exploiting the strength, speed and stamina of the robot and the perceptual capabilities, intelligence and adaptability of the human.

But how should the human and robot share the task? One should take care that rigid, preprogrammed movements of the robot do not force the human worker to execute awkward or painful motions that may cause long-term damage to muscles, ligaments and joints. On the other hand, leaving all aspects of the control to the human rescinds all the advantages that the robot can bring in terms of strength and accuracy.

One approach is to leave the bulk of the effort to the robot, but tune the mechanical properties of the robot to react “kindly” to forces and motions applied by the human. We term this methodology “optimal admittance shaping”. By taking into account the physiological properties of the human, both in terms of anatomy and muscle physiology, one can hope to obtain cooperative assembly operations that are faster, more robust, safer and more comfortable for the human partner.

The same principle can be applied to the operation of surgical robots, where the surgeon directs the operations of the robot through a “haptic” workstation. The haptic device, which is essentially a small robot designed to be moved by the surgeon, can be programmed to push back on the surgeon’s hand in different ways. By taking into account the perceptual capabilities of the human, the haptic joystick can in theory respond in such a way as to optimize the surgeon’s perception and dexterity.
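The admittance idea above can be sketched in a few lines (this is an illustrative toy, not the “optimal admittance shaping” method itself; the virtual mass and damping values are invented): the measured human force drives a virtual mass-damper, and the resulting velocity is what the robot is commanded to follow. Tuning the virtual parameters sets how “kindly” the robot yields to its human partner.

```python
# Minimal sketch of a discrete admittance filter. Hypothetical
# parameters: m_v (virtual mass, kg), b_v (virtual damping, N*s/m).
class Admittance:
    def __init__(self, m_v=2.0, b_v=20.0, dt=0.002):
        self.m_v, self.b_v, self.dt = m_v, b_v, dt
        self.v = 0.0  # commanded robot velocity

    def step(self, f_human):
        # Virtual dynamics: m_v * dv/dt = f_human - b_v * v
        a = (f_human - self.b_v * self.v) / self.m_v
        self.v += a * self.dt
        return self.v

adm = Admittance()
for _ in range(5000):      # a constant 10 N push, sustained for 10 s
    v = adm.step(10.0)
# steady-state velocity approaches f / b_v = 0.5 m/s
```

Lowering b_v makes the robot feel lighter and more responsive; raising it makes the robot steadier but harder to push, and choosing these values against the physiology of the human arm is exactly where admittance shaping comes in.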

  • Bio-inspired control laws for bimanual robots

Humans possess two arms that they can use to perform sophisticated coordinated actions such as playing a violin, unscrewing the lid on a jar or stabilizing a tray full of food. Today’s humanoid robots also possess two arms in an explicit attempt to mimic a human’s morphology. But the question remains as to how to benefit from the presence of these two arms.

Shall they be treated as two independent robots simply choreographed to perform a single bimanual task, or should the control circuits connect the two arms at a more intrinsic level to render bimanual control more efficient? It is known that human spinal circuits include direct and indirect connections between the two limbs, although the precise purpose of those connections has yet to be determined.

As dual-arm robots become more common in the marketplace, significant insights into how the two arms could be interconnected to facilitate bimanual control could be gained through deeper study of human bimanual reflexes and coordination.

  • Teaching through demonstration

An intrinsic roadblock to the use of multi-purpose robots in the workplace is the enormous effort required to program the robot to achieve a new task. Even minor changes to a product design require a major reprogramming effort to adapt the movements of the robot to the new shapes, sizes and placement of the component parts. Thus, automated assembly is economically feasible only when the projected production runs are sufficiently large to justify the overhead costs of robot programming. Humans, however, are highly adaptable and, through reasoning, can quickly adjust a previously learned skill to a novel situation.

Imagine the cost savings if the robot could quickly learn a new skill simply by observing how a human solves the task. This ability would enable a much broader application of robots for short manufacturing runs of designs with semi-custom characteristics. But key to this methodology is an understanding of how humans perform these tasks. It is not sufficient to simply mimic the movements of the hands and arms of the human. Assembly requires the application of forces and torques to perform, for instance, a snap fit of two adjoining pieces.

A robot cannot “see” these forces, nor can it understand what control policy the human is using to decide whether to apply more force in a given situation. If one can successfully transfer these skills from human to robot, one will not only achieve better, more efficient robot programming; our understanding of human motor behavior will also have advanced significantly, leading perhaps to better therapies or assistive devices for patients with motor disorders. Thus, the goal of “teaching through demonstration” presents yet another context in which human studies and robotics may be intertwined to the benefit of both disciplines.
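The point that mimicking motion alone is not enough can be made concrete with a toy sketch (hypothetical interfaces throughout; `sense_force` and `move_to` stand in for a real robot’s sensors and motion commands): record both positions and forces during a human demonstration, then replay the motion while checking that the contact forces stay near the demonstrated profile.

```python
# Toy sketch of force-aware imitation. All interfaces are hypothetical.
def record_demo(samples):
    """samples: iterable of (position, sensed_force) pairs from the demo."""
    return [(p, f) for p, f in samples]

def replay(demo, sense_force, move_to, tolerance=5.0):
    """Replay demonstrated positions; stop if contact forces deviate
    too far from the demonstrated force profile."""
    for p_ref, f_ref in demo:
        move_to(p_ref)
        f = sense_force()
        if abs(f - f_ref) > tolerance:
            return False    # unexpected contact: stop safely
    return True

# Toy usage: a "robot" whose sensed forces reproduce the demo exactly.
demo = record_demo([(0.0, 0.0), (0.1, 2.0), (0.2, 8.0)])
forces = iter(f for _, f in demo)
ok = replay(demo, sense_force=lambda: next(forces), move_to=lambda p: None)
# ok is True: the forces matched the demonstration
```

A pure position-mimicking scheme would drop the `f_ref` column entirely and thus could not detect, say, a snap fit that failed to engage; the real research problem, as the text notes, is inferring the human’s force-control policy, which this sketch deliberately leaves out.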

The study of human sensorimotor behavior therefore provides a wealth of insight to inspire the engineering and robotics communities.

The challenge remains, however, to “reverse engineer” the human physiology underlying this behavior and to translate the resulting knowledge into effective solutions for robotics, especially where humans and robots must work together toward a common goal.

Joseph McIntyre



Ph.D. in Computational Neuroscience (1990) from the Massachusetts Institute of Technology (MIT); B.S. in Engineering (1982) and in Biology (1983) from the California Institute of Technology (Caltech). Dr. McIntyre has authored or co-authored over 100 articles published in international journals that have attracted over 2,500 citations.

