Robotics in Healthcare: The Power of Assistive Technology
In my last blog post, I talked about the fiction vs. reality of robots and how we all view the impact of robots on our lives. Robots in the near future will have an impact across many areas, some sweeping, some more incremental.
What we are beginning to see is robots with assistive capabilities. Assistive robots are the next step beyond early technologies that could only perform repetitive, predetermined motions, such as those used in the manufacturing industry – automotive robots performing a handful of repetitive motions on the assembly line are a good example of this. Assistive robots operate on a heightened level, performing tasks in a safer, faster, less expensive way under the control of a well-trained user.
Assistive robots work in tandem with humans to improve the status quo and to complete tasks that neither could achieve alone. Surgical robots are a prime example of this – early technologies were designed to reduce the trauma patients experience during surgery by decreasing incision size and enabling surgeons to suture through minimally invasive techniques. Through a series of relatively small incisions, these robots use laparoscopic techniques to perform surgery in a way that few humans could before. This technology greatly benefited patients by reducing pain and shortening recovery time, but it carried significant barriers to acquisition and lacked a design that made adoption comfortable for surgeons.
It was these shortcomings that inspired us to found Vicarious Surgical. We wanted to create a device that brought significant benefits to patients, but not at a cost to surgeons or hospitals. One that not only enabled better patient outcomes, but that offered improved value to hospitals and to surgeons.
This meant going back to the drawing board and completely re-imagining the device with the surgeon in mind. Zhi Jane Li, Assistant Professor of Robotics Engineering at Worcester Polytechnic Institute (WPI), explains this concept beautifully:
“So the golden standard is simple, is if you can control the robot as your own body, that is the ideal situation. So a lot of the interface design comes from the understanding of the natural behavior and the conference of human motor control.”
Our robot moves like a surgeon does, bending at the “wrist,” “elbow,” and “shoulder” so that the operator does not have to learn an entirely new set of motions. The surgeon wears an augmented reality headset when using the system, which not only effectively places the surgeon inside the patient along with the robot, but also means that when the surgeon turns his or her head, the robot does too. The system also provides haptic feedback so the surgeon can understand what the robot “feels,” allowing them to judge pressure, tension, and force to better understand what’s going on inside the patient. Because of these human-equivalent motions and sensations, the learning curve is shortened and the surgeon’s cognitive load is greatly reduced.
I believe that it’s this type of human-robot collaboration that will bring about more widespread adoption of the technology in healthcare. As we progress in designing robots that are more intuitive to use and that mimic human skills, the technology will eventually become second nature. Much like when you pick up the next generation of smartphone or download a new app and it already feels familiar to you, we’ll start to experience the same comfort with robotic technology.