
Periodic Reporting for period 1 - SARAS (Smart Autonomous Robotic Assistant Surgeon)

Summary

The goal of SARAS is to develop the next generation of surgical robotic systems, allowing a single surgeon to execute Robotic Minimally Invasive Surgery (R-MIS) without the need for an expert assistant surgeon. The robot developed by the SARAS project consists of a pair of cooperating, autonomous robotic arms holding off-the-shelf laparoscopic instruments. The SARAS system will perform the same tasks that are now carried out by the assistant surgeon during a robotic or laparoscopic procedure.
The SARAS consortium is developing new technologies in Human-Robot Interaction, Perception & Decisional Autonomy, Cognitive Control, and Advanced Planning & Navigation, according to the following technical and medical objectives:
• Design of a perception module that, using the pre-operative information about the procedure and the data collected intra-operatively, will infer the status of the procedure and the actions performed by the main surgeon.
• Design of a cognitive control module which predicts the next steps of the procedure, makes decisions about the robot's future actions, and sends high-level commands to the autonomous SARAS arms to cooperate with the surgeon in tasks requiring coordination.
• Design of a low-level multi-robot control architecture in which high-level commands will be mapped into commands for the SARAS assistive robotic arms.
• Development of the two robotic assistive arms capable of handling off-the-shelf laparoscopic tools.
• Translation of medical knowledge into an engineering formalism that can be easily interpreted by the autonomous system.
• Enhancement of the da Vinci console by integrating multi-modal visual feedback to provide the surgeon with a better understanding of the status of the intervention and to add features (e.g. speech recognition) to interact with the autonomous robot arms in a collaborative and cooperative fashion.
These technologies are integrated together within the SARAS cognitive architecture, so that the system can be tested on new disposable synthetic phantoms of the human abdomen and pelvic region.
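The flow from perception to cognitive control to arm commands described in the objectives above can be sketched as follows. This is purely an illustration: the message types, field names and decision rule here are hypothetical, since the actual SARAS interfaces are not detailed in this report.

```python
from dataclasses import dataclass

# Hypothetical message types; the real SARAS interfaces are not public.
@dataclass
class PerceptionState:
    procedure_phase: str   # inferred phase of the intervention
    surgeon_action: str    # action currently performed by the main surgeon

@dataclass
class HighLevelCommand:
    task: str              # e.g. "retract_tissue", "hold_position"
    target_arm: int        # which SARAS assistive arm executes it

def cognitive_control(state: PerceptionState) -> HighLevelCommand:
    """Toy decision rule: map the perceived surgeon action to an assistive task."""
    if state.surgeon_action == "dissecting":
        return HighLevelCommand(task="retract_tissue", target_arm=0)
    return HighLevelCommand(task="hold_position", target_arm=0)

# Perception -> cognition -> low-level control, as in the objectives above.
state = PerceptionState(procedure_phase="dissection", surgeon_action="dissecting")
command = cognitive_control(state)
```

In the real system the decision rule is learned and context-dependent; the point of the sketch is only the modular chain of responsibilities.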
It is worth mentioning that the SARAS project is developing an embodied AI system specific to the surgical robotics scenario. However, the same technology can be applied to autonomous driving systems and Industry 4.0 factories, areas on which a great many researchers are working and in which the EU is investing heavily in the current H2020 programme and, presumably, in Horizon Europe.

Work performed

In SARAS we planned to develop three platforms of increasing complexity:
1. MULTIROBOTS-SURGERY platform: the main surgeon will use a commercial robotic system whereas the assistant surgeon will teleoperate the SARAS assistive robotic arms. In our experimental setup, the main surgeon will use the da Vinci robot (by Intuitive Surgical Inc.) equipped with the da Vinci Research Kit (DVRK). The assistant surgeon will remotely control the SARAS arms by using haptic devices with force feedback.
2. SOLO-SURGERY platform: the SARAS system will be autonomous and will play the role of the assistant to help the main surgeon at the da Vinci console performing the surgical procedure.
3. LAPARO2.0-SURGERY platform: the SARAS system will play the role of the assistant as in the SOLO-SURGERY case, but with the main surgeon using standard handheld laparoscopic tools.
These platforms were planned to be ready at M12, M24 and M36, respectively. We have now developed and validated the MULTIROBOTS-SURGERY platform and are developing the SOLO-SURGERY platform. It is worth highlighting that we are still improving the first platform with features that will also be needed on the second one (e.g. strain gauges to measure the interaction forces instead of estimating them via vision).
More specifically, this is, in a nutshell, the work performed on the technical side and the results achieved:
1. We developed the SARAS robotic arms and SARAS console,
2. We developed the phantoms for the prostatectomy and we are developing the ones for the nephrectomy,
3. A bilateral teleoperation architecture has been developed to allow the assistant surgeon to remotely control the SARAS arms. This step is crucial to collect the data necessary to train the SARAS AI subsystems,
4. A basic version of the perception module has been developed: it can recognize actions in basic tasks (this will be enhanced to cover the whole procedure), detect organs (tested on videos of real interventions), and model the 3D environment,
5. The first version of the cognitive module is able to collect the output of the perception module, make decisions about simple procedures, plan collision-free trajectories for the SARAS arms and, finally, execute basic tasks.
The last two points are work-in-progress and will be integrated in the second platform (SOLO-SURGERY system) by the end of the second year.
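The bilateral teleoperation mentioned in point 3 above can be illustrated by a minimal position-error controller: the SARAS arm tracks the pose commanded from the haptic device, and the tracking error is reflected back to the operator as a force. This is a generic sketch under assumed gains; the actual SARAS control law and its parameters are not given in this report.

```python
def teleop_step(master_pos: float, slave_pos: float,
                k: float = 50.0, dt: float = 0.01) -> tuple[float, float]:
    """One control cycle of a toy position-position bilateral scheme.

    The slave (assistive arm) moves toward the master (haptic device) pose,
    and the tracking error is fed back to the operator as a spring force.
    Gains and timestep are illustrative, not SARAS parameters.
    """
    error = master_pos - slave_pos
    new_slave_pos = slave_pos + k * error * dt  # slave tracks the master
    feedback_force = -k * error                 # felt at the haptic device
    return new_slave_pos, feedback_force

# Over repeated cycles the slave converges to the commanded master position,
# and the feedback force decays to zero.
slave, master = 0.0, 1.0
for _ in range(50):
    slave, force = teleop_step(master, slave)
```

In practice the interaction forces would come from the strain gauges mentioned above rather than from the tracking error alone, but the closed master-slave loop has this basic shape.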
Besides the technical work, we designed the SARAS website, activated the social channels and organized several workshops at major venues (ERF, ICRA, Hamlyn) to disseminate the outcomes of the SARAS project and to raise stakeholders' awareness of what embodied AI can do in the surgical scenario in the near future (technical improvements, but also ethical and economic issues).

Final results

Embodied AI is a crucial technology in many research areas that have become popular in recent years, such as autonomous driving and Industry 4.0. In SARAS we plan to leverage embodied AI to improve robotic surgery. We think this is the right time to show the advantages of robotics and AI algorithms (based on, e.g., machine learning and deep learning), such as:
• multi-modal feedback to provide the surgeon with a better understanding of the status of the intervention
• integration of the procedure workflow, interventional checklists, suggestions for the next steps, and virtual fixtures helping the surgeon follow optimal paths or virtual walls inhibiting harmful movements by the surgeon
• speech recognition module to interact with the autonomous robot in a collaborative and cooperative fashion
• advanced disposable synthetic models of the human abdomen and pelvic region for the training of junior surgeons, or of senior surgeons who are not expert in robotic MIS
• a reduction in the number of surgeons in the operating room, which will cut costs and reduce waiting lists.
These advantages will positively affect all the stakeholders: patients, surgeons and hospitals.
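The virtual-wall fixture listed among the advantages above is commonly implemented as a stiff one-sided spring: the controller applies no force while the tool stays on the safe side of the boundary, and pushes back proportionally to the penetration once it crosses. The wall position and stiffness below are arbitrary illustration values, not SARAS parameters.

```python
def virtual_wall_force(tool_pos: float, wall_pos: float = 0.0,
                       stiffness: float = 200.0) -> float:
    """Spring-like repulsive force when the tool penetrates the forbidden region.

    Positions beyond wall_pos are forbidden; on the safe side the fixture
    is transparent (zero force). Values are illustrative only.
    """
    penetration = tool_pos - wall_pos
    return -stiffness * penetration if penetration > 0 else 0.0

virtual_wall_force(0.01)   # pushed back out of the wall
virtual_wall_force(-0.05)  # free motion on the safe side
```

Rendering such a force on the surgeon's console is what makes the fixture feel like a physical wall inhibiting harmful motion.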
The MULTIROBOTS-SURGERY platform, with its intrinsically safe teleoperation architecture, could change the way surgery is performed, in particular if coupled with the upcoming 5G communication technology. The SOLO-SURGERY and LAPARO2.0-SURGERY platforms will pave the way to fully autonomous surgical systems once the ethical, legal and economic barriers are overcome and the field is properly regulated.

Website & more info

More info: http://www.saras-project.eu.