
Learning to Automate Surgery

 

Surgical robots, such as Intuitive Surgical’s da Vinci Surgical System, have enabled more efficient surgeries by improving the surgeon’s dexterity and reducing fatigue through teleoperational control. While these systems already provide great care to patients, they have also opened the door to a variety of research, including surgical task automation. Automating surgical tasks has become a growing area of research aimed at improving patient throughput, reducing quality-of-care variance among surgeries, and potentially delivering automated surgery in the future. We are developing algorithms and control policies that automate surgical tasks to work toward this future.

 

Reinforcement Learning (RL) is a machine learning framework in which artificially intelligent systems learn to solve a variety of complex problems. Recent years have seen a surge of successes on challenging games and smaller-domain problems, including simple though non-specific robotic manipulation and grasping tasks. These rapid successes have come in part from the RL community’s strong collaborative effort around common, open-sourced environment simulators, such as OpenAI’s Gym, which allow expedited development and valid comparisons between different state-of-the-art strategies. We aim to bridge the RL and surgical robotics communities by presenting the first open-sourced reinforcement learning environments for surgical robotics, called dVRL. Through the proposed environments, which are functionally equivalent to Gym, we show that it is easy to prototype and implement state-of-the-art RL algorithms on surgical robotics problems, introducing autonomous robotic precision and accuracy to assistive, collaborative, or repetitive tasks during surgery. Learned policies furthermore transfer successfully to a real robot. Finally, with the international network of over 40 da Vinci Research Kits in active use at academic institutions, we see dVRL as enabling the broad surgical robotics community to fully leverage the newest strategies in reinforcement learning, and enabling reinforcement learning scientists with no knowledge of surgical robotics to test and develop new algorithms that can solve real-world, high-impact challenges in autonomous surgery.
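
Because dVRL is functionally equivalent to Gym, a scripted or learned policy interacts with the simulated da Vinci arms through the usual reset/step loop. The sketch below illustrates this interaction with a random placeholder policy; the package name dVRL_simulator and the environment id 'dVRLReach-v0' are assumptions based on the project description and may differ from the ids registered by a particular dVRL installation.

# Minimal sketch of driving a dVRL environment through the standard Gym interface.
# Assumed names: the dVRL_simulator package and the 'dVRLReach-v0' environment id.
import gym
import dVRL_simulator  # assumed package; importing it registers the dVRL environments with Gym

env = gym.make('dVRLReach-v0')  # assumed id for an end-effector reaching task

obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random policy as a stand-in for a trained RL agent
    obs, reward, done, info = env.step(action)  # standard Gym transition tuple
    if done:
        obs = env.reset()
env.close()

In practice, the random action above would be replaced by the output of an RL algorithm (for example, an off-the-shelf Gym-compatible implementation), which is exactly the workflow the Gym-style interface is meant to enable.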

Students and Collaborators


Florian Richter

Jingpei Lu

Zih-Yun Chiu

Fei Liu

Ryan Orosco

Emily Funk

Publications

Bimanual Regrasping for Suture Needles using Reinforcement Learning for Rapid Motion Planning

Z.Y. Chiu, F. Richter, E.K. Funk, R.K. Orosco, M.C. Yip

IEEE Conference on Robotics and Automation (Accepted). Xi'an, China (2021). [arxiv][video]

Real-to-Sim Registration of Deformable Soft-Tissue with Position-Based Dynamics for Surgical Robot Autonomy

F. Liu, Z. Li, Y. Han, J. Lu, F. Richter, M.C. Yip

IEEE Conference on Robotics and Automation (Accepted). Xi'an, China (2021). [arxiv][video]

Model-Predictive Control of Blood Suction for Surgical Hemostasis using Differentiable Fluid Simulations

J. Huang*, F. Liu*, F. Richter, M.C. Yip

IEEE Conference on Robotics and Automation (Accepted). Xi'an, China (2021). [arxiv][video]

SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction

J. Lu, A. Jayakumari, F. Richter, Y. Li, M.C. Yip

IEEE Conference on Robotics and Automation (Accepted). Xi'an, China (2021). [website][arxiv][video]

Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery

J. Di, M. Xu, N. Das, M.C. Yip

IEEE Conference on Robotics and Automation (Accepted). Xi'an, China (2021). [arxiv][video]

Autonomous Robotic Suction to Clear the Surgical Field for Hemostasis using Image-based Blood Flow Detection

F. Richter, S. Shen, F. Liu, J. Huang, E.K. Funk, R.K. Orosco, M.C. Yip

IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1383-1390 (2021). [arxiv][video]

SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation with Surgical Robotics

Y. Li, F. Richter, J. Lu, E.K. Funk, R.K. Orosco, J. Zhu, M.C. Yip

arXiv preprint arXiv:1909.05405, 2019. [pdf][website]

Open-Sourced Reinforcement Learning Environments for Surgical Robotics 

F. Richter, R. K. Orosco, M.C. Yip

arXiv preprint arXiv:1903.02090, 2019. [arxiv][video][git]
