RoadPrintz Inc., a company co-founded by Case Western Reserve University engineering professor Wyatt Newman, has received a Phase II Small Business Innovation Research award of nearly $1 million from the National Science Foundation.
"Autonomous Medical Robots Guided by Real-Time 3D Imaging"
Abstract: Modern surgical procedures require delicate tissue interactions and thus benefit greatly from the precise manipulations offered by medical robots. Similarly, live 3D imaging modalities (e.g., optical coherence tomography, ultrasound) offer rich clinical data streams useful for guiding surgical instruments. Patients see few advances, however, that leverage both domains to deliver medical robots guided in real time by live intraprocedural imaging. In this seminar, I report on my translational work with optical coherence tomography and ophthalmic applications to bridge medical robotics and live imaging. This work spans autonomous eye imaging of freestanding subjects, image-guided needle insertions for superficial cornea transplantation, and a robotic surgery framework for maximizing surgeon efficiency when using live volumetric imaging for guidance. In addition, I discuss early results in breaking the framerate-resolution barrier for scanned imaging modalities with online algorithms for adaptive acquisition.
"Feeling Through Seeing: Vision-based Force Estimation in Robot-assisted Surgery by Humans and Machines"
Abstract: Tissue handling is an important skill for surgeons to perform safe and effective surgery. In robot-assisted minimally invasive surgery (RMIS), this skill is difficult to acquire due to the lack of haptic feedback. RMIS surgeons learn to estimate tissue interaction forces through visual feedback, often over many hours of in vivo practice. Tissue handling skills are notoriously difficult for surgical educators to quantitatively evaluate and provide feedback on, because human raters cannot directly know the force applied by surgical instruments. Thus, gold-standard expert video review can have poor inter-rater consistency, while also being time-consuming to conduct and lacking actionable feedback. My research leverages the RMIS telesurgical robotic platform as both a sensor and actuation suite to (a) develop automated data-driven vision-based force estimates that can provide objective measures of tissue handling skill or serve as input data to facilitate robot autonomy, and (b) provide multimodal robot-mediated real-time feedback to the RMIS surgeon to improve their tissue handling skill during actual surgery and in training.
In this talk, I will present models and algorithms for vision-based force estimation in RMIS from both human and machine perspectives. From the human perspective, I evaluate the effect of haptic training on human teleoperators’ ability to visually estimate forces through a telesurgical robot. From the machine perspective, I design and characterize multimodal deep learning-based methods to estimate interaction forces during tissue manipulation, both for automated performance evaluation and for delivering haptics-based training stimuli to accelerate tissue handling skill acquisition. The results demonstrate that human teleoperators and machines can learn visual force estimation from haptic training and multimodal manipulation data, respectively, setting the stage for future work in improved methods for human-machine skill development and autonomous robot-assisted surgery.