A Multirobots Teleoperated Platform For Artificial Intelligence Training Data Collection in Minimally Invasive Surgery
Abstract—Dexterity and perception capabilities of surgical robots may soon be improved by cognitive functions that can support surgeons in decision making and performance monitoring, and enhance the impact of automation within the operating rooms. Nowadays, the basic elements of autonomy in robotic surgery are still not well understood and their mutual interaction is unexplored. Current classification of autonomy encompasses six basic levels: Level 0: no autonomy; Level 1: robot assistance; Level 2: task autonomy; Level 3: conditional autonomy; Level 4: high autonomy; Level 5: full autonomy. The practical meaning of each level and the necessary technologies to move from one level to the next are the subject of intense debate and development. In this paper, we discuss the first outcomes of the European funded project Smart Autonomous Robotic Assistant Surgeon (SARAS). SARAS will develop a cognitive architecture able to make decisions based on pre-operative knowledge and on scene understanding via advanced machine learning algorithms. To reach this ambitious goal, which allows us to reach Levels 1 and 2, it is of paramount importance to collect reliable data to train the algorithms. We will present the experimental setup to collect the data for a complex surgical procedure (Robotic Assisted Radical Prostatectomy) on very sophisticated manikins (i.e. phantoms of the inflated human abdomen). The SARAS platform allows the main surgeon and the assistant to teleoperate two independent two-arm robots. The data acquired with this platform (videos, kinematics, audio) will be used in our project and will be released (with annotations) for research purposes.

Index Terms—robotic surgery, minimally invasive surgery, laparoscopy, machine learning, cognitive control.

This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 779813 (SARAS project).

978-1-5386-7825-1/19/$31.00 ©2019 IEEE

I. INTRODUCTION

The introduction of robotic surgery, and in particular of Robotic Minimally Invasive Surgery (R-MIS), has improved the surgeon's efficiency, by lowering the physical and cognitive load during each surgery, and increased patient safety [1]. The widely deployed da Vinci® telesurgical robot by Intuitive Surgical (Sunnyvale, CA) [2] extends surgeons' capabilities by providing the dexterity of open surgery in a MIS environment thanks to a unilateral teleoperation system and an accurate 3D vision system.

Despite that, it did not decrease the number of people required in the operating room. Indeed, while the main surgeon works at the da Vinci® console teleoperating the surgical robot, the assistant surgeon stands next to the patient to carry out the supporting tasks using traditional laparoscopic tools. Typically, the assistant surgeon is an expert surgeon, to properly support the main surgeon, and s/he is the second highest cost component of a surgery. Moreover, both the main and assistant surgeons need to rest for some hours between interventions, limiting the number of surgeries available in a day and thus leading to unnecessarily long waiting lists.

The introduction of automatic/autonomous systems for robotic surgery has recently gained popularity, with a number of projects aiming at automating specific tasks, leading to several products – i.e. surgical robots – explicitly designed to perform one single operation or provide specific features (e.g. the Johns Hopkins Steady Hand microsurgical robot [3], the Acrobot system [4]).

Against this task-specific viewpoint, we believe that recent achievements in artificial intelligence, and in particular in machine learning, can lead to the development of robotic systems able to cooperate with surgeons over entire surgical procedures. In our view, the system should learn from its own experience how to behave during the procedure, interpreting the scene and interacting with the other agents in the theater in a human-understandable manner.

According to the six levels of autonomy discussed in [5], we believe that the technology available in the very near future will allow us to reach Level 1 and Level 2 with reliable robotic systems jointly controlled by the surgeon in charge of the intervention and an AI-based cognitive control architecture.

A. Smart Autonomous Robotic Assistant Surgeon

The EU funded Smart Autonomous Robotic Assistant Surgeon project (SARAS, saras-project.eu) aims at developing by the end of 2020 the next generation of surgical robotic systems that will allow a single surgeon to execute R-MIS
without the need of an expert assistant surgeon. This Solo-
Surgery platform consists of a pair of autonomous robotic arms
holding the surgical instruments, able to cooperate with the
main surgeon by recognizing his/her actions, interpreting the
scene, predicting future evolution of the surgical procedure
based on a priori medical knowledge, and taking decisions
about actions to perform.
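The recognize-interpret-predict-decide loop described above can be sketched in a few lines. This is purely illustrative: the phase names, the phase-to-action mapping and the confidence threshold are our assumptions, not the SARAS cognitive architecture.

```python
# Illustrative sketch (not the SARAS implementation): recognize the
# surgeon's current phase, anticipate the next one from a priori
# procedural knowledge, and decide whether an assistive action is due.
# All names and thresholds below are hypothetical.

# Hypothetical a-priori model: assistive action offered per phase.
PHASE_TO_ASSIST = {
    "bladder_neck_incision": "suction",
    "vas_deferens_division": "apply_clip",
    "vesicourethral_anastomosis": "cut_suture",
}

# Hypothetical transition model: which phase usually follows which.
NEXT_PHASE = {
    "bladder_neck_incision": "vas_deferens_division",
    "vas_deferens_division": "vesicourethral_anastomosis",
}

def decide_assist(recognized_phase, confidence, threshold=0.8):
    """Return (assistive_action, anticipated_next_phase), or
    (None, None) when recognition confidence is too low to act."""
    if confidence < threshold:
        return None, None  # defer to the human assistant
    action = PHASE_TO_ASSIST.get(recognized_phase)
    upcoming = NEXT_PHASE.get(recognized_phase)
    return action, upcoming

print(decide_assist("bladder_neck_incision", 0.93))
```

In a real system the recognized phase and its confidence would come from the learned perception and cognitive modules; here they are plain function arguments.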
Despite a huge effort spent by the scientific community towards automating surgical robots, studies are still limited to the recognition of surgical actions [6]–[10] or to the automation of specific tasks [11]–[13]. SARAS will go beyond existing systems for R-MIS, as it will leverage a ground-breaking artificial intelligence (AI) module to develop a cognitive robotic system capable of autonomously understanding the present and future surgical situation, and performing its predefined actions at the right place and time. As a result, the SARAS assistive robotic arms will be able to perform the same tasks that are now carried out by the assistant surgeon during a robotic or a laparoscopic procedure: suction, cutting, handling of the endobag, clipping and moving internal structures. To do that, the SARAS system will leverage four main components:

1) the Perception module will take care of understanding the complexity of the surgical area, by reconstructing, labelling and tracking all its elements as observed by the available sensors;
2) the Cognitive module will learn from real intervention data the structure of complex laparoscopic procedures to identify anomalies, understand the surgeon's actions (situation awareness), and his/her future needs (decision making);
3) the Planning module translates the autonomous decisions made by the Cognitive module into appropriate trajectories for the laparoscopic tools mounted on the SARAS arms; and finally,
4) the Human-Robot Interface will give the surgeon full control over the procedure through a multi-modal interface with novel capabilities.

B. Contributions

In this paper, we present the Multi-Robots Surgery (MRS) platform developed within the SARAS project as a preliminary step toward the Solo-Surgery platform. The MRS is a new HW/SW facility to perform different kinds of surgeries on phantoms and collect data from several sources (i.e. cameras, microphones, kinematics, etc.). These data will be used for designing and training the cognitive control architecture for the next Solo-Surgery platform, as well as to generate datasets useful for the scientific community in different fields, such as:

• a detailed dataset of an almost-complete complex surgical procedure, in order to move the testing of machine learning algorithms beyond JIGSAWS [14];
• a dataset of fully annotated videos with synchronized kinematics measurements;
• a dataset of audio recordings for speech recognition;
• force feedback time series provided to the operator by the multi-robot bilateral teleoperation system.

This paper addresses the key aspects and challenges of the MRS platform and the medical knowledge related to the Robotic Assisted Radical Prostatectomy (RARP), which is the gold standard in robotic MIS [15]. We will present the MRS platform in all its components: a bilateral teleoperated robotic system, a simplified surgical procedure for RARP surgery, and the accurate physical models of the human abdomen and pelvic region. These phantoms, manufactured using 3D printing and based on human CT, MRI and ultrasound imaging, will reproduce the anatomy and key mechanical properties of human organs and internal structures to allow realistic simulation of procedures with the SARAS system for pre-clinical study.

The paper is organized as follows. In Section II, the Multirobots Surgery Platform is briefly described; in Section III the simplified RARP procedure is discussed, explaining the motivation and the rationale that drove us; Section IV describes the phantom model of the pelvic region, whereas in Section V the assessment procedure for the validation of the MRS platform is sketched. Conclusions are drawn in Section VI.

II. MULTIROBOTS SURGERY PLATFORM

Fig. 1. Multi-Robots Surgery (MRS) platform architecture.

The MRS platform is an example of a multi-master/multi-slave (MMMS) bilateral teleoperation system, where two users cooperate on a shared environment by means of a telerobotics setup. The general system architecture is reported in Figure 1. In this scenario the main surgeon controls the da Vinci® tools from the da Vinci® console, whereas the assistant surgeon teleoperates standard laparoscopic tools mounted at the end effectors of the SARAS assistive robotic arms from a remote control station equipped with virtual reality and haptic devices. The assistant surgeon will perform the same actions as in standard robotic surgery, but this time by teleoperating the tools instead of moving them manually.

Virtual reality is exploited to improve the visual feedback provided to the surgeons. By using pre-operative and intra-operative data we produce virtual fixtures (e.g. virtual walls impassable for the tools, or optimal paths the surgeon is guided to follow during delicate phases of the procedure) to help
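A virtual wall of the kind just mentioned can be sketched as a simple clamp on the commanded tool position. The plane parameters below are illustrative assumptions, not SARAS values.

```python
# Minimal "virtual wall" sketch: the commanded tool position is clamped
# so it never crosses a plane defined by a point and a unit outward
# normal. Pure Python; plane location and units are illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def clamp_to_wall(p_cmd, wall_point, wall_normal):
    """Project the commanded position back onto the wall when it has
    penetrated it (signed distance along the normal is negative)."""
    d = dot([p - q for p, q in zip(p_cmd, wall_point)], wall_normal)
    if d >= 0.0:
        return list(p_cmd)  # free motion: command unchanged
    # remove the penetrating component along the (unit) normal
    return [p - d * n for p, n in zip(p_cmd, wall_normal)]

# Example wall: horizontal plane z = 0.02 m, tools must stay above it.
wall_pt, wall_n = [0.0, 0.0, 0.02], [0.0, 0.0, 1.0]
print(clamp_to_wall([0.1, 0.0, 0.05], wall_pt, wall_n))  # unchanged
print(clamp_to_wall([0.1, 0.0, 0.00], wall_pt, wall_n))  # lifted to wall
```

Guidance fixtures (attractive paths) work the same way with the sign reversed: instead of removing penetration, the commanded position is pulled toward the reference curve.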
Fig. 2. The MRS slave side and phantom.

Fig. 3. The MRS master console.
the main surgeon operating at the da Vinci® console, and the
assistant teleoperating the standard laparoscopic tools need to
be recorded.
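Recording these heterogeneous streams (video, kinematics, audio) requires aligning them on a common time base. A minimal sketch, assuming timestamped samples and nearest-timestamp matching (the rates and layout below are our assumptions, not the SARAS recording format):

```python
# Hedged sketch of stream synchronization: high-rate kinematics samples
# are matched to each video frame by nearest timestamp via binary search.
import bisect

def align_to_frames(frame_times, kin_times):
    """For each video frame timestamp, return the index of the
    kinematics sample with the closest timestamp (kin_times sorted)."""
    indices = []
    for t in frame_times:
        i = bisect.bisect_left(kin_times, t)
        if i == 0:
            indices.append(0)
        elif i == len(kin_times):
            indices.append(len(kin_times) - 1)
        else:
            before, after = kin_times[i - 1], kin_times[i]
            indices.append(i if after - t < t - before else i - 1)
    return indices

# e.g. 25 fps video against 100 Hz kinematics (timestamps in seconds)
frames = [0.00, 0.04, 0.08]
kin = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08]
print(align_to_frames(frames, kin))  # [0, 4, 8]
```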
Based on these premises, the work of re-engineering the
RARP procedure into a simplified model was carried out
in close collaboration with the urological surgical team of
Ospedale San Raffaele (Milan, Italy) and with the technical
partners of the Consortium. In the following sections, the
different steps that led to the definition of the RARP simplified
procedure are detailed (see Figure 4).
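A simplified procedure of this kind lends itself to a structured encoding: each step carries the assistant actions with their task, anatomical target, instrument and trocar. As a sketch (the field names mirror the columns of Table I, but the dataclass layout itself is our illustrative assumption, not the SARAS format):

```python
# Illustrative data model for the simplified procedure; the layout is an
# assumption, the example values come from the bladder neck incision
# step of Table I.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssistantAction:
    description: str
    task: str
    target: str
    instrument: str
    trocar: int

@dataclass
class Step:
    name: str
    actions: List[AssistantAction] = field(default_factory=list)

step = Step("Bladder neck incision", [
    AssistantAction("Upper traction / suction and cleaning blood",
                    "Traction / Suction", "Bladder / Blood",
                    "Suction-irrigator", 4),
])

# e.g. list every assistant action routed through trocar 4
print([a.description for a in step.actions if a.trocar == 4])
```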
D. RARP procedure modelling

Starting from the complete da Vinci® IS1200-based RARP procedure, a subset of surgical steps has been selected to be replicated with the SARAS MRS platform, taking into account: (i) the simplifications previously described, (ii) the feedback of the expert surgeons involved, and (iii) the results obtained from a process risk analysis applied to the present case (i.e. Healthcare Failure Mode and Effects Analysis [23]). Such analysis identifies the main risks that may impair the positive outcome of the procedure and which should be properly handled autonomously by the SARAS cognitive architecture. We assembled a team of 5 expert urological surgeons of Ospedale San Raffaele, capable of performing both the main surgeon and the assistant role. Starting from the RARP model, the surgeons identified, for each phase and (whenever applicable) task, a list of potential Failure Modes (FMs) that can occur from both the main surgeon's and the assistant's perspectives, clarifying also the corresponding Failure Causes and Effects. All the evaluated assistant's surgical actions are characterized by a Criticality Index (CI) positioned in the low-risk area, and the linked FMs all concern the level of accuracy and skills of the assistant. This can be explained by the fact that this procedure is so standardized and well known that (1) the role of the assistant is mainly of support over very specific tasks and does not have a direct impact on patient safety, and (2) the CIs associated with the main surgeon's actions are themselves in the low-medium risk area.

The proposed SARAS simplified RARP procedure consists of the list of the chosen surgical steps with a description of the linked main surgeon's and assistant's actions and tasks, the related anatomical targets, the surgical instruments used to accomplish the tasks, and the corresponding access points (trocars). All this information is currently driving the implementation of the first SARAS test-bed, but it is also meant to feed, in the near future, the SARAS artificial intelligence module so as to perform autonomously the involved surgical actions. Table I reports the SARAS simplified RARP model, with the detail of the assistant's surgical actions.

IV. SARAS PHANTOMS MODEL

To train the MRS cognitive control architecture and to demonstrate its capability to replicate the chosen benchmark procedure, the SARAS robotic system will operate on specifically developed phantom models. During the SARAS simplified RARP, the patient (i.e. the SARAS phantom model) will be positioned supine, as in real practice, and in the Trendelenburg position, which is 30° from horizontal, to facilitate exposure of the pelvic content [24]. Moreover, the phantom will replicate the condition of pneumoperitoneum, in which the patient's peritoneal cavity is insufflated with carbon dioxide to enlarge the surgical working space [25].

We took a modular approach to create reusable surgical training models with a replaceable internal part to be destructively operated upon. For the simplified RARP, the phantom consists of a pelvic-to-lower-abdomen anatomical region shell structure with a flexible synthetic skin layer. Fixed trocar positions are used to reduce positioning errors during setup and to provide robotic tool localization relative to the RARP platform (see Section III). The anatomy within the RARP surgical path and field of view has been developed as a replaceable internal module containing the synthetic tissue-mimicking (TM) material prostate target and the relevant surrounding anatomical structures: Peritoneum, Pelvic Floor Muscle, Rectum, Seminal Vesicles, Endopelvic Fascia, Bladder, Urethra, Prostate (see Figure 5). The design of the RARP Phantom focused on the accuracy of the anatomical geometry (i.e. topologies and dimensions of the organs) and related organ tissue properties (i.e. mechanical ones and tissue patterns) for the Peritoneum, Bladder, Urethra and Prostate, with a generic filler as surrounding synthetic tissue. The phantom models have been designed and produced by the Austrian Center for Medical Innovation and Technology (ACMIT, http://www.acmit.at).

Fig. 5. Schematic cross-section (upper) and real implementation (bottom) of the RARP Phantom.

V. ASSESSMENT PLAN

We will collect a participants' pool of at least 5 urological surgeons experienced in robotic-assisted MIS and capable of performing both the main surgeon and assistant role. The assessment plan of the SARAS MRS platform will encompass both technical and functional aspects:

• Technical evaluation: we validate the platform in terms of positioning accuracy, force reflection, workspace, promptness, and stability of the control architecture;
• Functional evaluation: it is meant to prove the effectiveness of the SARAS platform in performing the simplified RARP, as well as its level of usability and surgeons' satisfaction [26], [27]. Different surgeons, playing the part of the main one and the assistant, will alternate in the use
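The HFMEA-style risk analysis used above to select and characterize the surgical steps can be sketched as a simple criticality ranking. The severity/probability scales and the example failure modes below are illustrative assumptions, not the scores produced by the SARAS surgical team:

```python
# Illustrative HFMEA-style scoring: each failure mode gets a
# Criticality Index CI = severity x probability and modes are ranked
# by CI. Scales (1-4) and example modes are assumptions.

def criticality(severity, probability):
    """Criticality Index on 1-4 severity and probability scales."""
    return severity * probability

failure_modes = [
    # (description, severity 1-4, probability 1-4) -- hypothetical
    ("Insufficient traction on the bladder neck", 2, 2),
    ("Suction tip touches surrounding tissue", 2, 1),
    ("Clip misplaced on the vas deferens", 3, 1),
]

ranked = sorted(
    ((criticality(s, p), name) for name, s, p in failure_modes),
    reverse=True,
)
for ci, name in ranked:
    print(ci, name)
```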
| Step | Action | Task | Target | Instrument | Trocar |
|---|---|---|---|---|---|
| Incision of the peritoneum and entry into the retropubic space of Retzius | Upper anterior traction on the peritoneum | Holding / Traction | Peritoneum | Grasper | 5 |
| | Upper anterior traction / Suction and cleaning of blood | Traction / Suction | Peritoneum / Blood | Suction-irrigator | 4 |
| Bladder mobilization | Upper traction of the bladder / Suction and cleaning of blood | Traction / Suction | Bladder / Blood | Suction-irrigator | 4 |
| Anterior prostatic fat (AFP) dissection | Upper traction of the prostatic fat / Suction and cleaning of blood | Traction / Suction | Prostatic fat / Blood | Suction-irrigator | 4 |
| | The assistant provides the prostatic fat's traction | Holding (Traction) | Prostatic fat | Grasper | 5 |
| Bladder neck incision | Upper traction / Suction and cleaning of blood | Traction / Suction | Bladder / Blood | Suction-irrigator | 4 |
| | Move the catheter from a surgeon's grasper to the assistant's grasper | Holding | Catheter / Seminal vesicles | Grasper | 5 |
| | Catheter and seminal vesicles are collectively grasped, pulled through the open bladder neck and handed to the assistant for upper traction | Holding (Traction) | Catheter | Grasper | 5 |
| Vas deferens and arteries are exposed | Upper anterior traction on the peritoneum | Holding (Traction) | Retroprostatic tissue | Grasper | 5 |
| | Suction and cleaning of smoke and blood | Holding (Traction) | Smoke | Suction-irrigator | 4 |
| Division of vas deferens | The vas deferens are controlled using the clips | Putting hemoclip/s | Vas deferens | Automatic clip applier | 4 |
| Exposure of the seminal vesicles | The fibrovascular tissue is controlled using the clips | Putting hemoclip/s | Fibrovascular tissue | Automatic clip applier | 4 |
| | Upper traction of the seminal vesicles | Holding (Traction) | Seminal vesicles | Grasper | 5 |
| | The seminal vesicles are controlled using the clips | Putting hemoclip/s | Seminal vesicles | Automatic clip applier | 4 |
| | The seminal vesicle is grasped to help liberate the posterior avascular plane | Holding (Traction) | Seminal vesicles | Grasper | 5 |
| Urethral division | Traction / Suction and cleaning of blood | Traction / Suction | Urethra / Blood | Suction-irrigator | 4 |
| | Seminal vesicles are collectively grasped | Holding (Traction) | Seminal vesicles | Grasper | 5 |
| Vesicourethral anastomosis | The assistant passes the needle and the suture thread to the surgeon | Holding (Traction) | Needles | Grasper | 5 |
| | The assistant cuts the suture's thread | Cutting | Suture's thread | Scissors | 5 |
| | The needles can be snapped out and removed from the body | Holding (Traction) | Needles | Grasper | 5 |

TABLE I. RARP surgical actions of the assistant which the SARAS system is supposed to replicate.
of the MRS platform and will evaluate it on the basis of their experience. Particular attention will also be paid to the coordination/cooperation between the main surgeon at the da Vinci® console and the assistant surgeon teleoperating the SARAS robotic arms, and to the impact of force feedback and virtual fixtures on the surgical performance [28]. The estimation of the surgeon's cognitive workload will be one of the main outcomes of the SARAS project.

The collected experimental data will be used to train the SARAS Artificial Intelligence for the subsequent implementation of the autonomous robotic Solo-Surgery platform.

VI. CONCLUSIONS

In this paper, we presented the Multi-Robots Surgery platform developed in the SARAS project as an experimental setup to collect data for a complex surgical procedure (Robotic Assisted Radical Prostatectomy) on very sophisticated manikins (i.e. phantoms of the inflated human abdomen). This platform allows the main and the assistant surgeons to teleoperate two independent surgical robots and acquire data from several sources, such as the endoscope camera, robot kinematics and audio streams. The data acquired with this setup will be used within the SARAS project to train an Artificial Intelligence (AI) that will take the role of the assistant surgeon. For instance, action- and scene-based video annotations from the endoscope recordings will be used to construct a new machine learning dataset, capable of training the AI to understand the current surgical phase and the anatomical structures. Audio recordings will be analyzed and semantically grouped in order to develop a new glossary to be exploited for an easy and user-friendly interaction with the SARAS platform. All the data (with annotations) will be made available to the scientific community for research purposes.

ACKNOWLEDGEMENTS

We thank all the SARAS partners for their contribution to the design and implementation of the Multi-Robots Surgery platform.
REFERENCES

[1] R. H. Taylor, A. Menciassi, G. Fichtinger, P. Fiorini, and P. Dario, "Medical robotics and computer-integrated surgery," in Springer Handbook of Robotics. Springer, 2016, pp. 1657–1684.
[2] G. S. Guthart and J. K. Salisbury, "The Intuitive telesurgery system: overview and application," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), vol. 1, 2000, pp. 618–621.
[3] A. Üneri, M. A. Balicki, J. Handa, P. Gehlbach, R. H. Taylor, and I. Iordachita, "New steady-hand eye robot with micro-force sensing for vitreoretinal surgery," in 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2010, pp. 814–819.
[4] M. Jakopec, S. Harris, F. Rodriguez y Baena, P. Gomes, J. Cobb, and B. Davies, "The first clinical application of a hands-on robotic knee surgery system," Computer Aided Surgery, vol. 6, no. 6, pp. 329–339, 2001.
[5] G.-Z. Yang, J. Cambias, K. Cleary, E. Daimler, J. Drake, P. E. Dupont, N. Hata, P. Kazanzides, S. Martel, R. V. Patel et al., "Medical robotics: Regulatory, ethical, and legal considerations for increasing levels of autonomy," Science Robotics, vol. 2, no. 4, p. eaam8638, 2017.
[6] G. De Rossi, F. Setti, F. Cuzzolin, and R. Muradore, "Efficient time-interpolated convolutional network for fine-grained temporal action segmentation," submitted to CVPR, 2019.
[7] F. Despinoy, D. Bouget, G. Forestier, C. Penet, N. Zemiti, P. Poignet, and P. Jannin, "Unsupervised trajectory segmentation for surgical gesture recognition in robotic training," IEEE Transactions on Biomedical Engineering, vol. 63, no. 6, pp. 1280–1291, 2016.
[8] L. Ding and C. Xu, "TricorNet: A hybrid temporal convolutional and recurrent network for video action segmentation," arXiv preprint arXiv:1705.07818, 2017.
[9] M. J. Fard, S. Ameri, R. B. Chinnam, and R. D. Ellis, "Soft boundary approach for unsupervised gesture segmentation in robotic-assisted surgery," IEEE Robotics and Automation Letters, vol. 2, no. 1, pp. 171–178, 2017.
[10] C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager, "Temporal convolutional networks for action segmentation and detection," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[11] H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber, "A system for robotic heart surgery that learns to tie knots using recurrent neural networks," Advanced Robotics, vol. 22, no. 13–14, pp. 1521–1537, 2008.
[12] R. Muradore, P. Fiorini, G. Akgun, D. E. Barkana, M. Bonfè, F. Boriero, A. Caprara, G. De Rossi, R. Dodi, O. J. Elle et al., "Development of a cognitive robotic system for simple surgical tasks," International Journal of Advanced Robotic Systems, vol. 12, no. 4, p. 37, 2015.
[13] N. Preda, F. Ferraguti, G. De Rossi, C. Secchi, R. Muradore, P. Fiorini, and M. Bonfè, "A cognitive robot control architecture for autonomous execution of surgical tasks," Journal of Medical Robotics Research, vol. 1, no. 4, p. 1650008, 2016.
[14] Y. Gao, S. S. Vedula, C. E. Reiley, N. Ahmidi, B. Varadarajan, H. C. Lin, L. Tao, L. Zappella, B. Béjar, D. D. Yuh et al., "JHU-ISI gesture and skill assessment working set (JIGSAWS): A surgical activity dataset for human motion modeling," in MICCAI Workshop: M2CAI, vol. 3, 2014, p. 3.
[15] J. C. Hu, X. Gu, S. R. Lipsitz, M. J. Barry, A. V. D'Amico, A. C. Weinberg, and N. L. Keating, "Comparative effectiveness of minimally invasive vs open radical prostatectomy," JAMA, vol. 302, no. 14, p. 1557, 2009.
[16] J. Ren, R. V. Patel, K. A. McIsaac, G. Guiraudon, and T. M. Peters, "Dynamic 3-D virtual fixtures for minimally invasive beating heart procedures," IEEE Transactions on Medical Imaging, vol. 27, no. 8, pp. 1061–1070, 2008.
[17] G. A. Puerto-Souza, J. A. Cadeddu, and G. Mariottini, "Toward long-term and accurate augmented-reality for monocular endoscopic videos," IEEE Transactions on Biomedical Engineering, vol. 61, no. 10, pp. 2609–2620, 2014.
[18] M. Franken, S. Stramigioli, S. Misra, C. Secchi, and A. Macchelli, "Bilateral telemanipulation with time delays: A two-layer approach combining passivity and transparency," IEEE Transactions on Robotics, vol. 27, no. 4, pp. 741–756, 2011.
[19] F. Ferraguti, N. Preda, A. Manurung, M. Bonfè, O. Lambercy, R. Gassert, R. Muradore, P. Fiorini, and C. Secchi, "An energy tank-based interactive control architecture for autonomous and teleoperated robotic surgery," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1073–1088, 2015.
[20] M. Minelli, F. Ferraguti, N. Piccinelli, R. Muradore, and C. Secchi, "An energy-shared two-layer approach for multi-master-multi-slave bilateral teleoperation systems," submitted to ICRA, 2019.
[21] C. C. Abbou, A. Hoznek, L. Salomon, L. E. Olsson, A. Lobontiu, F. Saint, A. Cicco, P. Antiphon, and D. Chopin, "Laparoscopic radical prostatectomy with a remote controlled robot," Journal of Urology, vol. 197, no. 2, pp. S210–S212, 2017.
[22] G. H. Ballantyne and F. Moll, "The da Vinci telerobotic surgical system: The virtual operative field and telepresence surgery," Surgical Clinics of North America, vol. 83, no. 6, pp. 1293–1304, 2003.
[23] M. Scorsetti, C. Signori, P. Lattuada, G. Urso, M. Bignardi, P. Navarria, S. Castiglioni, P. Mancosu, and P. Trucco, "Applying failure mode effects and criticality analysis in radiotherapy: Lessons learned and perspectives of enhancement," Radiotherapy and Oncology, vol. 94, no. 3, pp. 367–374, 2010.
[24] S. Van Appledorn, D. Bouchier-Hayes, D. Agarwal, and A. J. Costello, "Robotic laparoscopic radical prostatectomy: Setup and procedural techniques after 150 cases," Urology, vol. 67, no. 2, pp. 364–367, 2006.
[25] A. Cestari, M. Ferrari, M. Zanoni, M. Sangalli, M. Ghezzi, F. Fabbri, F. Sozzi, and P. Rigatti, "Side docking of the da Vinci robotic system for radical prostatectomy: advantages over traditional docking," Journal of Robotic Surgery, vol. 9, no. 3, pp. 243–247, 2015.
[26] J. D. Doyle, E. M. Webber, and R. S. Sidhu, "A universal global rating scale for the evaluation of technical skills in the operating room," American Journal of Surgery, vol. 193, no. 5, pp. 551–555, 2007.
[27] A. Q. Ereso, P. Garcia, E. Tseng, M. M. Dua, G. P. Victorino, and T. S. Guy, "Usability of robotic platforms for remote surgical teleproctoring," Telemedicine and e-Health, vol. 15, no. 5, pp. 445–453, 2009.
[28] A. Marttos, F. M. Kuchkarian, E. Palaios, D. Rojas, P. Abreu-Reis, and C. Schulman, "Surgical telepresence: The usability of a robotic communication platform," World Journal of Emergency Surgery, vol. 7, no. Suppl 1, p. S11, 2012.