IJGII International Journal of Gastrointestinal Intervention

pISSN 2636-0004 eISSN 2636-0012

Review Article

Int J Gastrointest Interv 2019; 8(1): 20-25

Published online January 31, 2019 https://doi.org/10.18528/ijgii180043

Copyright © International Journal of Gastrointestinal Intervention.

From image-guided surgery to surgeOmics: Facing the era of intelligent automation and robotics

Juan Manuel Verde1,2,* , Mariano E. Giménez1,2,3,4

1General and Minimally Invasive Surgery, University of Buenos Aires, Buenos Aires, Argentina
2DAICIM Foundation, Buenos Aires, Argentina
3Percutaneous Surgery, Institute for Advanced Studies, University of Strasbourg, Strasbourg, France
4Percutaneous Surgery, IHU-IRCAD, Strasbourg, France

Correspondence to: *General and Minimally Invasive Surgery, University of Buenos Aires and DAICIM Foundation, Calle Viamonte 430, Buenos Aires 1053, Argentina.
E-mail address: jverde.md@gmail.com (J.M. Verde). ORCID: https://orcid.org/0000-0002-9127-8467

Received: November 2, 2018; Revised: December 11, 2018; Accepted: December 11, 2018

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Digital images do not represent anatomy; they are merely a reliable translation from a full-colored, three-dimensional physical world to a grayscale, two-dimensional image domain. As a consequence of the disruption of imaging technologies in the medical field, a vertiginous “race to see” has pursued ever higher definition. This process came close to, and even exceeded, the resolution of the human naked eye and the human brain’s ability to decode grayscale, two-dimensional pictures. In the diagnostic field, the last decade consequently brought important technological advances addressing this issue, and computer-aided detection and diagnosis emerged to complement and enhance the radiologist’s workflow. In this new era, methodologies designed to extract more information from images are a must. Radiomics addressed this need by using images as datasets, focusing on the region of interest, extracting features, allowing reproducibility, and finally introducing radiology into the quantitative sciences. Technological disruption in medical imaging also gained momentum in the therapeutic arena, empowering a group of audacious surgeons to perform less invasive procedures with more confidence and precision, ultimately launching and shaping the discipline of image-guided surgery (IGS). Here, however, there is no “use as data” counterpart, and surgeOmics is the first proposed approach. A wide range of semantic and agnostic features can be extracted from the different phases of the IGS workflow, such as entry-point coordinates, angle, distance, and target location, among the most important. In summary, we coin the neologism surgeOmics to name this approach, which holds great promise for meeting future demands. It has the potential to improve surgeon-hardware interaction, and its central hypothesis is that dedicated algorithms using images as data can provide valuable information for personalized and precision surgery in the era of robotics and intelligent automation. SurgeOmics has emerged from IGS but can be applied to a wide range of other medical problems.

Keywords: Computer-assisted, Image guided surgery, Image processing, Precision medicine, Robotics

The acceleration of technological progress has been a defining feature of recent decades; in the medical and healthcare fields it has been delayed, but not excluded. As Vinge2 anticipated in 1993, “we are on the edge of change comparable to the rise of human life on Earth, and the precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence”.

The far future is arriving sooner than expected, yet we are used to thinking in straight lines: when we imagine the progress of the next 10 years, we look back at the previous 10 as a reference. Instead of thinking linearly, we should think exponentially in order to anticipate the future more reliably, and this implies conceiving of things moving at a much faster rate.

We are not merely in an era of change but in a change of era, and we are facing the robotics, intelligent automation (IA), and artificial intelligence (AI) revolution. In the meantime, human-machine interfaces (HMI), human augmentation, and other related technologies are trending because of their potential to amalgamate and synthesize this transition.

In the fields of medicine and healthcare, the accidental discovery of x-rays by Röntgen in 1895 led to a revolution in medical imaging (MI) and transformed medicine almost overnight. This new technology allowed doctors to view the internal structures of the body without surgery or autopsy, and, as a result, the diagnostic, preventive, prognostic, and even therapeutic fields changed forever. The arrival of this new technology found a fertile environment and well-disposed professionals; collaborative networks formed quickly, MI departments were born as techno-native facilities, and radiologists became early adopters. Since then, MI has broken the inertia and become the spearhead of the innovative vanguard of the med-tech industry in medicine and healthcare.

“If I had asked people what they wanted, they would have said faster horses.”

Henry Ford, about innovation

As in other large industries, technological disruption in the MI field accelerated processes and clouded the vision of the future. Vertiginous, customer-centered competition led to efforts and resources being allocated to developing equipment that offered pictures with so much definition that they reached, and even surpassed, the resolution of the human naked eye and the human brain’s ability to understand them. To alleviate this task, more technologies came to the rescue: three-dimensional (3D) rendering, immersive technologies, and versatile software solutions.

As with innovation in the mobility industry, if Ford had focused his efforts on customers’ needs and demands, he would have sought a way to breed faster horses instead of boosting and shaping the automotive industry.

MI does not escape the information age and, paraphrasing Ford, if the focus remains on customers and users, they will probably ask for more resolution and higher-definition equipment able to provide sharper, colored pictures. The migration from the traditional conception of images as pictures to the new trend of conceiving them as datasets faces a big challenge, and an entirely new industry needs to be developed.

The past: images as pictures

Since the beginning, all efforts and resources in MI were devoted to improving vision, starting a radiologist-centered, vertiginous “race to see” in which the best equipment was the one that provided the best pictures, judged above all by their resolution (Fig. 1).

As a result, MI advanced in different ways3:

  • Hardware: MI devices evolved individually and in combination with other technologies. For example, in the last decade we moved from multi-slice to dual-energy and photon-counting computed tomography (CT) scanners and their combination with positron emission tomography (PET) and magnetic resonance imaging (MRI).

  • Agents: contrast media, radiotracers, biomarkers, and imaging agents have evolved steadily, accompanying the evolution of MI.

  • Protocols: historically, radiology has focused on qualitative features, but the spread of standardized protocols has helped its transformation into a more quantitative and reproducible science.

  • Analysis: in contrast to the traditional practice of treating images as pictures intended solely for visual interpretation, in the last decade, under the radiomics and radiogenomics disciplines, images have been treated as data, aiming at multi-dimensional analysis combined with other patient data and improving diagnostic, prognostic, and predictive accuracy.1

This evolution pushed the limits forward, delivering results that were suddenly beyond human capacity to operate on. As in the case of mobility, for example, the automotive disruption ran a “race of speed” and made faster cars available to almost everyone, cars that in turn required new technologies to bridge the gap between their speed and the human skills needed to control them. New HMIs flourished to keep them drivable, shaping and converging into brand-new spinoff technologies such as autonomous or self-driving vehicles, which aim to use sensory data to navigate without human drivers.

In summary, the vast majority of the MI advances mentioned above were deployed during that race, seeking better images: sharper, with higher definition, less blurry, with more contrast and brightness, facilitating their visualization and interpretation. Suddenly, the resulting images contained more detail than the human naked eye could see or the human brain could understand. At this point, HMI solutions came from computer science and bioinformatics, and computer-vision and computer-aided detection (CADe) or diagnosis (CADx) systems emerged.

The growing industry of CAD systems uncovered the intrusion of data-centric trends into the field of medicine, and a new conception of images as data became visible.

The present: medical images as data

Digital medical images are not just pictures and do not represent anatomy; they are, in a sense, a translation from a three-dimensional, full-colored physical world to a grayscale, two-dimensional picture or slice. Regardless of which modality is employed to see through the skin, the image domain accumulates distortions resulting from the wide range of calculations, estimations, discretizations, algorithms, and inferences applied during the process (Fig. 2). In summary, modern imaging techniques have in common that: (1) they are only a representation of the human body, (2) the resulting images contain compiled data, and (3) they are intended to display human-intelligible information (see Glossary & Lexicon section).

To become useful in MI, and to recap briefly, two major steps must be fulfilled (a minimal sketch follows the list):

  • Analogue-to-digital conversion (ADC): from the physical world to the raw-data domain. The data are discretized from a continuous stream into discrete signals (sampling), then transformed into values (quantization), and finally coded into bit sequences (coding).

  • Image reconstruction: from data to images. Using analytical or algebraic reconstruction algorithms, the acquired data are processed with computational power to create human-readable images.
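The following minimal Python sketch is illustrative only: the test signal, sample rate, bit depth, and the unfiltered back-projection used here are assumptions, not any vendor's pipeline. It simply shows where sampling, quantization, coding, and reconstruction sit in that chain.

import numpy as np
from scipy.ndimage import rotate

def adc(analogue_signal, sample_rate_hz=1000, n_bits=12, window_s=1.0):
    """Analogue-to-digital conversion: sampling -> quantization -> coding."""
    t = np.arange(0.0, window_s, 1.0 / sample_rate_hz)      # sampling grid
    sampled = analogue_signal(t)                            # sampling
    levels = 2 ** n_bits
    lo, hi = sampled.min(), sampled.max()
    quantized = np.round((sampled - lo) / (hi - lo) * (levels - 1))  # quantization
    return quantized.astype(np.uint16)                      # coding into bit words

def back_project(sinogram, angles_deg):
    """Toy unfiltered back-projection, standing in for the analytical or
    algebraic reconstruction algorithms used by real scanners."""
    n = sinogram.shape[1]
    image = np.zeros((n, n))
    for row, angle in zip(sinogram, angles_deg):
        smear = np.tile(row, (n, 1))                        # smear one projection
        image += rotate(smear, angle, reshape=False)        # rotate to its view angle
    return image / len(angles_deg)

coded = adc(lambda t: np.sin(2 * np.pi * 5.0 * t))          # e.g., a 5 Hz test signal

Each of these operations is also a place where information is rounded off or interpolated, which is exactly where the distortions discussed below enter the image domain.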

Each step involves multiple sub-steps depending on the imaging technique, protocol, equipment, and so on. Individually and together, these steps are committed to producing human-readable output, but they introduce some distortion and work against exactitude and verisimilitude. Trending immersive technologies such as augmented reality, for example, require not only algorithmic edits but also manual ones performed by different operators, resulting in further deformed images.

At this point, we face the selfish, human-centered techno-evolution paradox, in which humans design a technology to help with a purpose (in this case, to see inside the body) but ask it to do so in a human way, instead of using its abilities and skills to accomplish the task. Innovations in this field should keep in sight that forcing computers to “see like humans” through computer vision (CV), instead of using computational power to analyze the underlying data multi-dimensionally, is a necessary step but not the endpoint.

In the last decade, MI has grown exponentially due to the wider availability and advances of technology, increased demand by patients and physicians, and improvements that have lowered the threshold for its use. As a result, image-analysis tools have exploded too, and radiomics, radiogenomics, and a huge number of CAD solutions have flourished, combining AI, CV, and data science.

Envisaging and treating images as mineable, analyzable data contrasts with the traditional approach, in which images were intended solely for visual interpretation. In the current era of collaboration, robotics, and IA, methodologies to extract more information from images are a must. In the field of MI, radiomics addressed this need by using images as datasets, allowing protocolization and reproducibility, and finally moving radiology into the quantitative sciences. This discipline exploits sophisticated image-analysis tools, employing image-based features for precision diagnosis, and therefore provides a powerful instrument in modern medicine.

As a clinical example, it is now well recognized that solid tumors are not homogeneous but rather composed of multiple sub-populations of cells, with spatial and temporal variations. Radiomics intends to extract the tumoral phenotype, technically called the “radiomic signature”, composed of data related to intensity, shape, volume, size, and texture (radiomic features), and to analyze it along with extra-imaging data such as laboratory results or demographics.
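As a hedged illustration of what such features can look like, the short Python sketch below computes a few first-order stand-ins (volume, intensity statistics, and a histogram-entropy texture proxy) from a segmented region of interest; the function and argument names are hypothetical, and real radiomics pipelines typically rely on dedicated libraries rather than hand-rolled code.

import numpy as np

def roi_features(image, mask, voxel_volume_mm3=1.0):
    """First-order stand-ins for size, intensity, and texture descriptors."""
    voxels = image[mask > 0]                          # intensities inside the ROI
    counts, _ = np.histogram(voxels, bins=32)
    p = counts / counts.sum() + 1e-12                 # bin probabilities (avoid log 0)
    return {
        "volume_mm3": float(mask.sum() * voxel_volume_mm3),       # size / volume
        "mean_intensity": float(voxels.mean()),                   # intensity
        "intensity_range": float(voxels.max() - voxels.min()),
        "histogram_entropy": float(-(p * np.log2(p)).sum()),      # crude texture proxy
    }

# e.g., a synthetic 3-D volume with a spherical "lesion" mask
volume = np.random.normal(40.0, 10.0, size=(64, 64, 64))
zz, yy, xx = np.ogrid[:64, :64, :64]
lesion_mask = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 10 ** 2
print(roi_features(volume, lesion_mask, voxel_volume_mm3=0.8))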

The resulting explosion of data creates an ideal environment for machine learning (ML) and data-based sciences and has grown exponentially over the past two decades, with an even stronger trend than MI itself.4 In consequence, the role of MI is evolving from an almost exclusively diagnostic discipline to being the spearhead, with a central role in the context of personalized and precision medicine.

Raising the bet in the field of basic sciences, new trends in radiomics focus on bypassing or intercepting raw data (see the CT scan workflow as an example) before they are manipulated for viewing, and on allocating computational power to perform multidimensional analysis, a discipline growing under the name of rawDiomics.5

The future: image-guided surgery (IGS) data as information

In the theragnostic and therapeutic fields, images acquired throughout the planning phase, the surgical procedure itself, and the follow-up are usually left unused and discarded. Their use as data sources in order to obtain valuable information is surgeOmics’ starting point (Fig. 3).

Assuming that images are data in themselves, and aiming to avoid steps that may reduce their accuracy and precision merely so they can be shown to the human eye, this deeper data (raw data) can be intercepted and processed (Fig. 4; raw-surgeOmics, see rawDiomics).

The outcomes of each IGS procedure may vary depending on patient factors (demographic, morphologic, etc.), disease characteristics (local, regional, and systemic), the surgeon (expertise, training, etc.), and equipment-related conditions; all of these can be used as data sources.

The information gathered can be analyzed along with other data from a wide range of sources (demographics, the hybrid operating room [OR], disease-related data, etc.). This higher-level, multidimensional analytical approach should be based on data science and AI technologies, using computational power to obtain valuable information.

The resulting information promises to be valuable at the edge of the new era of robotics, IA, and AI.

In the theragnostic and therapeutic areas, audacious experts, supported by MI’s technological disruption (among others) and the confidence provided by high-definition images, progressively performed procedures of growing complexity, launching and shaping disciplines such as interventional radiology and percutaneous surgery.

New needs quickly arose, speeding up technological translation and aiming to close the gap between design and implementation in clinical areas. Surgeons, cardiologists, radiologists, gastroenterologists, and urologists, to mention only a few specialties, all demanded technological applications in their fields. At this point, image definition was close enough to the resolution of the human eye, and technology shifted its efforts and resources to fulfilling users’ demands for precision and accuracy when targeting objectives. Images began to be interpreted as maps, using coordinates to reach targets. Difficulties emerged regarding the interpretation of two-dimensional images, and 3D rendering and reconstruction software took the lead, using multiple slices to build rendered 3D images that could be used cartographically. Using spatial coordinates, localization gained accuracy, and minimally-invasive procedures stepped up their precision.

At this point, the race to see switched to a new race to locate, and a brand-new discipline emerged under the name of IGS. A group of visionary, daring surgeons, well funded and supported by the med-tech industry, pushed the limits of minimally-invasive surgery (MIS) even further, this time not just performing standard procedures with less invasiveness but inventing new techniques and approaches hand in hand with technological progress. A wide range of new technologies was invented and developed, most of them empowering surgeons and allowing them to perform new, less invasive techniques more safely and with greater confidence.

Navigation guides irrupted into the field of IGS, focusing not just on the target and its habitat but also on the local and regional micro-environment as well as the extra-corporeal space (macro-environment). In the case of a liver tumor ablation, for example, the location of the needle tip is not the only thing that matters; the tail matters too, because its coordinates and data can be extrapolated to perform estimations that yield valuable information for the procedure.
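A minimal sketch of that idea, with made-up coordinates and a hypothetical helper name: given tip and tail positions reported by a navigation guide, the insertion line can be extrapolated to estimate where the tip will lie after advancing a given depth.

import numpy as np

def extrapolate_tip(tip_xyz, tail_xyz, advance_mm):
    """Predict the tip position after advancing along the current insertion line."""
    tip = np.asarray(tip_xyz, dtype=float)
    tail = np.asarray(tail_xyz, dtype=float)
    direction = (tip - tail) / np.linalg.norm(tip - tail)   # unit insertion vector
    return tip + advance_mm * direction

# e.g., coordinates in mm in the navigation frame
print(extrapolate_tip(tip_xyz=(35.0, -20.0, 60.0), tail_xyz=(10.0, -42.5, 0.0), advance_mm=15.0))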

Here, however, there is no radiomics counterpart, and surgeOmics is the first proposed approach; it holds great promise for this new era. SurgeOmics has the potential to improve surgeon-hardware interaction, and its central hypothesis is that dedicated algorithms using raw, projection, or even image data can provide valuable information for personalized and precision surgery.

Indeed, there is no published discipline seeking to use the radiome to, for example, select the best entry point or the least risky route, target more accurately, or predict surgical complications, among other important potential surgical uses (surgeOmic signature).

In the surgical arena, two phenotypically identical liver tumors located in the same place in two similar patients cannot necessarily be treated in the same way. With some effort, radiome data can be transferred to create the surgeome, destined to assess, control, and precisely guide almost any kind of procedure. As in the detection, prevention, and diagnostic area, this field needs assistance to develop a certain degree of automation and to optimize processes, and this is the starting point of the proposed neologism surgeOmics.

In a nutshell

  • SurgeOmics is defined as the conception of IGS images as datasets that can be used to perform multi-dimensional data analyses, improve surgical planning, and support the execution of IGS procedures.

  • It comes from IGS but is potentially applicable to a wide range of minimally invasive and surgical disciplines.

  • Works as a meeting point between different imaging techniques (ultrasound [US], CT, MRI, PET, etc.).

  • Tools developed can help clinical work on a daily basis, and image-guided (IG) surgeons can play a pivotal role in continuously building the databases that are to be used for future spinoffs.

  • Its features have the potential to uncover characteristics of surgical procedures that are usually operator dependent.

  • Image features contemplate the micro- and macro-environment, taking into account not just the internal structures but also those of the hybrid OR, its sensors, and wearables.

  • SurgeOmics is a brand-new field and faces multiple challenges to its implementation, validation, acceptance, and scale-up in a clinical setting.

  • The raw-surgeOmics sub-discipline directs its efforts at deeper layers of data, those not intended to be understood by human operators, thereby avoiding multiple added distortions.

  • This discipline aims to contribute surgeOmic-based therapeutic support for precision theragnostics, therapeutics, and follow-up.

Definition

SurgeOmics

SurgeOmics is conceived as a biomedical informatics science dedicated to converting images acquired during IGS procedures into mineable data that can be processed multi-dimensionally alongside a wide range of other data. The information obtained promises to be valuable, capable of feeding neural networks (AI) and training IGS robotic devices and IA workflows, as well as bridging and reinforcing precision and personalized surgery. These surgeOmics features have the potential to uncover characteristics of surgical procedures that are usually operator dependent.

The neologism

We coin in this paper the neologism surgeOmics, combining “surge”, referring to surgery, with the suffix “omics”, first used in the term genomics to indicate the mapping of the human genome. The suffix was subsequently widely adopted in biology to highlight and emphasize the holistic character of research encompassing an entire view of a system. (The term is now also used in other medical research areas that generate high-dimensional data from single objects.)

Framework & workflow

The process of determining the surgeOmics signature has three main phases: surgical planning, followed by the execution phase, and finally post-procedure control and follow-up (Table 1). Each phase attends to four aspects (a minimal data-structure sketch follows this list):

  • The patient

  • The disease

  • The equipment

  • The procedure itself
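One way to picture this organization is the following minimal Python data-structure sketch; the class and field names are assumptions made for illustration and do not describe any published schema.

from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class PhaseRecord:
    """The four aspects attended to in every phase."""
    patient: Dict[str, Any] = field(default_factory=dict)
    disease: Dict[str, Any] = field(default_factory=dict)
    equipment: Dict[str, Any] = field(default_factory=dict)
    procedure: Dict[str, Any] = field(default_factory=dict)

@dataclass
class SurgeOmicSignature:
    """The three phases of the surgeOmics workflow."""
    planning: PhaseRecord = field(default_factory=PhaseRecord)
    execution: PhaseRecord = field(default_factory=PhaseRecord)
    follow_up: PhaseRecord = field(default_factory=PhaseRecord)

signature = SurgeOmicSignature()
signature.planning.procedure["entry_point_mm"] = (10.0, -42.5, 0.0)   # illustrative values
signature.execution.patient["radiation_dose_mGy"] = 120.0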

Surgical planning

A wide range of patient data can be gathered and included in the multi-dimensional analysis; among the most important are demographic (age, sex, ethnicity, etc.), morphological (height, weight, etc.), and biochemical data (laboratory results).

Related to the disease, semantic and agnostic features (see the Features section) can be extracted in this step, focusing on the tumor (disease-centered) and also on its habitat, aiming to examine important local and regional structures or anomalies that can provide valuable information and modify therapeutic decisions. The planning-phase surgeOme can then be gathered, defined as the complete collection of features extracted from all available IGS images.

Regarding the surgical procedure, categorical, discrete, and continuous variables are attended to, and all kinds of metrics (entry-point, target-contact, and target-center spatial coordinates; distance to contact and distance to target center) are calculated depending on the procedure to be executed.
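As a sketch of how such metrics can be derived, the snippet below computes the distance to the target and the insertion (attack) angle from entry-point and target coordinates; the helper name, coordinate values, and the reference axis chosen for the angle are assumptions for illustration.

import numpy as np

def trajectory_metrics(entry_xyz, target_xyz, reference_axis=(0.0, 0.0, 1.0)):
    """Distance to target and attack angle relative to a chosen reference axis."""
    entry = np.asarray(entry_xyz, dtype=float)
    target = np.asarray(target_xyz, dtype=float)
    vector = target - entry
    distance_mm = float(np.linalg.norm(vector))                    # distance to target
    ref = np.asarray(reference_axis, dtype=float)
    cos_a = vector.dot(ref) / (np.linalg.norm(vector) * np.linalg.norm(ref))
    attack_angle_deg = float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return {"distance_mm": distance_mm, "attack_angle_deg": attack_angle_deg}

# e.g., entry point on the skin and target-center coordinates, both in mm
print(trajectory_metrics(entry_xyz=(10.0, -42.5, 0.0), target_xyz=(35.0, -20.0, 60.0)))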

Imaging modalities (US, CT, MRI), minimally-invasive techniques (laparoscopy, endoscopy, percutaneous, or combinations), ablation devices (radiofrequency, microwave, irreversible electroporation), navigation guides (optical, electromagnetic), specific instruments, and the characteristics of the hybrid operating room are among the most important equipment-related variables to be collected in this planning phase.

Execution

During this stage, more features are added to the patient’s surgeOmics signature; in summary:

  • Patient behavior data during the procedure (vital signs, radiation dose, procedure time, bleeding, respiration curve, lab tests, etc.).

  • Data collected from:

    ○ Intelligent ORs, with cameras registering the surgeon’s movements.

    ○ The surgeon’s wearables and immersive technologies.

    ○ The equipment and instruments: imaging techniques, endoscopic or laparoscopic video feeds, navigation-guide data streams, ablation or energy devices, etc.

Post-procedure & follow-up

The same framework used in the first two phases can be employed at this point, collecting all data points together or only those required by the type of procedure.

Features

The key to surgeOmics is the extraction of multi-dimensional data to quantitatively describe attributes of the three IGS phases; these are known as surgeOmics features. Semantic features are those commonly used in the radiology, interventional radiology, percutaneous surgery, endoscopy, or laparoscopy lexicons to describe characteristics, while agnostic features attempt to capture those characteristics through quantitative descriptors.
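A toy example of the distinction, with entirely made-up values: the semantic description uses lexicon terms a human assigns, while the agnostic description consists of quantitative descriptors computed from the same region.

# semantic: human-assigned lexicon terms for one hypothetical liver lesion
lesion_semantic = {
    "location": "liver segment VIII",
    "margin": "ill-defined",
    "enhancement": "hypervascular",
}

# agnostic: quantitative descriptors for the same lesion (illustrative values,
# e.g., as returned by the roi_features() sketch earlier in this article)
lesion_agnostic = {
    "volume_mm3": 4180.0,
    "mean_intensity_HU": 63.2,
    "histogram_entropy": 4.7,
}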

Translational potential

The surgeOmics field is a completely new science, emerging from the basic sciences but with the potential to be translated quickly to the clinical arena. Some of its translational potential is summarized below:

  • Use images from multiple IGS imaging techniques and process them together in order to build the surgeOme.

  • Design computer-aided planning (CAP) interfaces to calculate valuable coordinates, angles, and distances to assist IG surgeons before and during the execution phase.

  • Use high-dimensional data to calculate difficulty, risks, and complication rates, and adjust therapies accordingly (see the sketch after this list).

  • Uncover, through data analysis, the tips, clues, and details that are usually operator-dependent.

  • Assist professionals with every repetitive task along standard IG surgical workflows.

  • Use data coming from deeper layers (projection data [pd], raw data), “unintelligible” to humans, to enhance the localization and precision of IGS therapies (pd-surgeOmics/raw-surgeOmics).

  • Empower the micro and macro habitats or environments (local, regional, and external) as contributing factors to assess IGS therapies.
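The risk-calculation point above can be illustrated with a deliberately naive sketch: a logistic-regression model mapping a handful of assumed surgeOmic features to a recorded complication label. The feature names, toy data, and model choice are all assumptions, not a validated predictor.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# columns: distance_to_target_mm, attack_angle_deg, lesion_volume_mm3, hybrid_OR_complexity
X = np.array([
    [62.0, 18.0, 3400.0, 2],
    [95.0, 41.0, 8800.0, 3],
    [48.0, 12.0, 1500.0, 1],
    [110.0, 55.0, 12000.0, 3],
])
y = np.array([0, 1, 0, 1])   # 1 = complication recorded during follow-up

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# estimated complication risk for a new, hypothetical case
risk = model.predict_proba(np.array([[70.0, 25.0, 5200.0, 2]]))[0, 1]
print(f"estimated complication risk: {risk:.2f}")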

The most important end points of these efforts are, in summary:

  • Evolve IGS to the field of personalized and precision surgery.

  • Outperform the current therapies, develop new ones in a data-based way and get better outcomes overall.

  • Use the resulting big data to engage with the new age of IA, AI, and robotics.

Glossary & lexicon

Raw data: the data resulting from ADCs.
Projection data: the data obtained after applying algorithms that “reconstruct” the physical world from raw data.
Image data: data organized and compiled in the image matrix.
Radiome: the complete set of imaging features obtained for a patient from the available images.
Radiomic signature: a collection of features that holds predictive, prognostic, and diagnostic value.
SurgeOme: the complete set of IGS imaging features obtained for a patient from the available images.
SurgeOmic signature: a collection of features that holds important information for the IGS field.
Datum: a single observation about a patient.
Data: multiple observations about a patient.
Information: data that have been analyzed, suitably curated, and organized so that they have meaning.6
Fig. 1. Images as pictures workflow.
Fig. 2. Images as data workflow. CAD, computer-aided detection.
Fig. 3. Data as information workflow. IGS, image-guided surgery.
Fig. 4. Computed tomography (CT) scan workflow (left, real world; right, image domain). HU, Hounsfield unit; ADC, analogue-to-digital conversion; DICOM, digital imaging and communications in medicine.

Table 1. SurgeOmic features collection for image-guided surgery

Patient
  Surgical planning: morphological, demographic, genomic, biochemical
  Procedure execution: vitals, respiration, dose exposure, bleeding, biochemistry
  Follow-up: morphological, demographic, genomic, biochemical

Disease
  Surgical planning: shape, texture, size, wavelet, enhancement, outline, intensity
  Procedure execution: tumor data
  Follow-up: texture, size, wavelet, enhancement, outline, intensity

Equipment
  Surgical planning: imaging, ablation methods, MIS technique, hybrid OR complexity, navigation guides
  Procedure execution: projection data, DICOM data, navigation data, ablation data

Procedure
  Surgical planning: entry point coordinates, distance to target, attacking angle, target coordinates
  Procedure execution: wearables, computer vision, sensors, time elapsed

DICOM, digital imaging and communications in medicine; MIS, minimally invasive surgery; OR, operating room.

  1. Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures, they are data. Radiology. 2016;278:563-77.
  2. Vinge V. Technological singularity. Paper presented at: VISION-21 Symposium, sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute; March 30-31, 1993; Cleveland, OH.
  3. Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RG, Granton P, et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer. 2012;48:441-6.
  4. Wang G. A perspective on deep imaging. IEEE Access. 2016;4:8914-24.
  5. Kalra M, Wang G, Orton CG. Radiomics in lung cancer: its time is here. Med Phys. 2018;45:997-1000.
  6. Shortliffe EH, Cimino JJ. Biomedical informatics: computer applications in health care and biomedicine. 4th ed. London: Springer-Verlag; 2014.