Ahead of print
Artificial intelligence in cancer diagnostics and therapy: current perspectives
Anusree Majumder1, Debraj Sen2
1 Department of Pathology, Armed Forces Medical College and Command Hospital (Southern Command), Pune, Maharashtra, India
2 Department of Radiodiagnosis, Armed Forces Medical College and Command Hospital (Southern Command), Pune, Maharashtra, India
Date of Submission: 27-Apr-2020
Date of Decision: 27-Apr-2020
Date of Acceptance: 15-Dec-2020
Date of Web Publication: 19-Sep-2021
Correspondence Address: Department of Radiodiagnosis, Armed Forces Medical College and Command Hospital (Southern Command), Pune, Maharashtra, India
Source of Support: None, Conflict of Interest: None
Artificial intelligence (AI) has found its way into every sphere of human life including the field of medicine. Detection of cancer might be AI's most altruistic and convoluted challenge to date in the field of medicine.
Embedding AI into various aspects of cancer diagnostics would help with the tedious, repetitive, time-consuming task of lesion detection, remove opportunities for human error, and cut costs and time. This would be of great value in cancer screening programs. AI algorithms can extract quantitative data from digital radiology and pathology images that are imperceptible to the human eye (radiomics and pathomics). Correlating radiomics and pathomics with clinical, demographic, therapy, morbidity, and mortality profiles will lead to a greater understanding of cancers. Specific imaging phenotypes have been found to be associated with specific gene-determined molecular pathways involved in cancer pathogenesis (radiogenomics). All these developments would not only help to personalize oncologic practice but also lead to the development of new imaging biomarkers.
AI algorithms in oncoimaging and oncopathology will broadly have the following uses: cancer screening (detection of lesions), characterization and grading of tumors, and clinical decision-making and prognostication.
However, AI cannot be a foolproof panacea nor can it supplant the role of humans. It can however be a powerful and useful complement to human insights and deeper understanding. Multiple issues like standardization, validity, ethics, privacy, finances, legal liability, training, accreditation, etc., need to be overcome before the vast potential of AI in diagnostic oncology can be fully harnessed.
Keywords: Artificial intelligence (AI), deep learning, diagnostics, machine learning, oncology, pathology, radiology
Introduction
Artificial intelligence (AI) is a branch of computer science that aims to evolve intelligent machines or computer systems which can simulate various human cognitive functions like learning, reasoning, and perception. From voice-powered personal assistants such as 'Siri' and 'Alexa' to more complex applications like fuzzy-logic systems and AIoT (the combination of AI technologies with the Internet of Things), the prospects of AI seem limitless.
The healthcare sector has also been a witness to this phenomenal progress of AI. While machine learning and algorithms have found their way into diagnostics, physical AI, involving sophisticated robots ('carebots') and medical devices, is being utilized to deliver patient care as well as to perform complex surgeries. Detection of cancer might be AI's most altruistic and convoluted challenge to date in the field of medicine. Embedding AI into various aspects of cancer diagnostics will help serve larger populations, personalize oncologic practice, remove opportunities for human error, and cut costs and time. However, AI faces unique challenges. This article, though not exhaustive in scope or detail, aims to present a broad overview of AI in cancer diagnostics and the challenges that lie ahead.
Method of preparing the document
This review article was prepared after conducting an exhaustive internet search on PubMed, Web of Science, and Google Scholar in November 2020, when the final draft of this manuscript was prepared. The period from January 2009 to April 2020 was searched initially; the search was subsequently extended to November 2020. The keywords 'artificial intelligence', 'machine learning', 'deep learning', 'diagnostics', 'radiomics', 'pathomics', 'radiogenomics', 'oncology,' and 'cancer' were used in different combinations with the Boolean operators 'AND/OR' to achieve relevant and focused results. Apart from a few older landmark papers, only recent studies/articles/reports on the subject from renowned journals were referenced. About 90 articles were referenced and evaluated, and finally, a total of 70 articles/textbooks were cited. Of the 70 articles/textbooks/reports that have been cited, 57 were published after the year 2010, and 51 were published in or after the year 2015. The opinion of colleagues in the fields of oncoimaging, oncopathology, and oncology on this subject was also informally sought, and the manuscript content was finalized. Simple flowcharts, diagrams, and tables were also prepared for ease of understanding.
A brief history of AI
AI is based on the assumption that human thinking can be simulated through mechanical or electronic manipulation of symbols. Historical references to AI can be traced back to automatons such as Talos of Crete and the golden robots of Hephaestus in ancient Greek mythology. The first work now recognized as AI was the model of artificial neurons developed by Warren McCulloch and Walter Pitts in 1943. In 1950, Alan Turing published his landmark research paper proposing 'The Imitation Game', which asked whether machines could think. This proposal later became the Turing Test, a foundational concept in AI for evaluating intelligence, consciousness, and ability in machines. In 1956, American scientist John McCarthy organized the Dartmouth conference, where the term 'artificial intelligence' was first adopted. He defined it as 'the science and engineering of making intelligent machines, especially intelligent computer programs'. From the invention of ELIZA, the world's first chat-bot, in 1966 to Google Deepmind's AlphaGo, AI has evolved significantly and is now part and parcel of our everyday lives.
How AI works
AI is all about imparting cognitive ability to machines. It uses computational networks (neural networks) that mimic biological nervous systems. AI may be broadly classified under two schemes: type 1, based on capability, and type 2, based on functionality [Figure 1]. Artificial narrow intelligence (ANI) represents all the existing AI that has been created to date. These machines can perform only a single or limited number of tasks autonomously, displaying human-like capability within that narrow domain, but cannot replicate our multi-functional abilities.
|Figure 1: Types of artificial intelligence (AI). Type 1 is based on capability and type 2 is based on functionality|
To perform these tasks, AI uses various algorithms, which are sequences of steps to be followed by computers in calculations or other problem-solving operations. While a traditional algorithm is composed of rigid, preset, explicitly programmed instructions that are executed each time the computer encounters a trigger, AI can modify and create new algorithms in response to learned inputs without human intervention. Instead of relying solely on inputs it was designed to recognize, the system acquires the ability to evolve, adapt, and grow based on new data sets. Machine learning (ML), a subset of AI, focuses on the development of such algorithms, which process input data and use statistical analysis to detect patterns and draw inferences without being explicitly programmed. Structured data sets are used to train the machine, which then applies the discovered patterns to solve a problem when new data is fed in. However, if the result is incorrect, there is a need to 'teach' the machine. ML algorithms can be classified into three broad categories: supervised, unsupervised, and reinforcement learning [Figure 2]. This technique is frequently used in many services that offer automated recommendations to users, like on-demand music streaming services. Deep learning (DL) is an evolution of ML that uses a hierarchical arrangement of artificial neural networks (ANNs) to progressively extract higher-level features from raw input. The number of intervening layers can be up to 1000, inspiring the term 'deep' learning [Figure 3]. Unlike traditional programs that analyze data linearly, the hierarchical functioning of DL enables machines to process unstructured data in a non-linear fashion without human intervention. Each layer receives data from the layer below and passes its output to the layer above, and so on. Much like the human brain, the machine ultimately learns to recognize patterns on its own, can self-correct, and can make intelligent decisions.
|Figure 3: Simple pictographic representation of (a) machine learning and (b) deep learning. The first layer (input) represents the observed values or data fed into the system. The last layer (output) produces a value, information, or class prediction. The intervening layers between the input and output layers are called hidden layers, since they do not correspond to observable data. The tiered structure of the neural networks allows them to produce much more complex output data. The number of intervening layers between 'input' and 'output' is much higher in 'deep learning'|
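The supervised-learning idea outlined above can be sketched in a few lines of Python. The sketch below (not drawn from any cited study; the 'size' and 'irregularity' features and all data are purely hypothetical) trains a single artificial neuron, the building block of an ANN, by gradient descent on a tiny synthetic 'lesion' dataset:

```python
import math

def train_neuron(samples, labels, epochs=2000, lr=0.5):
    """Train one artificial neuron (logistic unit) with gradient descent.

    samples: list of feature vectors; labels: 0/1 class labels.
    Returns the learned weights and bias.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Hypothetical training set: lesions described by (size, irregularity)
# scores, labelled 1 (suspicious) or 0 (benign). Entirely synthetic.
X = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.7)]
y = [0, 0, 1, 1]
w, b = train_neuron(X, y)
print(predict(w, b, (0.85, 0.8)))  # a new large, irregular lesion
```

A deep network simply stacks many such units into hidden layers, which is what allows the patterns to be learned hierarchically rather than from hand-set rules.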
The most popular variants of DL are the CNN (convolutional neural network) and the RNN (recurrent neural network). While the former is used for image classification, the latter can handle sequential data like text and speech. Deepmind's AlphaGo computer gaming program is a classic example of the vast possibilities that AI is capable of in today's world.
AI in cancer diagnostics
Cancer is the second leading cause of death worldwide, with an estimated 9.6 million deaths in 2018. Many of these deaths are preventable through early diagnosis and timely intervention. However, early or pre-cancerous disease is often difficult to recognize. Standard screening methods employing human analysis of scanned images, biomarkers, and genetic testing are not foolproof and may yield erroneous results, leading to delayed diagnosis and considerable morbidity and mortality [Figure 4]. This becomes more relevant as we move towards an era of personalized medicine, where medical practice is guided by genotype rather than phenotype, and populations are subclassified based on their genotype, individual susceptibility to disease, disease progression, and response to therapy. Incorporating AI into cancer diagnostics will also help serve larger populations with fewer diagnosticians, remove opportunities for human error, achieve greater safety and accuracy, and cut costs and time.
|Figure 4: As diseases progress and become more severe, late detection leads to poor outcomes. Screening programs lead to the early detection of disease and better outcomes|
AI in oncoimaging
In 1967, long before the advent of the DL era in medicine, Winsberg and coworkers introduced optical scanning techniques to detect radiographic abnormalities in mammograms using a facsimile scanner. However, it was not until the 1980s that large-scale research began towards the development of various computer-aided detection (CAD) systems in radiodiagnostics. Pioneering works by Giger, Chan, and Vyborny paved the way for the application of ML algorithms in differentiating benign from malignant lesions in various imaging modalities. In 2011, Philippe Lambin first proposed the term 'radiomics', which he defined as 'the extraction of a large number of image features from radiation images with a high-throughput approach'. From automated cancer screening to the use of image-based signatures for precision oncology, radiomics finds application in almost every aspect of oncology.
Digital images are basically patterns of numbers. AI thus helps to extract mathematical information from digital images that is imperceptible to the human eye. Large volumes of quantitative data from multi-modality imaging techniques like computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) can be correlated with clinicopathological profiles, treatment responses, and genomic-proteomic assays to develop new imaging biomarkers. While the conventional radiomics workflow (also known as hand-crafted radiomics or HCR) largely focused on image mining using a set of predefined engineered features, state-of-the-art deep learning-based radiomics (DLR) provides a more quantitative and reproducible radiological assessment based on automatic identification of complex data patterns without human intervention. These systems significantly improve the accuracy of image detection for cancer by removing human subjective biases, which may ensue from a lack of adequate experience or training, or from time constraints.
The fundamental steps of the radiomics pipeline in oncology (HCR model) include data acquisition and preprocessing, tumor segmentation, feature extraction and selection, and modeling [Figure 5]. In contrast, DLR is an end-to-end method without separate steps of feature extraction and selection, and modeling. Both models require validation with a new dataset before clinical application. The first step in radiomics is data acquisition and preprocessing, which includes image smoothing and enhancement techniques to reduce noise and artifacts in the original images. Tumor segmentation is then performed through manual, semi-automatic, or automatic techniques to delineate the regions of interest (ROI). Manual segmentation, which involves scanning dozens of slices, is time-consuming and subject to inter-observer variability. Automatic and semi-automatic techniques (such as support vector machines, or SVMs) analyze pixels/voxels and use features like intensity, gradient, and optimization of energy functions to improve an initial contour. However, tumors often have numerous morphological variations, tumor margins are often blurred by partial volume effect, and tumor intensity may be the same as that of non-tumorous tissue. Newer deep learning techniques like LungNet and U-Net are being increasingly utilized to overcome these problems. The most important step in radiomics is feature extraction, where high-throughput features are extracted from images by analysis of variations in the shape, intensity, and texture of the ROI, often to predict tumor heterogeneity and response to treatment. However, the large volume of data generated often includes many redundant features. Feature selection employs various methods like filter, wrapper, and embedded approaches to retain the most relevant data and prevent overfitting. Next, a statistical analysis of the selected data is performed to construct a prediction model for possible clinical outcomes.
The final step is assessing the stability of the extracted data and validation of the radiomic model before clinical application. The goal is to develop an independent, external, and prospectively validated model for the prediction of tumor growth, treatment response, and clinical outcome. The Image Biomarker Standardization Initiative is an international collaboration that works towards standardization in radiomics, providing reliable benchmark data sets and formulation of various guidelines for radiomic studies.
|Figure 5: Flowchart showing the fundamental steps of hand-crafted radiomics (HCR) pipeline. In contrast, deep-learning radiomics (DLR) is an end-to-end method without separate steps of feature extraction and selection and modeling. Both models however require validation with new datasets before clinical application|
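The feature-extraction step of the pipeline described above can be illustrated with a toy sketch. Real pipelines extract hundreds of shape, intensity, and texture features; this hypothetical example (the ROI values are synthetic) computes just three simple first-order intensity statistics from a segmented ROI:

```python
import math
from collections import Counter

def first_order_features(roi):
    """Compute simple first-order radiomic features from ROI pixel values.

    mean and variance summarize intensity; entropy is one crude proxy
    for intra-tumoral heterogeneity.
    """
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((v - mean) ** 2 for v in roi) / n
    counts = Counter(roi)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}

# Hypothetical segmented ROI: grey-level values of tumour pixels.
roi = [3, 3, 3, 3, 7, 7, 9, 9]
feats = first_order_features(roi)
print(feats)
```

A feature-selection step would then discard the redundant or unstable entries from such a feature vector before any statistical modeling, exactly as described in the pipeline above.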
AI algorithms in oncoimaging have broadly two important uses: cancer screening and clinical decision-making and prognostication.
Hua et al. proposed a CNN-based model (deep belief network) for automated lung nodule detection and characterization based on data accrued from digital radiographs, which tends to address feature-related and annotation-cost issues. Using faster R-CNN (region-based CNN, a CNN-based object detection framework), Ribli et al. were able to achieve higher score thresholds for detection of malignant lesions in screening mammograms compared to traditional CADs.
DL algorithms have been successfully used for automated contouring of radiosensitive organs-at-risk (OAR) and target volumes as well as for auto-planning of radiation dose distribution in patients with breast, head and neck, lung, prostate, rectal, and pancreatic cancers undergoing radiotherapy, saving time and avoiding overexposure and systemic toxicity. Imaging-based ML algorithms have also been used to predict response to neoadjuvant chemotherapy in different cancers. In a very recent study published in The Lancet by Bitencourt et al., clinical and MRI radiomic features coupled with ML were used to assess HER2 expression levels in breast cancer and predict pathological complete response (pCR) after neoadjuvant chemotherapy in HER2-overexpressing breast cancer patients.
In yet another study published in The Lancet, ML algorithms were applied to build a radiomics-based predictor of CD8 cell expression signature using extracted features from CT images to predict response to immunotherapy (anti-PD 1 or anti-PDL1 monotherapy) in advanced solid tumors. Greater than 85% accuracy has been achieved in pre-treatment prediction of nodal metastases and extra-nodal extension in head and neck cancers on CT images by a 3D DL CNN, which may help in prognostication and management. Significant progress has also been made in the field of radiogenomics. Imaging phenotypes have been found to be associated with specific molecular pathways involved in cancer pathogenesis. Using multi-parametric MRI images, CNNs have been shown to predict both isocitrate dehydrogenase (IDH) mutation and O6-methylguanine-DNA methyltransferase (MGMT) methylation status in brain tumors, which may help select patients for targeted therapy. In a study by Wu et al., 8 classical ML methods were evaluated and random forest-based radiomics models were found to have the highest predictive performance in IDH genotype prediction in patients with diffuse gliomas. Advanced models have also been evolved to predict newer biomarkers like PDL1 expression in head and neck and lung carcinomas using the radiomics signature extracted from 18-fluorodeoxyglucose PET (18FDG-PET) imaging.
Some of the notable studies on the applications of AI in oncoimaging are listed in [Table 1].
|Table 1: Notable studies on applications of artificial intelligence (AI) in oncoimaging|
AI in oncopathology
Despite the advent of non-invasive techniques like liquid biopsy, traditional histopathology still remains the gold standard in cancer diagnosis. This involves the processing of resected tissue specimens to yield hematoxylin and eosin (H&E) stained sections of representative areas, which are then examined under a microscope by the pathologist. Right from formalin fixation of tissues, grossing, paraffin embedding, and tissue sectioning to the final production of a stained slide, the entire process is complex, requiring stringent standardization procedures to eliminate any staining variability that might affect the final diagnosis. Inter-observer bias in the interpretation of the slides may also affect the final diagnosis. AI in histopathology involves the application of various image standardization and data augmentation procedures to produce an optimal virtual slide that can be subjected to further processes like human diagnostics or automated measurements [Figure 6].
|Figure 6: Flowchart showing the fundamental steps involved in the creation of an optimal virtual slide from a standard microscope histopathological slide|
The first footsteps of AI in pathology can be traced back to the early 1950s, when Mellors and Silver suggested the use of an automatic pre-screening instrument (micro-fluorometric scanner) for evaluation of cervical Pap smears. Pioneering works by Bostrom et al., Prewitt et al., Rosenfeld, and Tanaka et al. paved the way for the application of various experimental image analyzers for automated scanning of images. From simple tasks like automated differentiation of leucocytes in peripheral blood smears to the emergence of radiopathomics, the progress of AI in pathology has been phenomenal.
The technique of AI application in pathology can be summed up in the following steps. Firstly, whole-slide digital scanners are used to scan complete microscope slides and create a high-resolution digital file that can be stored, analyzed, and shared (whole-slide imaging [WSI]). Thereafter, a scale-normalization technique is applied to rectify variations in color, contrast, brightness, and scale of the digital images. This is followed by the delineation of the 'region of interest (ROI)' in each slide by combining several computer vision operations like color space conversion, pixel clustering, and quantization. A two-stage workflow is then applied for automated analysis of the final image. The first stage is the training phase, where patch-level CNNs generate thousands of heat-maps from virtual pathology images. The extracted features are then used to train an image-level ML model. In the inference stage, heat-maps are generated from the target image using pre-trained patch-level CNNs. The extracted features are then fed into the ML model to determine the final image-level result. However, the enormous density of data that needs to be processed compared with radiology remains a daunting task. For example, the largest radiological data sets are high-resolution chest CT scans, which comprise about 134 million voxels. In comparison, a single prostate biopsy yields around 4 billion pixels of data. Also, building such a large database of annotated high-quality data for pre-training the CNNs is laborious and expensive, and often beyond the purview of smaller establishments. The use of well-defined data sets like ImageNet and TCGA (The Cancer Genome Atlas Program) coupled with data augmentation techniques can provide a cost-effective solution to this problem and increase the footprint of AI in daily diagnostic practices.
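The tiling that feeds the patch-level CNNs in the workflow above can be sketched in a few lines. This toy example uses a 4 x 4 'slide'; a real WSI runs to billions of pixels and the tile size, here 2, would typically be hundreds of pixels:

```python
def tile_slide(slide, tile_size):
    """Split a whole-slide image (2D grid of pixel values) into square tiles.

    Patch-level models are trained on such tiles rather than on the
    full multi-gigapixel slide, which would not fit in memory.
    """
    tiles = []
    h, w = len(slide), len(slide[0])
    for r in range(0, h - tile_size + 1, tile_size):
        for c in range(0, w - tile_size + 1, tile_size):
            # Keep each tile with its (row, col) offset so results can
            # later be mapped back onto the slide as a heat-map.
            tile = [row[c:c + tile_size] for row in slide[r:r + tile_size]]
            tiles.append(((r, c), tile))
    return tiles

# Toy 4x4 'slide' split into four 2x2 tiles.
slide = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 4, 4],
    [3, 3, 4, 4],
]
tiles = tile_slide(slide, 2)
print(len(tiles))  # four tiles, each keyed by its top-left offset
```

Recording each tile's offset is what makes the heat-map stage possible: per-tile scores can be painted back at their original positions on the virtual slide.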
Analogous to radiomics, the term 'pathomics' implies the application of AI to images in pathology. It involves the application of WSI to generate quantitative features invisible to the naked eye and their correlation with the divergent phenotypic characteristics of tissue specimens. A big challenge faced by oncologists is that the same cancer can have varying clinical outcomes in different groups of patients. Also, patients exhibit different sensitivities to various cytotoxic drugs. These can be attributed to inter- and intratumoral heterogeneity of cancer tissues arising from variable genetic composition, epigenetic changes, and a heterogeneous tumor microenvironment. How a tumor behaves is ultimately determined by the complex interplay of various molecular pathways arising from a diverse population of cancer cells with varied mutations. A pathologist cannot examine or characterize every cell in a tissue section and usually focuses on the most aggressive tumor component to determine the final phenotypic characteristics. In contrast, AI algorithms divide WSIs into multitudes of small tiles, each as small as 50 pixels. Each of these includes a variable number of tumor cells, epithelial and lymphovascular structures, adipose tissue, immune cell infiltrates, mitotic structures, stromal connective tissue, and necrosis. Each tile is analyzed using a 4-step algorithm (detection-segmentation-labeling-classification) and the results are combined to determine the final analysis for the entire WSI. Thus, for a highly aggressive tumor, AI can be applied to obtain quantitative data about the phenotypic features of specific tumor components, which may provide valuable insight into the biological behavior of the tumor.
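Combining per-tile results into a slide-level analysis, as described above, can take many forms. The sketch below assumes each tile has already been classified by a patch-level model, and applies one simple (hypothetical) aggregation rule; real systems use more elaborate schemes, often learned:

```python
from collections import Counter

def slide_level_call(tile_labels):
    """Aggregate per-tile class labels into a slide-level result.

    Uses an 'any malignant tile flags the slide' rule and also reports
    the malignant fraction, a crude burden estimate. This is only one
    of many possible aggregation strategies.
    """
    counts = Counter(tile_labels)
    malignant_fraction = counts["malignant"] / len(tile_labels)
    verdict = "malignant" if counts["malignant"] > 0 else "benign"
    return verdict, malignant_fraction

# Hypothetical per-tile output from a patch-level classifier.
labels = ["benign"] * 7 + ["malignant"] * 1
print(slide_level_call(labels))
```

The choice of aggregation rule matters clinically: an 'any positive tile' rule maximizes sensitivity for small foci such as micrometastases, while threshold- or fraction-based rules trade sensitivity for specificity.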
AI algorithms in oncopathology have broadly two important uses: diagnosis and grading of tumors, and prognostication.
AI-based models have been successfully applied for automated Gleason scoring in prostatic carcinoma, histological sub-classification of lung and ovarian cancer, and grading of gliomas. Using raw input data from virtual images of H&E-stained slides of lung cancer biopsies, Coudray et al. trained a CNN model to accurately predict six different mutations, including STK11, EGFR, KRAS, FAT1, and SETBP1, from analysis of specific histologic patterns associated with various molecular subtypes of lung adenocarcinoma. AI can be used for intraoperative margin assessment from frozen-section optical coherence tomography (OCT) images. In a study by Nguyen et al., the sensitivity of OCT-based breast cancer surgical margin detection was 100% compared with data based on pathologic methods.
WSIs from various data sets are being utilized to train patch-level CNN models for micrometastases and mitotic counts. A significant agreement has been achieved between automated and manually performed Ki-67 labeling index (LI) in breast cancer samples using a deep learning model known as deep-SDCS (Simultaneous Detection and Cell Segmentation). In a large-scale clinical trial of estrogen-receptor-positive breast cancer by Narayanan et al., accuracies of up to 99% and 89%, respectively, were achieved on two separate test sets of Ki-67-stained breast cancer datasets comprising biopsy and whole-slide images. A recent study by Humphries et al., which applied DL for determination of PD-L1 tumor proportion score (TPS) in needle biopsies of cases of non-small cell lung carcinoma, showed a strong concordance rate of 97-98% between the algorithmic estimation of TPS and pathologist visual scores. Survival recurrent network (SRN) models have been used for survival analysis in patients with gastric cancer. This model is capable of integrating molecular data with clinicopathological information like demographic data, stage, type of surgery, Lauren classification, etc., to predict survival after surgery. DL techniques have been successfully utilized for cancer prognostication through quantification of tumor-infiltrating lymphocytes in WSIs of breast cancer samples using an antibody-supervised DL approach. Assessment of tumor percentage in tissues is a critical step before selecting samples for next-generation sequencing (NGS) studies for molecular profiling. Most tests have a critical tumor-percentage threshold below which the test may not be pertinent. AI image-analytic systems like TissueMark can be used to reduce interobserver variability in assessing tissue quality for these tests, thereby reducing the chance of false-negative results.
Some of the notable studies on the applications of AI in oncopathology are listed in [Table 2].
|Table 2: Notable studies on applications of artificial intelligence (AI) in oncopathology|
AI in translational oncology
Translational oncology involves the study of various molecular pathways involved in tumor genesis to develop novel therapeutic interventions. CNNs are being increasingly utilized in translational medicine for drug development and predicting the sensitivity of cancer cells to therapeutics using a combination of genomic and chemical cell biology (theranostics). Several models have been developed that may predict drug-target interaction strength as well as anti-cancer drug synergy. These will help physicians to decide on individualized targeted therapy based on the patient's internal biology and external milieu. ANNs have also been employed to predict peptide-MHC (major histocompatibility complex) binding which may have implications for immunotherapy development in oncology.,
Some of the notable studies on the applications of AI in translational oncology are listed in [Table 3].
|Table 3: Notable studies in artificial intelligence (AI) translational oncology|
Challenges
Despite the limitless possibilities of AI, its clinically meaningful application in everyday medical practice remains fraught with several challenges. These may be broadly divided into extrinsic (owing to the heterogeneity of the human body, human behavior, and institutions) and intrinsic (related to the technology of AI itself).
Because of the heterogeneity of normal and abnormal human anatomy, imaging findings may not always correlate with histopathology. The same disease may have different presentations and may also evolve with time. There are often different classification systems and treatment protocols for the same disease, and these may also change as medical science evolves.
Medical data sets also show significant disparity across institutions. These may be attributed to various factors like sample size, age, gender, ethnicity, incompleteness in data acquisition, and a lack of standardized clinical protocols or a reporting lexicon for data collection across institutes. Besides, technical issues like lack of standardization of imaging equipment and protocols, along with the inherent variations in staining patterns of slides that exist across laboratories, further compound the problem. Such poorly representative data sets, together with the complex nature of the neural networks, lead to biased algorithms that may produce inaccurate analyses and output, and to overfitted models that do not generalize across populations.
Training and validation of AI algorithms require large data sets across institutes, adequately representing the inter-institute variations in the training data. However, the lack of a proper data-sharing network and competition between various institutes often adversely affect data access and quality. The first step towards overcoming these obstacles is to build an open data-sharing platform involving multiple institutes. The emphasis should be on FAIR (findable, accessible, interoperable, reusable) data usage to promote the development of externally validated ML models with wider applicability across generalized population groups. As we move towards personalized medicine, research will need to be carried out in smaller subpopulations defined by genotype, individual susceptibility to disease, disease progression, and response to therapy. The results of such research would not be generalizable to the entire population. The trade-off between more generalizable robust programs with lower accuracy and subpopulation-specific programs that are less robust would have to be carefully considered.
To successfully merge AI with clinical oncology and maximize its impact, there are knowledge gaps that need to be addressed. AI algorithms are often complex and slow to run, requiring frequent reconfiguration. Currently, physicians receive little training in data science and ML, limiting their ability to understand DL mechanisms. Also, there is a general apprehension that AI may ultimately replace the radiologist and pathologist, resulting in job losses in an already competitive market. Conversely, most data scientists have little experience with oncologic workup and management, limiting their ability to identify important and suitable clinical use cases. The goal is to seamlessly blend AI into the daily diagnostic workflow and make it easily accessible to the diagnostician. Collaboration should be pursued between clinical oncologic departments and bioinformatics and data science divisions, and strategic partnerships with technology firms should be formed where appropriate.
While the job of AI would be broadly to detect, characterize, and make a preliminary diagnosis, the radiologist-pathologist would still be expected to take a holistic and comprehensive overview. Accordingly, our teaching programs would need to focus less on 'detection' skills and more on developing 'perceptual and integrative' skills. Societal and behavioral changes in terms of accreditation, jobs, and emoluments would be required.
Perhaps the greatest challenge to the adoption of AI in medicine is addressing the ethical and legal issues concerning data collection and usage. Companies need to comply with the data protection and privacy rules in force in their parent countries as well as in the countries of residence of data subjects. Informed consent of patients should be obtained before using sensitive data such as genetic data. Patients should not only be informed about the possible uses of their data; it must also be ensured that the benefits accrue to all. In addition, stringent monitoring and validation mechanisms should be in place to assess the performance of AI in different applications. Legal liability in case of malfunction also needs to be defined before AI is adopted in real-life settings.
Most high-level AI software operates as a 'black box': the software is run without any knowledge of its internal structure. Only the input and output are known to the tester; the rationale behind a particular conclusion remains opaque. For example, an AI model called 'Deep Patient', developed at Mount Sinai Hospital, New York, could predict the onset of psychiatric disorders such as schizophrenia from a patient's electronic health records, but the tool offered no clue as to how it did so. Without a proper understanding of the processes behind its predictions, doctors often face an ethical dilemma about whether to accept AI recommendations on diagnoses or treatment protocols. To ensure more transparency in AI models, methods need to be developed that allow users to review the specific characteristics of the input data that contributed to the outcome.
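The idea of reviewing which input characteristics contributed to an outcome can be illustrated with a simple perturbation technique known as occlusion sensitivity: each input feature is replaced in turn with a neutral baseline value, and the resulting drop in the model's output is taken as that feature's contribution. The sketch below is a minimal illustration only; the linear scorer and its weights are hypothetical stand-ins for a trained network.

```python
# Minimal sketch of occlusion-based sensitivity analysis, one
# perturbation approach to probing a 'black box' model.
# The model here is a hypothetical stand-in: a fixed linear scorer
# over a flat feature vector. Real use would wrap a trained network.

def model(features):
    # Hypothetical black-box scorer: weighted sum of input features.
    weights = [0.1, 0.9, 0.05, 0.7, 0.2]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_sensitivity(features, baseline=0.0):
    """Score each feature's contribution by occluding it (replacing it
    with a neutral baseline) and measuring the drop in model output."""
    reference = model(features)
    return [reference - model(features[:i] + [baseline] + features[i + 1:])
            for i in range(len(features))]

sample = [1.0, 1.0, 1.0, 1.0, 1.0]
scores = occlusion_sensitivity(sample)
# Features with the largest weights dominate the prediction,
# so they also show the largest occlusion scores.
```

In imaging applications the same principle is applied to patches of pixels rather than single features, yielding a heat map that can be overlaid on the scan for the diagnostician to review.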
Irrespective of whether the learning was supervised or unsupervised, it is also important to identify a robust 'truth' or gold standard for the validation of every AI application.
Conclusion
Though the progress has been phenomenal, the path to the practical application of AI in oncology is fraught with many obstacles. We need to understand that AI cannot be a foolproof panacea for all our problems, nor can it completely replace the role of 'humans'. It can, however, be a powerful and useful complement to the insights and deeper understanding that humans possess. The 'human' has to be kept 'in the loop' and 'at the apex' in overall control [Figure 7]. Issues of standardization, validity, finances, technology, ethics, security, legal and regulatory liabilities, training, etc., need to be gradually overcome before we can fully harness the vast potential of AI to drive cancer care into the 21st century and beyond [Figure 7].
Figure 7: Figure showing the ideal relationship between Artificial Intelligence (AI) and man. The 'human' has to be kept 'in the loop' and 'at the apex' in overall control of the system and the 'results'. Some of the challenges involved in incorporating AI in cancer diagnostics are shown within the 'stars'
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
McCorduck P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, Mass: A.K. Peters; 2004.
Cornet G. Robot companions and ethics: A pragmatic approach to ethical design. J Int Bioethique 2013;24:49-180.
Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ: Prentice-Hall; 2010.
Turing AM. Computing machinery and intelligence. Mind 1950;59:433-60.
McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag 2006;27:12. doi: https://doi.org/10.1609/aimag.v27i4.1904
Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, et al. Mastering the game of Go without human knowledge. Nature 2017;550:354-9.
Murphy KP. Machine Learning: A Probabilistic Perspective. Cambridge: The MIT Press; 2012.
Thrall JH. Trends and developments shaping the future of diagnostic medical imaging: 2015 annual oration in diagnostic radiology. Radiology 2016;279:660-6.
Winsberg F, Elkin M, Macy J, Bordaz V, Weymouth W. Detection of radiographic abnormalities in mammograms by means of optical scanning and computer analysis. Radiology 1967;89:211-5.
Vyborny CJ, Giger ML. Computer vision and artificial intelligence in mammography. AJR Am J Roentgenol 1994;162:699-708.
Chan HP, Doi K, Vyborny CJ, Lam KL, Schmidt RA. Computer-aided detection of microcalcifications in mammograms: Methodology and preliminary clinical study. Invest Radiol 1988;23:664-71.
Giger M, Doi K, MacMahon H, Metz CE, Yin F-F. Pulmonary nodules: Computer-aided detection in digital chest images. RadioGraphics 1990;10:41-51.
Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RG, Granton P, et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur J Cancer 2012;48:441-6.
Sen D, Chakrabarti R, Chatterji S, Grewal DS, Manrai K. Artificial intelligence and the radiologist: The future in the Armed Forces Medical Services. BMJ Mil Health 2020;166:254-6.
Aerts HJWL. The potential of radiomic-based phenotyping in precision medicine: A review. JAMA Oncol 2016;2:1636-42.
Zhang B, Tian J, Dong D, Gu D, Dong Y, Zhang L, et al. Radiomics features of multiparametric MRI as novel prognostic factors in advanced nasopharyngeal carcinoma. Clin Cancer Res 2017;23:4259-69.
Larue RT, Defraene G, De Ruysscher D, Lambin P, van Elmpt W. Quantitative radiomics studies for tissue characterization: A review of technology and methodological procedures. Br J Radiol 2017;90:20160665. doi: 10.1259/bjr.20160665.
Zhao B, Tan Y, Tsai WY, Qi J, Xie C, Lu L, et al. Reproducibility of radiomics for deciphering tumor phenotype with imaging. Sci Rep 2016;6:23428. doi: 10.1038/srep23428.
Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging 2016;35:1240-51.
Chae KJ, Jin GY, Ko SB, Wang Y, Zhang H, Choi EJ, et al. Deep learning for the classification of small (<2 cm) pulmonary nodules on CT imaging: A preliminary study. Acad Radiol 2020;27:e55-63.
Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, editors. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Cham: Springer; 2015. p. 234-41.
Hatt M, Vallieres M, Visvikis D, Zwanenburg A. IBSI: An international community radiomics standardization initiative. J Nucl Med 2018;59:287.
Hua KL, Hsu CH, Hidayati SC, Cheng WH, Chen YJ. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. Onco Targets Ther 2015;8:2015-22.
Ribli D, Horváth A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with deep learning. Sci Rep 2018;8:4165. doi: 10.1038/s41598-018-22437-z.
Huang Y, Li S, Yue H, Wang M, Hu Q, Wang H, et al. Impact of nominal photon energies on normal tissue sparing in knowledge-based radiotherapy treatment planning for rectal cancer patients. PLoS One 2019;14:e0213271.
Campbell WG, Miften M, Olsen L, Stumpf P, Schefter T, Goodman KA, et al. Neural network dose models for knowledge-based planning in pancreatic SBRT. Med Phys 2017;44:6148-58.
Schubert C, Waletzko O, Weiss C, Voelzke D, Toperim S, Roeser A, et al. Intercenter validation of a knowledge-based model for automated planning of volumetric modulated arc therapy for prostate cancer: The experience of the German Rapid Plan Consortium. PLoS One 2017;12:e0178034.
Wang C, Zhu X, Hong JC, Zheng D. Artificial intelligence in radiotherapy treatment planning: Present and future. Technol Cancer Res Treat 2019;18:1533033819873922. doi: 10.1177/1533033819873922.
Lustberg T, van Soest J, Gooding M, Peressutti D, Alijabar P, van der Stoep J, et al. Clinical evaluation of atlas and deep learning-based automatic contouring for lung cancer. Radiother Oncol 2018;126:312-17.
Ciardo D, Gerardi MA, Vigorito S, Morra A, Dell'acqua V, Diaz FJ, et al. Atlas-based segmentation in breast cancer radiotherapy: Evaluation of specific and generic-purpose atlases. Breast 2017;32:44-52.
Ypsilantis PP, Siddique M, Sohn HM, Davies A, Cook G, Goh V, et al. Predicting response to neoadjuvant chemotherapy with PET imaging using convolutional neural networks. PLoS One 2015;10:e0137036.
Lo Gullo R, Eskreis-Winkler S, Morris EA, Pinker K. Machine learning with multiparametric magnetic resonance imaging of the breast for early prediction of response to neoadjuvant chemotherapy. Breast 2020;49:115-22.
Tahmassebi A, Wengert GJ, Helbich TH, Bago-Horvath Z, Alaei S, Bartsch R, et al. Impact of machine learning with multiparametric magnetic resonance imaging of the breast for early prediction of response to neoadjuvant chemotherapy and survival outcomes in breast cancer patients. Invest Radiol 2019;54:110-17.
Bitencourt AGV, Gibbs P, Rossi Saccarelli C, Daimiel I, Lo Gullo R, Fox MJ, et al. MRI-based machine learning radiomics can predict HER2 expression level and pathologic response after neoadjuvant therapy in HER2 overexpressing breast cancer. EBioMedicine 2020;61:103042.
Sun R, Limkin EJ, Vakalopoulou M, Dercle L, Champiat S, Han SR, et al. A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: An imaging biomarker, retrospective multicohort study. Lancet Oncol 2018;19:1180-91.
Kann BH, Aneja S, Loganadane GV, Kelly JR, Smith SM, Decker RH, et al. Pretreatment identification of head and neck cancer nodal metastasis and extranodal extension using deep learning neural networks. Sci Rep 2018;8:14036. doi: 10.1038/s41598-018-32441-y.
Chang K, Bai HX, Zhou H, Su C, Bi WL, Agbodza E, et al. Residual convolutional neural network for determination of IDH status in low- and high-grade gliomas from MR imaging. Clin Cancer Res 2018;24:1073-81.
Wu S, Meng J, Yu Q, Li P, Fu S. Radiomics-based machine learning methods for isocitrate dehydrogenase genotype prediction of diffuse gliomas. J Cancer Res Clin Oncol 2019;145:543-50.
Chen RY, Lin YC, Shen WC, Hsieh TC, Yen KY, Chen SW, et al. Associations of tumour PD-1 ligands, immunohistochemical studies and textural features in 18F-FDG PET in squamous cell carcinoma of the head and neck. Sci Rep 2018;8:105.
Lopci E, Toschi L, Grizzi F, Rahal D, Olivari L, Castino GF, et al. Correlation of metabolic information on FDG-PET with tissue expression of immune markers in patients with non-small cell lung cancer (NSCLC) who are candidates for upfront surgery. Eur J Nucl Med Mol Imaging 2016;43:1954-61.
Mellors RC, Silver R. A micro-fluorometric scanner for the differential detection of cells: Application to exfoliative cytology. Science 1951;114:356-60.
Bostrom RC, Sawyer HS, Tolles WE. Instrumentation for automatically pre-screening cytological smears. Proc IRE 1959;47:1895-900.
Prewitt JMS, Mendelsohn ML. The analysis of cell images. Ann N Y Acad Sci 1966;128:1035-53.
Rosenfeld A. Picture processing by computer. Science 1970;169:166-7.
Tanaka N, Ikeda H, Ueno T, Mukawa A, Watanabe S, Okamoto K, et al. Automated cytologic screening system (CYBEST model 4): An integrated image cytometry system. Appl Opt 1987;26:3301-7.
Rathore S, Iftikhar MA, Gurcan MN, Mourelatos Z. Radiopathomics: Integration of radiographic and histologic characteristics for prognostication in glioblastoma. arXiv preprint arXiv:1909.07581 [eess.IV].
Madabhushi A, Lee G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med Image Anal 2016;33:170-5.
Gupta R, Kurc T, Sharma A, Almeida JS, Saltz JH. The emergence of pathomics. Curr Pathobiol Rep 2019;7:73-84.
Xu Y, Jia Z, Wang LB, Ai Y, Zhang F, Lai M, et al. Large-scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics 2017;18:281.
Cheng PF, Dummer R, Levesque MP. Data mining The Cancer Genome Atlas in the era of precision cancer medicine. Swiss Med Wkly 2015;145:w14183.
Arvaniti E, Fricker KS, Moret M, Rupp N, Hermanns T, Fankhauser C, et al. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Sci Rep 2018;8:12054.
Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyo D, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018;24:1559-67.
Wu M, Yan C, Liu H, Liu Q. Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks. Biosci Rep 2018;38:BSR20180289. doi: 10.1042/BSR20180289.
Ertosun MG, Rubin DL. Automated grading of gliomas using deep learning in deep pathology images: A modular approach with ensemble of convolutional neural networks. AMIA Annu Symp Proc 2015;2015:1899-908.
Pradipta AR, Tanei T, Morimoto K, Shimazu K, Noguchi S, Tanaka K. Emerging technologies for real-time intraoperative margin assessment in future breast-conserving surgery. Adv Sci 2020;7:1901519. doi: 10.1002/advs.201901519.
Nguyen FT, Zysk AM, Chaney EJ, Kotynek JG, Oliphant UJ, Bellafiore FJ, et al. Intraoperative evaluation of breast tumor margins with optical coherence tomography. Cancer Res 2009;69:8790-6.
Steiner DF, MacDonald R, Liu Y, Truszkowski P, Hipp JD, Gammage C, et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am J Surg Pathol 2018;42:1636-46.
Ciresan DC, Giusti A, Gambardella LM, Schmidhuber J. Mitosis detection in breast cancer histology images with deep neural networks. Med Image Comput Comput Assist Interv 2013;16:411-8.
Narayanan PL, Raza SEA, Dodson A, Gusterson B, Dowsett M, Yuan Y. DeepSDCS: Dissecting cancer proliferation heterogeneity in Ki67 digital whole slide images. arXiv preprint arXiv:1806.10850 [cs]; 2018.
Humphries MP, McQuaid S, Craig SG, Bingham V, Maxwell P, Maurya M, et al. Critical appraisal of programmed death ligand 1 reflex diagnostic testing: Current standards and future opportunities. J Thorac Oncol 2019;14:45-53.
Lee J, An JY, Choi MG, Park SH, Kim ST, Lee JH, et al. Deep learning-based survival analysis identified associations between molecular subtype and optimal adjuvant treatment of patients with gastric cancer. JCO Clin Cancer Inform 2018;2:1-14.
Saltz J, Gupta R, Hou L, Kurc T, Singh P, Nguyen V, et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep 2018;23:181-93.
Hamilton PW, Wang Y, Boyd C, James JA, Loughrey MB, Hougton JP, et al. Automated tumor analysis for molecular profiling in lung cancer. Oncotarget 2015;6:27938-52.
Menden MP, Iorio F, Garnett M, McDermott U, Benes CH, Ballester PJ, et al. Machine learning prediction of cancer cell sensitivity to drugs based on genomic and chemical properties. PLoS One 2013;8:e61318.
Han Y, Kim D. Deep convolutional neural networks for pan-specific peptide-MHC class I binding prediction. BMC Bioinformatics 2017;18:585.
Thrall JH, Li X, Li Q, Cruz C, Do S, Dreyer K, et al. Artificial intelligence and machine learning in radiology: Opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol 2018;15:504-8.
Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: Addressing ethical challenges. PLoS Med 2018;15:e1002689.
Miotto R, Li L, Kidd BA, Dudley JT. Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Sci Rep 2016;6:26094.
Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016. p. 2921-9.