By Marilynn Larkin
NEW YORK (Reuters Health) - April 30, 2019
"Tumors are continuously evolving biological systems, and medical imaging is uniquely positioned to monitor changes throughout treatment," note Dr. Hugo Aerts of the Dana-Farber Cancer Institute in Boston and colleagues. "However, quantitative assessment through manual measurements is tedious, time-consuming, and prone to interoperator variability, as visual evaluation can be nonobjective and biased."
"By using automated artificial intelligence (AI) technologies, the application of our model requires minimal human input," Dr. Aerts told Reuters Health by email. "For localization of the tumors, only single-click seed points are needed, instead of time-consuming volumetric segmentations. This demonstrates the ease of incorporating a large number of scans at several time-points into our quantification process."
The deep learning network "also can handle missing time-points, which often occurs in the clinic," he noted. "In general, AI-based imaging biomarkers can easily be integrated in clinical workflows as they can be applied noninvasively, repeatedly, at low cost, and with minimal human input."
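The tolerance for missing time-points can be pictured in a few lines: rather than requiring every scheduled scan, the model consumes whatever is available. The pooling function below is a hypothetical stand-in for the recurrent network the authors describe; the function name and the simple averaging step are illustrative assumptions, not the study's code.

```python
# Hypothetical sketch: pooling per-scan image features across the
# time-points that are actually present, skipping missing scans.
# A stand-in for the recurrent network described in the article.

def pool_timepoints(feature_series):
    """Average feature vectors over available time-points.

    feature_series: one slot per scheduled scan (pretreatment, 1, 3,
    and 6 months), each a feature vector or None. None marks a missing
    time-point and is simply skipped.
    """
    present = [f for f in feature_series if f is not None]
    if not present:
        raise ValueError("at least one scan is required")
    n = len(present)
    dim = len(present[0])
    return [sum(f[i] for f in present) / n for i in range(dim)]
```

For example, a patient missing the one- and six-month scans still yields a valid pooled representation from the two remaining scans.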
Dr. Aerts and colleagues started from a neural network pretrained on ImageNet, a large database of labeled images used to teach models to identify objects from their most relevant features, and trained their models using serial CT scans of 179 patients with stage 3 NSCLC who had been treated with chemoradiation. Up to four images per patient, taken before treatment and at one, three and six months post-treatment, were included in the training set, for a total of 581 images.
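The single-click seed points Dr. Aerts mentions can be pictured as one clicked coordinate defining a fixed-size patch around the tumor, which is then fed to the pretrained network in place of a hand-drawn volumetric segmentation. The sketch below shows only that cropping step, with an assumed patch size and boundary clamping that are not taken from the paper.

```python
# Illustrative sketch of seed-point localization: crop a fixed patch
# around a single clicked coordinate, clamping at the image border.
# Patch size and clamping behavior are assumptions for illustration.

def crop_patch(image, seed, size=4):
    """Crop a (2*size) x (2*size) patch centered on the seed point.

    image: 2-D list of pixel values; seed: (row, col) click location.
    The patch is shifted inward when the seed is near the border, so
    the returned patch always has the full dimensions.
    """
    rows, cols = len(image), len(image[0])
    r0 = max(0, min(seed[0] - size, rows - 2 * size))
    c0 = max(0, min(seed[1] - size, cols - 2 * size))
    return [row[c0:c0 + 2 * size] for row in image[r0:r0 + 2 * size]]
```

In the real pipeline the crop would be three-dimensional and fed to the pretrained CNN; the 2-D version just makes the single-click idea concrete.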
After training, the researchers assessed the model's ability to make outcome predictions from the training set and from a validation dataset of 178 images from 89 NSCLC patients who had been treated with both chemoradiation and surgery.
Patient characteristics were similar between the groups: the median age was 63, about half were female and more than half (58.9%) had stage 3 NSCLC at diagnosis. The median radiation dose was 66 Gy for the radiation-alone cohort, which was followed for a median of 31.4 months.
The median radiation dose for the validation cohort was 54 Gy, and patients were followed for a median of 37.1 months.
The team also constructed a clinical model that included stage, gender, age, tumor grade, performance status, smoking status, and clinical tumor size. As reported online April 22 in Clinical Cancer Research, the neural network stratified patients into low- and high-mortality-risk groups, with significant differences in overall survival.
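Risk stratification of this kind is typically done by thresholding the model's output score; a median split is one common convention. The cutoff below is an assumption for illustration, not the threshold used in the study.

```python
# Illustrative sketch: split patients into low- and high-risk groups
# at the median model output. The median cutoff is an assumed
# convention, not necessarily the one used in the paper.

def stratify(risk_scores):
    """Label each patient 'low' or 'high' risk by a median split."""
    cutoff = sorted(risk_scores)[len(risk_scores) // 2]
    return ["high" if s >= cutoff else "low" for s in risk_scores]
```

In practice the two resulting groups would then be compared with a survival analysis, such as a log-rank test on their overall-survival curves.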
The model's performance improved with each additional follow-up scan. For example, its accuracy for predicting two-year overall survival from pretreatment scans alone was 0.58, rising to 0.74 once all available follow-up scans were added.
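Discrimination figures like the 0.58 and 0.74 above are commonly reported as the area under the ROC curve, which can be computed directly from predicted scores and observed two-year survival labels. The pure-Python version below is for illustration only and is not the study's evaluation code.

```python
# Rank-based AUC: the probability that a randomly chosen positive case
# receives a higher score than a randomly chosen negative case, with
# ties counted as half. Illustrative, O(n^2) implementation.

def auc(labels, scores):
    """labels: 0/1 outcomes; scores: model risk scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the jump from 0.58 to 0.74 with added follow-up scans is meaningful.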
In the training cohort, the model predicted risk of locoregional recurrence after input of two follow-up time-points, at around one and three months after completion of radiation therapy. Predicting progression and distant metastasis required the third follow-up, at around six months, to significantly stratify the mortality-risk groups.
The model also significantly predicted pathologic response in the validation cohort.
By contrast, overall, the clinical model "did not yield a statistically significant prediction for pathologic response," according to the authors.
"We want to validate the promising results of this study by evaluation of the AI model in prospective clinical trials involving larger cohorts of cancer patients," Dr. Aerts said. "Furthermore, we have to evaluate if the model keeps its performance in subgroups of patients - e.g., related to genetic or histologic subtypes, race, gender, age, etc."
"The model was developed specifically for evaluation of NSCLC tumors treated with (chemo)radiation," he added. "Similar studies have to be performed for other treatment and/or tumor sites to demonstrate generalizability of the deep learning methods."
Dr. Shandong Wu, Director of the Intelligent Computing for Clinical Imaging Lab and Technical Director for AI Innovations in Radiology at the University of Pittsburgh, said the approach is feasible, as his own team's work has shown.
In a 2018 paper in Clinical Cancer Research (http://bit.ly/2vrBCgT), "we have shown that applying deep learning modeling for analysis of screening mammography is feasible for the identification of features that may not be perceptible to the naked eye of radiologists in visual assessment... ultimately enhancing clinical decision-making and reducing false recall cases," he told Reuters Health by email.
"While promising," he said, "this kind of model needs to be sufficiently validated by large independent data, ideally from multiple medical centers, before it can be implemented in clinical settings."
Clin Cancer Res 2019.