Automation makes work life easier in many ways, but is it also a solution for analyzing medical images? Is a computer actually reliable enough to assist in medical decision-making? Researchers in Landshut are examining how machine learning algorithms can work more reliably and support radiologists.
Prof. Stefanie Remmele
In this interview with MEDICA-tradefair.com, Prof. Stefanie Remmele explains how algorithms can be used in radiology and how they learn to analyze images. She also describes the role they already play in today's medical technology sector.
Prof. Stefanie Remmele: The project harks back to a conversation with Prof. Andreas Lienemann, a radiologist at the Landshuter Radiologie Mühleninsel, a prominent radiology office in Landshut. He referred to various publications indicating that in some applications, machine learning algorithms are now able to classify images as reliably as physicians can. Radiologists could benefit from this in their daily work, for instance, if it were possible to automatically distinguish normal from abnormal findings.
It should be noted that to date, these types of algorithms have only been trained on image datasets in university settings. Such datasets are generally very homogeneous with respect to the clinical picture and to image parameters such as contrast and resolution. For example, in one of the aforementioned studies, an algorithm was able to distinguish data from Alzheimer's disease patients from normal findings; that is to say, only these two types of data were present in the study and used for training. What's more, the imaging protocols for such studies are fixed and remain unchanged over the course of the study to keep the data comparable. The only variation in the data is due to the clinical picture. We took this a step further by using data from the actual day-to-day work of the radiology practice, which includes all kinds of findings and protocols. In other words, there are many variations an algorithm needs to be able to handle.
20 percent of radiological images of the cranium show normal findings. An algorithm that sorts normal from abnormal findings would be a great time saver for radiologists.
Remmele: At first, we studied anatomical regions with a particularly high percentage of normal findings, both by asking the radiologists and by running targeted search queries on their radiology information system. We subsequently settled on the head. Not only is this the most common CT scan and the third most common MRI examination, but more than 20 percent of the results are also normal findings. It would thus mean significant time savings if radiologists could receive workflow support in this area. In early 2017, the radiology office exported nearly 400 datasets of head MRIs for us, and Cerner Deutschland GmbH placed the archiving and diagnostic information technology available in the radiology office at our disposal. By mid-2017, we had set up two platforms that enabled us to implement the two main methods for classifying radiology data for further research. All of this work is based on the final exams and thesis projects of my students.
Remmele: The conventional approach directly implements what a radiologist does when reading the images. That is to say, both the radiologist's eyes and the algorithm use the three-dimensional datasets to segment anatomical structures such as the ventricles, grey or white matter in the brain, or bones. These anatomical structures are described by various features including texture, volume, diameter, proportions, and symmetry factors, and the images are then compared on that basis. The objective of the first thesis was to implement the workflow for this method, enabling us to read and segment the image data as well as compute and classify the features using different algorithms.
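The feature-based pipeline described above can be sketched roughly as follows. This is a purely illustrative toy example, not the project's actual code: the features, thresholds, synthetic "scans" and the simple nearest-centroid classifier are all assumptions standing in for the unspecified segmentation and classification algorithms.

```python
import numpy as np

def extract_features(volume, threshold=0.5):
    """Hand-crafted features loosely mirroring those named in the text:
    a texture proxy, the volume of a thresholded "structure", and a
    left-right symmetry factor. All names and values are illustrative."""
    left, right = np.split(volume, 2, axis=0)
    return np.array([
        volume.std(),                        # crude texture measure
        float((volume > threshold).sum()),   # segmented structure volume
        np.abs(left - right[::-1]).mean(),   # asymmetry score
    ])

rng = np.random.default_rng(0)
# Synthetic stand-ins for normal vs. abnormal 3-D head scans
normals = [rng.normal(0.4, 0.05, (16, 16, 16)) for _ in range(20)]
abnormals = [rng.normal(0.4, 0.05, (16, 16, 16)) for _ in range(20)]
for vol in abnormals:
    vol[2:6, 2:6, 2:6] += 0.6                # add a bright "lesion" blob

X = np.array([extract_features(v) for v in normals + abnormals])
y = np.array([0] * 20 + [1] * 20)

# Minimal nearest-centroid classifier as a stand-in for the
# classification algorithms mentioned in the interview
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = float((pred == y).mean())
print(accuracy)
```

On this cleanly separable toy data the classifier does very well; the point of the interview is precisely that real practice data is far messier.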
We used a different approach in the second thesis: CNNs (convolutional neural networks), i.e. deep learning methods. This was also motivated by the classification successes such algorithms have had outside medicine. Here, image features are learned automatically by a neural network; similarities to these features, which are not necessarily anatomical, are then quantified and classified. These types of neural networks have become known most notably through the classification of everyday objects, faces or animals.
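The core building block of such a network can be illustrated with a single convolution + activation + pooling step. This minimal numpy sketch shows the forward pass only; in a real CNN the kernel weights are learned from the training data rather than drawn at random, and a deep-learning framework would be used instead.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    deep-learning frameworks)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)          # nonlinearity

def max_pool(x, size=2):
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(1)
slice_ = rng.random((8, 8))              # stand-in for one MRI slice
kernel = rng.standard_normal((3, 3))     # in a CNN this kernel is *learned*

feature_map = max_pool(relu(conv2d(slice_, kernel)))
print(feature_map.shape)  # (3, 3)
```

Stacking many such layers, each with many kernels, is what lets the network discover discriminative features on its own instead of relying on hand-crafted anatomical ones.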
In machine learning, programs train to find certain patterns in a data set. In the beginning, they still need humans to help them by feeding them data.
Remmele: Right now, we achieve an accuracy of about 75 percent, meaning 75 percent of all image datasets are assigned to the correct class. Since the summer of 2017, another thesis project has been aiming to further improve this rate, especially through better data preprocessing. There are many aspects of current research that we can adopt, but we also have to invent many details ourselves because we are working with a very heterogeneous dataset.
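For clarity, the accuracy figure is simply the fraction of datasets assigned to the correct class. The labels below are invented solely to illustrate the computation:

```python
import numpy as np

# 0 = normal finding, 1 = abnormal finding (toy labels)
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

accuracy = float((y_true == y_pred).mean())
print(accuracy)  # 0.75
```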
Remmele: We definitely have to adapt the methods to our problem to significantly improve the accuracy of our algorithm and make it suitable for everyday use. In addition, we have to decide whether we want to stick with a binary distinction between normal and abnormal findings or whether we want to output something akin to probabilities. I believe we could considerably improve the process and save radiologists a lot of time if we could not only classify data but also point out where in the dataset the algorithm discovered abnormalities.
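The difference between a binary output and "something akin to probabilities" can be shown with a logistic (sigmoid) function, a common way to turn a classifier's raw scores into values between 0 and 1. The scores here are hypothetical:

```python
import numpy as np

def sigmoid(score):
    """Squash a raw classifier score into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-score))

# Hypothetical raw scores from a binary normal/abnormal classifier
scores = np.array([-2.0, 0.1, 3.5])
probs = sigmoid(scores)              # graded "probability of abnormal"
labels = (probs >= 0.5).astype(int)  # the current hard binary decision
print(probs.round(3), labels)
```

A radiologist seeing a probability of 0.52 would treat that case very differently from one scored 0.97, which is exactly the extra information a hard binary label throws away.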
Needless to say, we would also like to implement the algorithm in the radiology information system, and we are currently looking for industry partners to accomplish this. We are also receiving additional data from a general hospital and a university hospital. This will add heterogeneity to our existing dataset, which should improve the training, among other things. The radiology office also continues to support us, which is quite remarkable because it does not have a research mandate the way a university hospital does. However, this branch of research will urgently need commitment like this in the future: if we want to develop solutions for practicing physicians, we need their data.
Remmele: Several manufacturers already provide solutions under the term "computer-aided diagnosis" that use machine learning in medical imaging and neurology. Screening tests for breast and lung cancer, for example, also apply tools that use machine learning classification algorithms. Some products already tout image segmentation with deep learning. Today, these products are primarily used in specialized centers and university hospitals, for example to monitor tumor progression.
The interview was conducted by Timo Roth and translated from German by Elena O'Meara.