In an effort to reduce errors in health professionals' analyses of diagnostic images, a team of researchers from the Department of Energy's Oak Ridge National Laboratory has improved understanding of the cognitive processes involved in image interpretation.

The work, published in the Journal of Medical Imaging, has the potential to improve health outcomes for the hundreds of thousands of American women affected by breast cancer each year. Breast cancer is the second leading cause of cancer death among women, and early detection is critical for effective treatment.

Catching the disease early requires an accurate interpretation of a patient's mammogram, and a radiologist's misinterpretation of a mammogram can have enormous consequences for a patient's future.

Analyses of mammograms

The ORNL-led team, which included Gina Tourassi, Hong-Jun Yoon, and Folami Alamudun, as well as Paige Paulus of the University of Tennessee's Department of Mechanical, Aerospace, and Biomedical Engineering, found that radiologists' analyses of mammograms were significantly influenced by context bias, that is, by the radiologists' previous diagnostic experiences.

"These findings will be critical in the future training of medical professionals to reduce errors in the interpretations of diagnostic imaging and will inform the future of human and computer interactions going forward," said Gina Tourassi, team lead and director of ORNL's Health Data Science Institute.

The research required designing an experiment that tracked the eye movements of radiologists at various skill levels to better understand the context bias involved in their interpretations of the images.

The experiment, designed by Yoon, followed the eye movements of three board-certified radiologists and seven radiology residents as they analyzed 100 mammographic studies from the University of South Florida's Digital Database for Screening Mammography.

The studies' 400 images, representing a mix of cancer, no cancer, and benign cases that mimicked cancer, were specifically selected to cover a range of cases similar to that found in a clinical setting.

The participants, who were grouped by levels of experience and had no prior knowledge of what was contained in the individual X-rays, were outfitted with a head-mounted eye-tracking device designed to record their "raw gaze data," which characterized their overall visual behavior.

The study also recorded the participants' diagnostic decisions, capturing the location of suspicious findings along with their characteristics according to the BI-RADS lexicon, the standardized reporting scheme for mammography.

By computing a measure known as the fractal dimension on each participant's scan path (the map of their eye movements) and performing a series of statistical calculations, the researchers were able to discern how the participants' eye movements differed from mammogram to mammogram.
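For readers curious how such a measure can be computed, below is a minimal sketch of one common approach, box counting, applied to a two-dimensional scan path. This is not the team's code; the function name, grid sizes, and synthetic data are illustrative assumptions.

```python
# A minimal sketch (not the authors' code): estimating the fractal dimension of
# a 2-D scan path with box counting. Gaze samples are assumed to be (x, y)
# screen coordinates; all names and parameters here are illustrative.
import numpy as np

def box_counting_dimension(points, grid_sizes=(2, 4, 8, 16, 32, 64, 128)):
    """Estimate the box-counting (fractal) dimension of a set of 2-D points."""
    pts = np.array(points, dtype=float)
    # Normalize the scan path into the unit square so grid sizes are comparable.
    pts -= pts.min(axis=0)
    span = pts.max(axis=0)
    span[span == 0] = 1.0
    pts /= span

    counts = []
    for n in grid_sizes:  # n x n grid of equally sized boxes
        # Identify which box each gaze sample falls into and count occupied boxes.
        idx = np.floor(pts * n).clip(0, n - 1).astype(int)
        counts.append(len(np.unique(idx[:, 0] * n + idx[:, 1])))

    # The slope of log(occupied boxes) vs. log(grid size) approximates the dimension.
    slope, _ = np.polyfit(np.log(grid_sizes), np.log(counts), 1)
    return slope

# Example with a synthetic random-walk scan path of 1,000 gaze samples.
rng = np.random.default_rng(seed=0)
scan_path = np.cumsum(rng.normal(size=(1000, 2)), axis=0)
print(f"Estimated fractal dimension: {box_counting_dimension(scan_path):.2f}")
```

A scan path that wanders over the whole image yields a dimension close to 2, while a tightly focused or nearly linear path yields a lower value, which is one way to summarize how thoroughly a reader searched a mammogram.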

Managing such a data-intensive task manually would have been impractical, so the researchers turned to artificial intelligence to help them efficiently and effectively make sense of the results.

Fortunately, they had access to ORNL's Titan supercomputer, one of the country's most powerful systems. With Titan, the team was able to rapidly train the deep learning models needed to analyze the large gaze datasets.
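As an illustration of the kind of deep learning model such a workflow might involve, below is a minimal sketch, not the team's actual architecture, of a small PyTorch convolutional network that classifies scan paths rendered as heatmap images by reader experience level. The input size, number of classes, and hyperparameters are assumptions made for illustration.

```python
# A minimal sketch (not the team's actual model): a small PyTorch convolutional
# network that classifies scan paths, rendered as 64x64 single-channel heatmap
# images, by reader experience level. Input shape, number of classes, and all
# hyperparameters are assumptions made for illustration.
import torch
import torch.nn as nn

class ScanPathCNN(nn.Module):
    def __init__(self, num_classes=2):  # e.g., resident vs. board-certified
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # 64x64 input -> 16x16 maps

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

# One hypothetical training step on a placeholder batch.
model = ScanPathCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

heatmaps = torch.rand(8, 1, 64, 64)      # stand-in for real scan-path heatmaps
labels = torch.randint(0, 2, (8,))       # stand-in for experience-level labels
loss = loss_fn(model(heatmaps), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

Rendering scan paths as images is only one possible representation; sequence models over raw fixation coordinates are another common choice in eye-tracking research.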