AI Uses Images and Omics to Decode Cancer

Amber Dance in The Scientist:

It’s the question on every cancer patient’s mind: How long have I got? Genomicist Michael Snyder wishes he had answers. For now, all physicians can do is lump patients with similar cancers into large groups and guess that they’ll have the same drug responses or prognoses as others in the group. But their methods of assigning people to these groups are coarse and imperfect, and often based on data collected by human eyeballs. “When pathologists read images, only 60 percent of the time do they agree,” says Snyder, director of the Center for Genomics and Personalized Medicine at Stanford University.

In 2013, he and then–graduate student Kun-Hsing Yu wondered if artificial intelligence could provide more-accurate predictions. Yu fed histology images into a machine learning algorithm, along with pathologist-determined diagnoses, training it to distinguish lung cancer from normal tissue, and two different types of lung cancer from each other. Then he fed in survival data for those slides, letting the system learn how that information correlated with the images. Finally, he fed in new slides that the model hadn’t seen before and asked the all-important longevity question.
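The article doesn’t name the algorithm or the image features, but the workflow it describes maps onto a standard supervised-learning pipeline: train on labeled slides, then evaluate on slides the model has never seen. Here is a minimal sketch in Python with scikit-learn, using synthetic stand-ins for slide-derived features; the random forest, the feature counts, and the binary longer/shorter-than-average survival framing are illustrative assumptions, not details from the study.

```python
# Minimal sketch of the workflow described above, on synthetic data.
# Assumptions (not from the article): each slide is summarized as a
# vector of quantitative image features, and survival is framed as a
# binary "longer vs. shorter than average" label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-slide image features (e.g., cell shape/texture measures).
n_slides, n_features = 500, 40
X = rng.normal(size=(n_slides, n_features))

# Stage 1 labels: 0 = normal tissue, 1 and 2 = two lung cancer subtypes.
diagnosis = rng.integers(0, 3, size=n_slides)

# Stage 2 labels: survival relative to the average for that cancer
# (1 = longer than average, 0 = shorter). Synthetic signal in feature 0.
survival = (X[:, 0] + rng.normal(scale=0.5, size=n_slides) > 0).astype(int)

# Hold out slides the model has never seen, mimicking the final test step.
X_tr, X_te, diag_tr, diag_te, surv_tr, surv_te = train_test_split(
    X, diagnosis, survival, test_size=0.2, random_state=0)

diagnosis_model = RandomForestClassifier(random_state=0).fit(X_tr, diag_tr)
survival_model = RandomForestClassifier(random_state=0).fit(X_tr, surv_tr)

print("diagnosis accuracy on unseen slides:", diagnosis_model.score(X_te, diag_te))
print("survival accuracy on unseen slides: ", survival_model.score(X_te, surv_te))
```

The essential point is the held-out test split at the end: the longevity question is only meaningful when asked of slides the model was never trained on.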

The computer could predict whether a patient would live for a shorter or longer time than the average survival for those particular cancers, something pathologists struggle to do.¹ “It worked surprisingly well,” says Yu, now an instructor at Harvard Medical School. But Snyder and Yu thought they could do more. Snyder’s lab works on -omics, too, so they decided to offer the computer not just the slides, but also tumor transcriptomes. With these data combined, the model predicted patient survival even better than with images or transcriptomes alone, with more than 80 percent accuracy.²
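To see why fusing the two data types can help, here is a hedged sketch of that comparison: fit the same classifier on image features alone, on transcriptome features alone, and on their combination, and compare cross-validated accuracy. The data are synthetic, and the simple early-fusion-by-concatenation scheme is an assumption for illustration; the study’s actual integration method may differ.

```python
# Sketch of the multimodal comparison: images vs. transcriptome vs. both.
# Feature names and dimensions are illustrative, not from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
image_feats = rng.normal(size=(n, 40))   # histology-derived features
rna_feats = rng.normal(size=(n, 100))    # transcriptome (expression) features

# Synthetic survival label depending weakly on BOTH modalities, so that
# combining them should outperform either alone, as in the reported result.
signal = image_feats[:, 0] + rna_feats[:, 0]
survival = (signal + rng.normal(scale=1.0, size=n) > 0).astype(int)

def cv_accuracy(X, y):
    """Mean 5-fold cross-validated accuracy of a logistic regression."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

print("images only:        %.2f" % cv_accuracy(image_feats, survival))
print("transcriptome only: %.2f" % cv_accuracy(rna_feats, survival))
combined = np.hstack([image_feats, rna_feats])  # early fusion by concatenation
print("combined:           %.2f" % cv_accuracy(combined, survival))
```

When each modality carries partly independent signal about outcome, the concatenated model has strictly more information to work with, which is the intuition behind the improved accuracy Snyder and Yu report.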

More here.