
Using Deep Learning on Histology Slides to Screen for Cancer Biomarkers

Can deep learning extract new insights from basic H&E stains?

by Markus Schmitt

We caught up with Heather Couture, from Pixel Scientia Labs, and she shared insights from her recent research on using deep learning on H&E slides to screen for cancer biomarkers.

A brief introduction to genomic analysis and H&E stains

Traditionally, scientists analyzing diseases such as cancer have focused on single genes (genetic testing). But today they're increasingly using genomic analysis to look at all the different genes that make up a cancer and provide a comprehensive profile of the disease.

While this genomic analysis can provide accurate, specific insights to help treat the disease by identifying the correct cancer subtype, it’s also slow and expensive. 

By contrast, analyzing H&E stains is far simpler and cheaper. This involves looking at stained tissue or cell samples under a standard microscope. H&E turns nuclei blue and cytoplasm pink so pathologists can interpret the slides more easily. But it provides different information than genomic analysis.

Heather develops processes to predict some of the genomic properties of cancer cells based on H&E images, using deep learning. This means pathologists can potentially identify the cancer biomarkers more quickly and cost effectively.

A table showing that H&E stains are fast and cheap but do not allow for genomic subtype analysis; genomic tests are slow and expensive but do provide genomic subtype analysis; and combining H&E stains with machine learning is fast, cheap, and can predict genomic subtypes.
Machine learning combined with fast, cheap H&E stains can achieve some of the benefits of more expensive genomic tests for personalized medicine.

Heather describes this process for us:

“For breast cancer analysis, we’re predicting the molecular properties of tumors. This can be used for personalized treatment. If you know the exact subtype of a tumor, it might mean that one specific drug is more likely to be effective than another.
Typically, those assessments are done with genomic data. Our team at the University of North Carolina discovered that we can predict some of these properties from the H&E histology slides – slides that have been stained and imaged under a microscope. It's not a perfect result, but it can be used as a screening process before investing the time and money it takes to run the full genomic tests.”

Machine learning can make predictions that doctors can’t

It’s not just that machines are often more efficient and work with larger datasets. Predicting molecular properties based on easily available H&E slides is not something humans can do at all. As Heather says:

“Pathologists could use a different type of staining and be able to get protein biomarkers. There are ways to do that, and this is standard in the lab – but not from H&E, which is the most common diagnostic set of stains.”

This doesn’t mean algorithms will replace humans. Heather predicts a world where humans and machines work hand in hand. But as she points out, only time will tell exactly how this partnership will work:

“It probably shouldn’t be AI versus the pathologist. It should be AI and pathologist combined. How do they work together? Is it that AI looks at the images first and finds regions the pathologist should look at? Does AI check the pathologist's answer to look for errors? Or do they do different tasks: AI could do some of the mundane things – like counting cells, looking for cells dividing, or some little things like that – and then the pathologist takes over.”

It’s a fast-growing area of research, so hopefully we’ll have answers to some of these questions soon. Heather reports a recent surge of interest:

“Paper after paper has shown variations of using machine learning for biomarker analysis. There’ve been two Nature papers in the last week alone that tested the concept on several types of cancer. And there's another one this week that's also using machine learning to predict molecular properties. So it seems like this particular area has finally and suddenly taken off.”

How deep learning can advance automated analysis of H&E slides

The idea of using machine learning to automate analysis is not new, but recent advances in deep learning have brought new life to the field. Specifically, deep learning has recently revolutionized the field of image analysis, allowing machines to identify patterns in images – including H&E stains. 

Heather describes how deep learning has enabled teams to transition from handcrafting features for machine learning (a slow, inaccurate process) to feeding full images into deep learning algorithms, which then identify the meaningful segments automatically:

“We previously used handcrafted features on skin cancer. We would segment individual cells and nuclei, characterize their shape, their texture, their arrangement, and stuff like that. We used that to try to predict whether or not different samples had a mutation – and we couldn't do it. We didn't get results that were statistically meaningful at all.
Deep learning was already coming on the scene, but the toolkits weren't there yet. There was no TensorFlow; there was no PyTorch. In my experience, going from handcrafted features to deep learning was what made the difference. I'm guessing the same is true for other teams.”
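To make the shift from hand-crafted features to deep learning concrete, here is a minimal sketch of the now-standard approach: fine-tune a CNN pretrained on ImageNet directly on labeled H&E patches, letting the network learn its own features instead of segmenting nuclei and measuring shape or texture by hand. The patch folder layout, patch size, and binary label are illustrative assumptions, not details from Heather's pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing; H&E patches are assumed to be RGB tiles
# saved under patches/train/<label>/<tile>.png (a hypothetical layout).
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("patches/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap the classifier head for a binary
# output (e.g. biomarker present vs. absent).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # one pass over the training patches
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```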

Getting AI models into clinics: Three challenges

As exciting as these advances are, there’s no shortage of challenges to overcome. Heather reminds us that it might still take years for her H&E research to be adopted in clinics:

“Using deep learning to predict molecular biomarkers is very much still in the research world. It's far from clinical use. It's something that needs a lot more work.”

And any progress is likely to be made step by step. Heather thinks the first real use case could be a simple screening tool to advise human pathologists what to do next:

“The output could be, ‘This is most likely to be subtype x, and does not have mutation y.’ From that, a pathologist could decide to get more detailed screening, or to pay for the genomic test.”

Heather outlined some of the main hurdles that still need to be overcome before deep learning on H&E slides is widely adopted in clinics.

Challenge 1: Achieving robust results with varying laboratory equipment

Robustness – the ability to perform in a predictable way across varying settings – is often a challenge for machine learning pipelines. This is especially true for H&E slides, where the equipment used to create the data is highly sensitive and not always standardized across different labs.

Any solution that uses deep learning on H&E slides in real-world settings would need to handle these differences. If staining and scanning are not consistent across laboratories, a model trained on data from one lab's equipment may fail on data from a different lab. As Heather explains:

“The algorithm would need to be robust. It would need to generalize to different scanners, different microscopes, and different staining techniques. One of the issues with H&E staining is that if it's done by a different lab, the intensity of the stains can come out slightly differently.
The stains can fade over time, and if they’re scanned by a different scanner, they come out differently. Accounting for these challenges is something that needs to be tackled, but it’s a non-trivial problem.”
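One common way to blunt this stain variation, offered here only as an illustrative sketch rather than Heather's specific method, is Reinhard-style color normalization: convert each image to the LAB color space and shift the mean and standard deviation of every channel to match a reference slide from the training lab.

```python
import numpy as np
from skimage import color

def normalize_stain(image_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    """Shift an H&E image's colour statistics toward a reference image.

    Both inputs are RGB arrays (uint8 or floats in [0, 1]); the result is
    an RGB float array in [0, 1].
    """
    img_lab = color.rgb2lab(image_rgb)
    ref_lab = color.rgb2lab(reference_rgb)

    normalized = np.empty_like(img_lab)
    for c in range(3):  # L, a, b channels
        img_mean, img_std = img_lab[..., c].mean(), img_lab[..., c].std()
        ref_mean, ref_std = ref_lab[..., c].mean(), ref_lab[..., c].std()
        # Match each channel's mean and spread to the reference slide.
        normalized[..., c] = (img_lab[..., c] - img_mean) / (img_std + 1e-8) * ref_std + ref_mean

    return color.lab2rgb(normalized)
```

More sophisticated stain-deconvolution methods exist, but even this simple statistical matching, or color augmentation during training, can reduce the lab-to-lab shifts Heather describes.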

Challenge 2: Using deep learning algorithms on small datasets

Compared to datasets in other domains Heather has worked in, such as climate data, H&E datasets are often tiny. This is a problem for machine learning, which often relies on huge datasets (hundreds of thousands of samples) to find generalizable patterns. As Heather says:

“We're dealing with smaller datasets. A few hundred patients might be considered a ‘large’ dataset in some settings. A thousand is OK, but sometimes that's all we get. That's quite different from other application areas of machine learning.”

The number of samples is often small, and each individual sample is also “wide” (it contains a lot of information), which adds an extra layer of complexity to the challenge of building reliable deep learning algorithms. Nevertheless, Heather is optimistic about overcoming this challenge: 

“You can use different algorithms. These images are large, so we have more image patches in each. For training a CNN, it's actually great. There are some things we need to do differently – like we can't put the whole image through the GPU at once. We need to break it into smaller patches and sometimes have a second-level classifier to integrate predictions or features from the smaller patches. 
But there are solutions, and it's an active research area.”
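A minimal sketch of the patch-based workflow Heather describes might look like the following: tile the slide into patches small enough for the GPU, score each patch with a trained CNN, and aggregate the patch-level probabilities into a single slide-level prediction. The simple averaging shown here stands in for the second-level classifier she mentions; the patch size, stride, and `patch_model` interface are assumptions for illustration.

```python
import torch

def extract_patches(slide: torch.Tensor, patch_size: int = 224, stride: int = 224):
    """Yield non-overlapping patches from a (C, H, W) slide tensor."""
    _, h, w = slide.shape
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            yield slide[:, top:top + patch_size, left:left + patch_size]

@torch.no_grad()
def predict_slide(slide: torch.Tensor, patch_model: torch.nn.Module) -> torch.Tensor:
    """Slide-level class probabilities from averaged patch-level probabilities.

    Averaging is the simplest aggregation; a second-level classifier trained
    on patch features or predictions is a common alternative.
    """
    patch_model.eval()
    probs = []
    for patch in extract_patches(slide):
        logits = patch_model(patch.unsqueeze(0))  # add a batch dimension
        probs.append(torch.softmax(logits, dim=1))
    return torch.cat(probs).mean(dim=0)           # average over all patches
```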

Challenge 3: Building teams with the required expertise

Working with machine learning and H&E slides requires deep and broad expertise across several areas. As a consultant, Heather often finds herself patching gaps, doing everything from advisory work to coding machine learning models. She describes the variety of work she does:

“On one end, my role might be purely as an advisor. I meet with my client once a week to help talk them through their problems, make suggestions, or point them to pre-existing libraries and research.
Or I might be doing coding for them instead: implementing a proof of concept or helping them debug something. Or on the other end, I might be leading their machine learning effort. I might be the first machine learning person on their team, getting them up and running, and helping them build a team.”

Heather specializes in helping companies build proofs of concept – stripped-down versions of the final solutions they’re aiming for. She then hands these over to teams of machine learning engineers who can turn them into productionized solutions. She says a common challenge is that companies hire people from academia who may not have worked with real-world datasets:

“They've worked with academic benchmark datasets while learning stuff as part of a course, but often they haven’t really worked with the challenges of real data. Real data is noisy. It's unbalanced. Labels can be wrong; ground truth can be wrong.”

Finding multidisciplinary experts with overlapping experience is challenging. But as Heather says, it’s necessary:

“Building these solutions takes people from many different fields: not just machine learning, but pathology, genetics, statistics, and depending on what you're doing, possibly many other fields as well.”

Are you planning to speed up your research process with machine learning?

We’ve seen and solved many of the technical and team challenges Heather discussed, and we’d love to hear about yours. Reach out directly to our CEO to discuss your ML research challenge.
