How machine learning is changing healthcare forever
A 60-year-old patient in Tokyo, Japan, had been feeling under the weather for months. She’d seen doctors and undergone a battery of tests, but a clear diagnosis wasn’t forthcoming. So the experts turned to IBM’s Watson, a supercomputer and analytical mastermind.
Within 10 minutes, Watson had used machine learning to compare the patient’s genetic changes with a database of 20 million cancer research papers and correctly identified that the patient was suffering from a rare form of leukaemia. Finally, she could get the correct treatment.
The market for artificial intelligence in healthcare and the life sciences is projected to grow by 40 percent a year, to $6.6 billion in 2021, according to estimates from Frost & Sullivan. It is predicted that there will be 25,000 petabytes of health-related data produced on an annual basis by 2020. To put that in everyday terms, that’s the equivalent of 13.3 years of HD video or 6.6 billion photos.
With that amount of data to wade through, it’s no wonder that medical professionals don’t have the time they’d like to spend on diagnoses. That’s why the Harvard Science Review has called machine learning “the future of healthcare.”
Using machine learning for diagnosis
Increasingly, tech giants are contributing to the healthcare industry. Google has Verily; Apple has its HealthKit and ResearchKit frameworks; and Amazon is considering entering the pharmacy market.
Recently, Google has also turned its machine learning experts onto the problem of diagnosis.
Even with years of medical training, there can be huge variation between doctors when it comes to making a definitive patient diagnosis. In some forms of breast cancer, specialists agree with each other only 48 percent of the time, and the figures are similar across a range of complex cancers.
According to Google, the lack of agreement among doctors “is not surprising given the massive amount of information that must be reviewed in order to make an accurate diagnosis.” To make that assessment, doctors must review X-rays, mammograms and tissue slides, all of which show the relevant biological tissue.
Each patient can have anywhere up to 15 slides, and each slide contains over 10 gigapixels when viewed at 40X magnification. That’s the same as examining 1,000 pictures of 10 megapixels each – and every single pixel matters.
When time is limited, it’s easy to see how misdiagnoses can happen. To combat this, Google turned over thousands of images to its Inception machine learning model (also known as GoogLeNet). The results, which were published in a peer-reviewed medical journal, were staggering: the machine learning-powered algorithm made an accurate diagnosis in 89 percent of cases, while the average score for doctors was 73 percent.
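To give a flavour of what this kind of approach looks like in code, here’s a minimal, hypothetical sketch of fine-tuning a pretrained InceptionV3 network to classify pathology image patches. This isn’t Google’s actual pipeline; the folder names, class labels and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only -- not Google's actual pipeline.
# Fine-tune a pretrained InceptionV3 network to classify pathology image
# patches as "tumour" vs "normal". The folder layout, class names and
# hyperparameters below are hypothetical.
import tensorflow as tf

IMG_SIZE = (299, 299)   # InceptionV3's expected input resolution
BATCH_SIZE = 32

# Hypothetical directory layout: patches/train/<class_name>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "patches/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False   # start by training only the new classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),       # probability of tumour
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Freezing the pretrained layers and training only a small classification head is a common starting point when labelled medical images are scarce.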
Using machine learning in real-time
While Google’s tests were a scientific study and didn’t include real-time diagnoses, machine learning is being used in day-to-day healthcare situations globally.
China, for example, has one of the highest lung cancer rates anywhere in the world. Each year an estimated 700,000 new cases of lung cancer emerge in the country. To deal with this, Chinese hospitals employ 80,000 radiologists who process 1.4 billion scans every year.
Each radiologist examines 17,500 scans a year (that’s 87.5 scans a day assuming the radiologist works 200 days per year). In response, Chinese hospitals have begun using machine learning technology to improve diagnosis rates.
The technology has been particularly effective at identifying suspicious lesions and nodules in the CT scans and X-rays of lung cancer patients. Diagnostic accuracy has improved by 4 percent per year, and diagnoses are made 12 percent faster than in human-only cases.
Machine learning also promises to transform healthcare in developing countries, where highly trained specialists are often lacking. Researchers at Thomas Jefferson University Hospital in Philadelphia are tackling the problem of diagnosing tuberculosis – one of the world’s 10 deadliest diseases. Tuberculosis killed 1.6 million people, mainly in the developing world, in 2016.
The trouble isn’t the equipment: tuberculosis can be diagnosed from chest imaging, X-ray machines dating from the 1980s do an adequate job for TB, and even the poorest areas have access to relatively inexpensive X-ray technology. What most developing countries lack are the radiologists and specialists needed to interpret the scans.
The Thomas Jefferson University Hospital study used machine learning to interpret the scans instead, achieving an accuracy rate of 96 percent. More work is still needed before this can become a real-world solution, says Dr. Lakhani, one of the study’s researchers. “We hope to prospectively apply this in a real world environment,” he confirms. “An AI solution using chest imaging can play a big role in tackling TB.”
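To make a figure like 96 percent concrete, here’s a toy example of how such a classifier might be evaluated: given predicted TB probabilities for a handful of held-out chest X-rays, accuracy and AUC can be computed with scikit-learn. The labels and scores below are made up purely for illustration, not taken from the study.

```python
# Toy evaluation sketch -- not the Jefferson study's data or code.
# Given a model's predicted TB probabilities for held-out chest X-rays,
# compute accuracy and area under the ROC curve. All numbers are made up.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = TB present, 0 = no TB (toy labels)
y_score = np.array([0.92, 0.08, 0.85, 0.60, 0.30, 0.12, 0.95, 0.40])  # model outputs

y_pred = (y_score >= 0.5).astype(int)         # threshold probabilities at 0.5

print("Accuracy:", accuracy_score(y_true, y_pred))   # fraction of correct calls
print("AUC:", roc_auc_score(y_true, y_score))        # threshold-independent measure
```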
Overall, machine learning in healthcare is still at an early stage. But, as in most industries, there will be a first-mover advantage. The data is there. The question is: are you making the best use of it?
Want to find out more about implementing machine learning?
If you work in a healthcare environment and want to experiment with machine learning, our team of data scientists can help. You’ve got doctors in medicine; we’ve got doctors in machine learning (literally: three of our team have PhDs in machine learning!)
We offer a range of bespoke services designed to fit your exact needs: advanced data analysis and modelling, custom algorithm creation, and machine learning implementation.
Our advanced data science consultancy can team up with you to interpret your data and make your business work more efficiently, so get in touch today to unleash your business’s potential.
Want to make sense of your data? Download our comprehensive guide: The Predictive Maintenance Cookbook.