FEATURE

Artificial intelligence reshapes the future of medicine

Robert Roe explores the use of AI technology in healthcare and medicinal research

Artificial intelligence (AI) technology has already made a big impact in many areas of computing, including enterprise and academic applications, but it is now increasingly being applied to healthcare research due to its huge potential.

Combining traditional laboratory informatics processes with AI technology could advance research and provide faster, more specialised treatments to patients. The technology can be used to approximate human cognition of complex medical data, freeing researchers to focus on other aspects of their work and accelerating research, or helping a physician reach an accurate diagnosis in less time.

AI technology is now being applied to many aspects of healthcare, from assisting patients and clinicians, cataloguing laboratory results and generating abstracts for scientific papers, to enabling precision medicine, accelerating drug discovery and helping experts better understand disease and injury. Creating AI requires a lot of computational power, as neural networks require huge amounts of training data to learn to perform a task with a high degree of accuracy.
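To make that concrete, a toy supervised-learning loop illustrates why training demands both data and compute. This is a minimal sketch in Python with NumPy; the synthetic ‘patient records’ and the simple logistic-regression model are invented for illustration and stand in for the far larger networks and datasets the article describes:

```python
import numpy as np

# Toy dataset: 1,000 synthetic "patient records" with 20 features each,
# labelled 0/1 (e.g. benign vs malignant). Real clinical models need
# orders of magnitude more data to reach clinically useful accuracy.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

# A logistic-regression "network" trained by gradient descent:
# every pass over the loop touches every record, which is why
# training cost scales with the amount of data.
w = np.zeros(20)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step on the log-loss

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
```

A deep network replaces the single weight vector with many stacked layers, multiplying the compute per pass accordingly.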

If AI and machine learning techniques are combined with the large amounts of medical research and data from LIMS and ELN systems, AI networks can be used to help advance research and free researchers from mundane tasks.

Leading the charge

IBM has been deploying its IBM Watson cognitive computing system in research centres and hospitals for a number of years but now these collaborations are beginning to produce results that can benefit patients and doctors. IBM has partnered with New York-based Memorial Sloan Kettering Cancer Center to train Watson Oncology to interpret cancer patients’ clinical information and identify individualised, evidence-based treatment options.

As Watson Oncology’s teacher, Memorial Sloan Kettering (MSK) Cancer Center aims to create a powerful resource that will help inform treatment decisions for those who may not have access to a specialty centre such as MSK.

It is hoped that this collaboration will decrease the time it takes for the latest research and evidence to influence clinical practice across the broader oncology community, help physicians synthesise available information, and improve patient care.

MSK cares for more than 130,000 people with cancer each year. It will use this patient data – alongside the unique expertise of its specialised oncologists and the latest published research – to teach Watson how to identify and treat cancer. The success of the MSK-IBM collaboration has led to the development of a genomic service, using data from MSK, through a collaboration between IBM and Quest Diagnostics.

The project aims to use IBM Watson’s core capabilities – reading natural language, evaluating cases with evolving machine-learned models, and rapidly processing large volumes of data – to address the challenges facing oncologists today.

IBM and Quest Diagnostics first launched the new service in October 2016 to assist researchers in advancing precision medicine through the combination of cognitive computing with genomic tumour sequencing.

Memorial Sloan Kettering will supplement Watson’s corpus of scientific data with OncoKB, a precision oncology knowledge base, to help inform individual treatment options for cancer patients.

‘We now know that genetic alterations are responsible for many cancers, but it remains challenging for most clinicians to deliver on the promise of precision medicine, since it requires specialised expertise and a time-consuming interpretation of massive amounts of data,’ said Paul Sabbatini, deputy physician-in-chief for clinical research at MSK. ‘Through this collaboration, oncologists will have access to MSK’s expertly curated information about the effects and treatment implications of specific cancer gene alterations. This has the power to scale expertise and help improve patient care.’

The new service involves laboratory sequencing and analysis of a tumour’s genomic makeup, to help reveal mutations that can be associated with targeted therapies and clinical trials.

Watson then compares those mutations against relevant medical literature, clinical studies, and carefully annotated rules created by leading oncologists, including those from MSK. Watson for Genomics ingests approximately 10,000 scientific articles and 100 new clinical trials every month.

Bolstering the amount of data Watson can use, MSK will provide OncoKB to help Watson uncover treatment options that could target the specific genetic abnormalities that are causing the growth of the cancer. Comparison of literature that may take medical experts weeks to prepare can now be completed in significantly less time using Watson.

OncoKB was developed and is maintained through MSK’s Marie-Josée and Henry R. Kravis Center for Molecular Oncology, in partnership with Quest. It includes annotation for almost 3,000 unique variants in 418 cancer-associated genes and in 40 different tumour types, including descriptions of the effects of specific mutations, as well as therapeutic implications.
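Conceptually, a knowledge base of this kind maps gene alterations to curated effects and therapeutic implications. The sketch below (Python) illustrates that structure only; the entries are hand-written examples of well-known variants, not real OncoKB data, and this is not OncoKB’s actual API:

```python
# Hypothetical, hand-written entries illustrating the shape of a
# variant knowledge base: gene -> variant -> annotation.
VARIANT_KB = {
    "EGFR": {
        "L858R": {
            "effect": "gain-of-function",
            "implication": "sensitising to EGFR tyrosine-kinase inhibitors",
        },
    },
    "BRAF": {
        "V600E": {
            "effect": "gain-of-function",
            "implication": "candidate for BRAF/MEK inhibitor therapy",
        },
    },
}

def annotate(gene: str, variant: str) -> str:
    """Return a human-readable annotation, or flag the variant as unknown."""
    entry = VARIANT_KB.get(gene, {}).get(variant)
    if entry is None:
        return f"{gene} {variant}: no curated annotation"
    return f"{gene} {variant}: {entry['effect']}; {entry['implication']}"

print(annotate("EGFR", "L858R"))
```

The real resource annotates thousands of variants across hundreds of genes, but the lookup pattern – from observed alteration to curated clinical implication – is the same.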

The project is publicly accessible, meaning that researchers around the world have access to information about oncogenic effects and treatment implications of thousands of unique variants at their fingertips.

Making science easier to publish

Another use for AI technology is language processing, and this has been applied to the creation of abstracts for scientific journal articles. sciNote – the creator of a free, open source electronic lab notebook (ELN) of the same name – has incorporated AI into its ELN software.

The sciNote Manuscript Writer add-on allows researchers to generate a draft of a scientific manuscript using data stored by the user on its platform, along with relevant references. With the add-on, users can simplify the process of preparing scientific manuscripts, giving them more time to focus on research.

‘At this point we are using it to create the draft of a scientific paper. This is the first electronic lab notebook to use AI in this way,’ commented Dr Klemen Zupancic, CEO of sciNote. ‘The main benefit is saving time for researchers. Writing a scientific paper is not only tedious but it is also time consuming.

‘According to our research, researchers can spend on average 72 hours writing a scientific paper, and a lot of that time is just putting the data together and re-formatting that data. That is one area where we feel that AI can do a really good job,’ added Zupancic.

Recognising the importance of timely publication of scientific findings, sciNote created the add-on to significantly reduce the time taken to prepare initial content. The software draws upon data contained within the ELN and references that are accessible in open access journals, to provide a structured draft for the author to then edit and develop further.
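The idea of drawing structured ELN data into a draft can be pictured as filling a fixed manuscript skeleton from stored sections. The following is a toy sketch in Python; the section names and ELN export are invented, and the actual add-on is of course far more sophisticated:

```python
# Hypothetical ELN export: experiment data keyed by manuscript section.
eln_data = {
    "materials": "HeLa cells were cultured in DMEM with 10% FBS.",
    "methods": "Protein levels were measured by western blot (n=3).",
    "results": "Expression increased 2.1-fold after treatment (p < 0.05).",
}

SECTION_ORDER = ["Introduction", "Materials", "Methods", "Results", "Discussion"]

def draft_manuscript(data: dict) -> str:
    """Assemble a structured draft; empty sections are left for the author."""
    parts = []
    for section in SECTION_ORDER:
        body = data.get(section.lower(), "[TODO: to be written by the author]")
        parts.append(f"## {section}\n{body}")
    return "\n\n".join(parts)

print(draft_manuscript(eln_data))
```

This is exactly why the structured, ‘standardised’ language of scientific papers suits software-generated drafts: the skeleton is fixed, and the variable content already sits in the ELN.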

Zupancic said: ‘While the competition within the scientific community to publish articles in high-ranking journals is constantly on the rise, it is also vital that valuable research data are published, and therefore accessible, at the earliest possible time. sciNote’s ELN is already used by over 20,000 scientists to store and manage scientific data. The announcement of this new AI add-on has the potential to transform the article writing process and empower these scientists, while establishing sciNote as a leader in the industry.’

Zupancic explained that the impetus to add these new AI-based capabilities to the sciNote ELN came from the experience of trying to create scientific papers in an increasingly competitive environment: ‘We started as researchers, the company owners and company founders are all researchers with ample experience in writing scientific papers, so we knew that this process could be improved. Then there were a couple of news stories in the past few years where people had created hoaxes by writing fake scientific papers using software, and that sort of thing.

‘This led us to the realisation that science and scientific publications [use a] structured enough language for software to be able to grasp it. This is where we found that advances in AI have done amazing work,’ Zupancic continued. ‘It is hard to get AI to write a poem, but when the text that it needs to output is very well defined and ‘standardised’, then the job for AI is much easier. That was one of the things that drove us to explore the use of AI for this particular challenge.’

sciNote LLC is now inviting scientists interested in the Manuscript Writer add-on to visit the website, create an account in sciNote and provide feedback, to optimise AI capability and overall user experience.

One of the more interesting aspects of AI development is that the accuracy, and ultimately how useful the software is to its user community, is based on the number of people using the system and the quality of the data used to train the system.

‘It depends on many factors, not only on your area of work but also how you record the data, how detailed, in what kind of format. All of these factors have an impact on how useful the end result will be for you,’ said Zupancic. ‘It is too early to tell, but it seems to work best with life sciences. That being said, an important part of this AI life cycle is the learning phase. AI learns how people perceive what it outputs and what changes they make, so we have high hopes that AI will get better and smarter over time.’

Zupancic also stated that the team behind the Manuscript Writer add-on hopes it may be possible to expand the capabilities to generate patent or grant applications. ‘That is the vision but how fast and how well it learns is quite difficult to estimate at this point. What AI is really good at is searching publicly available information and seeing how that relates to what you have done, and how your research fits into the broader spectrum of scientific progress.’

AI diagnosis

One example of the research enabled through AI is Poland-based Future Processing, a member of NVIDIA’s Inception programme.

The company is working to simplify the use of these tools, making the diagnosis process more affordable, accessible and accurate. Its medical imaging solutions business segment works closely with medical imaging experts, research institutions and clinics around the world to develop software that can process and analyse images.

One area of focus is dynamic contrast-enhanced imaging and the analysis of computed tomography (CT) images. The company’s research in this area could increase the utility of CT scans in the detection and diagnosis of lung cancer. To diagnose the disease, physicians rely on the segmentation of lesions on the lungs, using a combination of PET and CT scans. These determine the functional properties of a lesion, as well as its anatomical structure and characteristics.

Lung cancer causes one in five cancer-related deaths worldwide, taking about 1.6 million lives a year. In England more than one third of cases are diagnosed after presenting as an emergency, by which time the vast majority are already at a late stage.

In the hope of advancing the fight against lung cancer, Future Processing is working on a system that will eliminate the need for the combination of PET and CT scans. Instead, doctors would be able to make diagnoses based exclusively on CT scans.

Using convolutional neural networks, the team has shown that diagnoses from CT scans alone can be made efficient and accurate.
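The core operation of such a network is the 2D convolution, which produces a per-pixel score that can be thresholded into a lesion mask. The sketch below (Python with NumPy) shows that mechanism on a toy image; the hand-set averaging filter stands in for learned weights, and this is in no way Future Processing’s actual network:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the building block of a segmentation CNN."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "CT slice": a bright 3x3 blob (the lesion) on a dark background.
ct = np.zeros((16, 16))
ct[5:8, 5:8] = 1.0

# A hand-set 3x3 averaging filter stands in for a learned feature
# detector; a trained network stacks many such layers, with weights
# fitted to annotated scans rather than chosen by hand.
kernel = np.full((3, 3), 1 / 9)
scores = conv2d(ct, kernel)

# Per-pixel sigmoid plus threshold -> binary lesion mask.
mask = 1 / (1 + np.exp(-10 * (scores - 0.5))) > 0.5
```

A real segmentation network applies hundreds of learned filters per layer and runs them on the GPU, which is where the acceleration described below comes from.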

‘Before, the segmentation of active lesions required co-registering PET and CT sequences in a time-consuming procedure,’ explains Dr Jakub Nalepa, senior research scientist at Future Processing. ‘In fact, we have just presented a paper where, using CNNs with CT scans, we demonstrated segmentation of a single image within minutes – and this can be accelerated further.’

This acceleration in segmentation speeds is powered by NVIDIA Tesla GPU accelerators and could make a huge difference for both doctors and patients. By automatically segmenting the lesions, radiologists can save precious time and measure lesion progress. It would also be a boon for medical sites without access to PET scanners, as they could care for their patients directly, using only a CT scanner. This is more cost-effective for medical sites, with a CT scan costing from $1,200 to $3,200, whereas a PET scan costs on average $3,000 to $6,000.

Nalepa and his team have shown that their approach reduces the rate of false positives, when studying lung data without active lesions, from 90.14 to 6.6 per cent.
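The false-positive figures quoted follow directly from a confusion count over lesion-free scans. A minimal sketch of the calculation (Python; the counts are invented to reproduce the quoted percentages, not Future Processing’s data):

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN), expressed as a percentage."""
    return 100.0 * false_positives / (false_positives + true_negatives)

# Hypothetical counts for lesion-free lung scans before and after the
# CNN-based screening step, chosen to match the quoted rates.
before = false_positive_rate(false_positives=640, true_negatives=70)
after = false_positive_rate(false_positives=33, true_negatives=467)
print(f"before: {before:.2f}%  after: {after:.2f}%")
```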

In the future, Nalepa and his team hope to further increase the accuracy of the technique and to apply the technology to other forms of cancer.
