
Facebook Open-Sources AI Model to Predict COVID-19 Patient Outcomes


A team from Facebook AI Research (FAIR) and New York University (NYU) School of Medicine has developed deep-learning models that use chest X-rays to predict COVID-19 patient prognosis. In a comparison study, the models outperformed human radiologists, and could be used to help hospitals predict the demand for supplemental oxygen or intensive care.

The system was described in detail in a paper published on arXiv. The models were pre-trained using two large publicly-available chest X-ray image datasets, then fine-tuned on NYU's dataset of COVID-19 patient images. The resulting three models can predict patient deterioration from a single X-ray, predict patient deterioration from a series of X-rays, and predict a patient's supplemental oxygen needs from a single X-ray, respectively. The model that uses a series of images can produce predictions up to four days in advance and outperformed two human experts' predictions. According to the research team,

[We] hope that by releasing this research, hospitals and the community at large can build upon what we’ve done so far — and that our models help the experts make crucial decisions and better serve patients with their limited time and resources.

Last year, FAIR and NYU published results from a system that used a convolutional neural network (CNN) to predict COVID-19 patient prognosis. The system, called COVID-GMIC, was based on a Globally-Aware Multiple Instance Classifier (GMIC) architecture. The model was trained and evaluated on a dataset of 19,957 chest X-ray images taken from 4,722 COVID-19-positive patients at NYU, and achieved results "comparable" to human experts.

The new system is trained on the same NYU patient X-ray image dataset. Because the dataset is relatively small for training a deep-learning model, the researchers used transfer learning: the network was pre-trained on two larger publicly-available datasets, MIMIC-CXR-JPG and CheXpert, using a self-supervised learning technique called Momentum Contrast (MoCo). In this scheme, the network learns an encoder that maps images into a lower-dimensional vector space, such that similar images are mapped to vectors that are "closer" to each other.
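The contrastive objective behind MoCo can be illustrated with the InfoNCE loss: a "query" embedding should score high against its matching "positive" key and low against a set of negative keys. The toy numpy sketch below is an illustration of that idea, not FAIR's actual implementation; the dimensions, random data, and temperature value are arbitrary choices for the example.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(query, positive_key, negative_keys, temperature=0.07):
    """InfoNCE loss for one query: cross-entropy over one positive
    similarity and K negative similarities (the contrastive objective
    MoCo-style pre-training optimizes)."""
    q = l2_normalize(query)
    k_pos = l2_normalize(positive_key)
    k_neg = l2_normalize(negative_keys)            # shape (K, dim)
    logits = np.concatenate([[q @ k_pos], k_neg @ q]) / temperature
    logits -= logits.max()                         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]                           # positive sits at index 0

rng = np.random.default_rng(0)
dim, num_neg = 128, 16
query = rng.normal(size=dim)
# A positive key close to the query (e.g. another augmented view of
# the same image); negatives are unrelated embeddings
positive = query + 0.05 * rng.normal(size=dim)
negatives = rng.normal(size=(num_neg, dim))

loss_aligned = info_nce_loss(query, positive, negatives)
loss_random = info_nce_loss(query, rng.normal(size=dim), negatives)
assert loss_aligned < loss_random  # aligned pairs are rewarded with lower loss
```

Minimizing this loss pulls embeddings of matching views together and pushes unrelated ones apart, which is what yields an encoder useful for the downstream fine-tuning described next.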

All three models were based on this MoCo encoder. The two models that predict from single images, Single Image Prediction (SIP) and Oxygen Requirement Prediction (ORP), were built by fine-tuning a linear classifier on the output of the encoder to predict patient deterioration or oxygen requirements, respectively. The third model, Multiple Image Prediction (MIP), used the encoder outputs from a sequence of images as input to a Transformer to produce a hidden representation, which was then fed into a linear classifier. The team compared the results of the MIP model against predictions from two expert radiologists, using prediction time-windows of 24, 48, 72, and 96 hours. In all cases except the 24-hour window, the model outperformed the humans.
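Structurally, SIP/ORP are a linear head over one frozen embedding, while MIP aggregates a sequence of embeddings before classifying. A minimal numpy sketch of those two head shapes follows; the single-layer attention pooling here is a hypothetical stand-in for the paper's Transformer, and all weights and dimensions are made up for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

dim, num_classes = 128, 2          # toy sizes, not the paper's
rng = np.random.default_rng(1)

# SIP/ORP-style head: a linear classifier over a single frozen embedding
W, b = rng.normal(size=(num_classes, dim)) * 0.01, np.zeros(num_classes)

def single_image_head(embedding):
    return softmax(W @ embedding + b)          # class probabilities

# MIP-style aggregation: self-attention pooling over a sequence of
# embeddings (a one-layer illustrative stand-in for the Transformer)
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) * 0.01 for _ in range(3))

def sequence_head(embeddings):                 # shape (T, dim)
    Q, K, V = embeddings @ Wq, embeddings @ Wk, embeddings @ Wv
    attn = softmax(Q @ K.T / np.sqrt(dim), axis=-1)
    hidden = (attn @ V).mean(axis=0)           # pooled hidden representation
    return softmax(W @ hidden + b)             # linear classifier on top

single = single_image_head(rng.normal(size=dim))
seq = sequence_head(rng.normal(size=(4, dim)))  # e.g. a series of 4 X-rays
assert np.isclose(single.sum(), 1.0) and np.isclose(seq.sum(), 1.0)
```

The design point the sketch captures is that only the heads differ: the same pre-trained encoder feeds a plain linear classifier for single-image predictions, and a sequence-aggregation module for the multi-image, forward-looking predictions.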

Many AI researchers have used their systems to help combat the COVID-19 pandemic. The three major cloud providers, Amazon Web Services, Google Cloud Platform, and Microsoft Azure, all host several publicly available COVID-19 datasets for use in machine learning. Last year, Google's DeepMind used its AlphaFold model to predict several of the virus's protein structures. In a recent Nature article, a Danish research team described an AI system that can predict a COVID-19 patient's need for a respirator with up to 80% accuracy, using data such as age, BMI, and blood pressure.

FAIR's source code and pre-trained model files are available on GitHub.
