MIT publishes new framework for machines to work as radiologists


MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) in the US is using an underutilized resource, the radiology reports that accompany medical images, to help machine learning algorithms analyze those images more accurately.

According to MIT News, accurately evaluating an x-ray or other medical image is essential to a patient's health and can even save a life. But because such an evaluation depends on the availability of a qualified radiologist, a rapid response is not always possible.


Ruizhi (Ray) Liao, a postdoctoral researcher at MIT's CSAIL, said: "Our goal is to train machines that can reproduce what radiologists do on a daily basis."

While the concept of using computers to interpret images is not new, the MIT-led team draws on a previously underutilized resource, the vast set of radiology reports that accompany medical images and are written by radiologists in routine clinical practice, to improve the interpretive skill of machine learning algorithms. In addition, the team uses a notion from information theory called mutual information, a statistical measure of the interdependence of two variables, to reinforce the success of its approach.
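Mutual information can be made concrete with a small example. The sketch below (the function name and probability tables are illustrative, not from the MIT work) computes I(X;Y) from a joint probability table: it is maximal when one variable fully determines the other, and zero when they are independent.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in nats, from a joint probability table."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)   # marginal P(Y)
    nz = joint > 0                          # skip zero cells to avoid log(0)
    return float((joint[nz] * np.log(joint[nz] / (px * py)[nz])).sum())

# Perfectly dependent binary variables: knowing X determines Y.
dependent = [[0.5, 0.0],
             [0.0, 0.5]]
# Independent binary variables: the joint factorizes into the marginals.
independent = [[0.25, 0.25],
               [0.25, 0.25]]

print(mutual_information(dependent))    # log(2) ≈ 0.693 nats
print(mutual_information(independent))  # 0.0
```

In the team's setting, high mutual information between the image representation and the text representation means each one carries strong information about the other.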

Here is how it works:

  • To begin with, a neural network learns to detect the extent of a disease, such as pulmonary edema, by being presented with a large number of x-ray images of patients' lungs, along with a doctor's assessment of severity for each case.
  • The network encodes this information as a series of numbers. The text is represented by a separate neural network, which uses its own set of numbers to encode the report's information.
  • The information from the images and the text is then integrated by a third neural network in a coordinated approach that maximizes the mutual information between the two sets of data.
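The source does not specify how the third network maximizes mutual information, but a common way to do so in practice is a contrastive (InfoNCE-style) objective, which is a lower bound on mutual information. The sketch below is a minimal illustration of that idea, not the MIT team's implementation: the encoders, weights, and data are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Stand-in encoder: a linear map followed by L2 normalization."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(img_z, txt_z, temperature=0.1):
    """Contrastive lower bound on mutual information (InfoNCE).

    Matched image/report pairs sit on the diagonal of the similarity
    matrix; the loss pulls each image toward its own report and pushes
    it away from the other reports in the batch.
    """
    logits = img_z @ txt_z.T / temperature           # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_sm)))          # cross-entropy on diagonal

# Hypothetical batch: 4 "images" and their 4 correlated "reports".
images = rng.normal(size=(4, 16))
reports = images + 0.1 * rng.normal(size=(4, 16))
w = rng.normal(size=(16, 8))

img_z = encode(images, w)
txt_z = encode(reports, w)
print(info_nce(img_z, txt_z))  # low loss: paired embeddings agree
```

Training the encoders to lower this loss drives the mutual information between image and text embeddings upward, which is the coordination the third network performs.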

Polina Golland, a principal investigator at CSAIL, said: "When the mutual information between images and text is high, images are very predictive of the text and the text is very predictive of the images."

The work was supported by the National Institutes of Health's National Institute of Biomedical Imaging and Bioengineering, Wistron, the MIT-IBM Watson AI Lab, the MIT Deshpande Center for Technological Innovation, the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic) and the MIT Lincoln Laboratory.





Dr Nivash Jeevanandam

Nivash holds a doctorate in information technology. He has worked as a research associate at a university and as a development engineer in the computer industry. He is passionate about data science and machine learning.


