AI Can Distinguish Brain Tumors from Healthy Tissue

By MedImaging International staff writers
Posted on 22 Nov 2024
Image: Artificial intelligence models can be trained to distinguish brain tumors from healthy tissue (Photo courtesy of 123RF)

Researchers have made significant advances in artificial intelligence (AI) for medical applications. AI holds particular promise in radiology, where delays in processing medical images can postpone patient care. Convolutional neural networks (CNNs) are robust tools for training AI models on large image datasets, enabling the networks to “learn” to identify and classify different types of images. CNNs are also capable of “transfer learning,” in which a model trained for one task is adapted to a similar new task. AI models have already demonstrated near-human accuracy in identifying brain tumors in MRI images. Now, in a new study, researchers have shown that AI models can be trained to differentiate between brain tumors and healthy tissue.
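The article does not include the study's code, but transfer learning with a CNN typically follows a standard pattern: reuse a backbone trained on one image task and retrain only a small classification head on the new task. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern, assuming 224×224 MRI slices with binary tumor/healthy labels; the model choice and hyperparameters are illustrative assumptions, not those of the study.

```python
# Minimal transfer-learning sketch (hypothetical, not the study's code):
# reuse a pretrained CNN backbone and retrain only a new 2-class head
# to separate tumor-containing MRI slices from healthy ones.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # backbone pretrained on natural images
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: tumor vs. healthy

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    """One optimization step on a batch of (N, 3, 224, 224) MRI slices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone preserves the features learned on the original task; unfreezing and fine-tuning the full network is the usual next step once the new head converges.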

While detecting camouflaged animals and classifying brain tumors may seem unrelated, the researchers from Boston University (Boston, MA, USA) saw a connection between the natural camouflage of animals and the way cancerous cells blend with surrounding healthy tissue. The ability to generalize — the process of categorizing various items under a common identity — is crucial for the AI model to detect camouflaged objects. This capability could be particularly advantageous for detecting tumors. In their retrospective study using publicly available MRI data, the researchers explored how neural networks could be trained using brain cancer imaging data, incorporating a unique camouflage detection step to enhance the networks' tumor detection capabilities.

The researchers utilized MRIs from public repositories of both cancerous and healthy brain scans to train the networks to identify cancerous areas, distinguish them from healthy tissue, and classify the type of cancer. The results, published in Biology Methods and Protocols, showed that the networks performed nearly flawlessly at detecting healthy brain scans, with only 1-2 false negatives, and were also able to differentiate between cancerous and non-cancerous brains. One of the networks achieved an accuracy of 85.99% in detecting brain cancer, while the other reached 83.85%. An important feature of these networks is their ability to explain their decisions, which can increase the trust that both medical professionals and patients place in AI models; this transparency is particularly valuable because deep learning models are often criticized for their lack of interpretability. The networks could generate images highlighting the specific areas that drove their classification of a scan as tumor-positive or tumor-negative, which would allow radiologists to verify the AI's findings, serving almost as a second opinion in radiology.
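The article does not specify how the networks produced these explanatory images; one common and simple approach is a gradient saliency map, which highlights the pixels that most influenced a classification. The sketch below is a hypothetical illustration of that general idea, not the study's actual method.

```python
# Hypothetical gradient-saliency sketch: highlight the pixels that most
# influenced the network's tumor-positive / tumor-negative decision.
import torch

def saliency_map(model, image, target_class):
    """Return an (H, W) per-pixel importance map for one (1, 3, H, W) input."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. pixels
    score = model(image)[0, target_class]        # class score to explain
    score.backward()                             # backpropagate to the input
    return image.grad.abs().max(dim=1)[0][0]     # collapse channels, drop batch dim
```

Overlaying such a map on the original MRI slice is one way a radiologist could check whether the model's attention coincides with the suspected lesion.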

Going forward, the researchers believe that developing deep network models whose decisions are easy to explain will be crucial if AI is to play a transparent and supportive role in clinical settings. While the networks performed less effectively when distinguishing between different types of brain cancer, the study demonstrated that they developed distinct internal representations. Both accuracy and clarity improved as the networks were trained with camouflage detection: transfer learning increased their accuracy, and while the best-performing model was about 6% less accurate than standard human detection, the research successfully highlights the improvements in accuracy brought about by this training approach. The researchers argue that, when combined with methods for explaining the network’s decisions, this approach will foster the transparency needed for future AI applications in clinical settings.

“Advances in AI permit more accurate detection and recognition of patterns,” said the paper’s lead author, Arash Yazdanbakhsh. “This consequently allows for better imaging-based diagnosis aid and screening, but also necessitate more explanation for how AI accomplishes the task. Aiming for AI explainability enhances communication between humans and AI in general. This is particularly important between medical professionals and AI designed for medical purposes. Clear and explainable models are better positioned to assist diagnosis, track disease progression, and monitor treatment.”
