
Deep Learning Improves Lung Ultrasound Interpretation

By MedImaging International staff writers
Posted on 31 Jan 2024
Image: Workflow diagram showing real-time lung ultrasound segmentation with U-Net (Photo courtesy of Ultrasonics)

Lung ultrasound (LUS) has become a valuable tool for assessing lung health due to its safety and cost-effectiveness. However, interpreting LUS images is challenging because the technique relies heavily on artefacts, which introduces variability among operators and hampers wider adoption. Now, a new study has shown that deep learning can enhance the real-time interpretation of lung ultrasound: a deep learning model trained on lung ultrasound images was able to segment and identify artefacts in these images, as demonstrated in tests on a phantom model.

In the study, researchers at the University of Leeds (West Yorkshire, UK) employed a deep learning technique for multi-class segmentation of ultrasound images of a lung training phantom. The technique was used to distinguish various objects and artefacts, including ribs, pleural lines, A-lines, B-lines, and B-line confluences. The team developed a modified version of the U-Net architecture for image segmentation, aiming to strike a balance between the model’s speed and accuracy. During training, they applied an augmentation pipeline combining geometric transformations with ultrasound-specific augmentations to improve the model’s ability to generalize to new, unseen data. The trained network was then used to segment live image feeds from a cart-based point-of-care ultrasound (POCUS) system, with a convex curved-array transducer imaging the training phantom and streaming frames. The model, trained on a single graphics processing unit, required about 12 minutes to train on 450 ultrasound images.
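
To illustrate the kind of architecture described above, the following is a minimal sketch (not the authors' code) of a reduced-width, U-Net-style network for multi-class lung ultrasound segmentation, written in PyTorch. The class count, channel widths, and input size here are assumptions chosen for illustration; the defining features are the encoder-decoder structure with skip connections and a per-pixel classification head.

# Minimal sketch of a lightweight U-Net-style segmentation network (assumed
# details: six classes - background, rib, pleural line, A-line, B-line,
# B-line confluence - and grayscale 256x256 input frames).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Reduced-width U-Net intended to trade a little accuracy for real-time speed."""
    def __init__(self, n_classes=6, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

if __name__ == "__main__":
    model = SmallUNet()
    logits = model(torch.randn(1, 1, 256, 256))  # one grayscale ultrasound frame
    masks = logits.argmax(dim=1)                 # per-pixel class predictions
    print(logits.shape, masks.shape)             # (1, 6, 256, 256), (1, 256, 256)

Shrinking the channel widths in this way is one common means of trading a small amount of accuracy for the inference speed needed to segment a live POCUS feed frame by frame.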

The model achieved a high overall accuracy of 95.7%, with moderate-to-high Dice similarity coefficient scores. Running the model in real time at up to 33.4 frames per second significantly enhanced the visualization of lung ultrasound images. The team also evaluated the pixel-wise agreement between manually labeled and model-predicted segmentation masks. Using a normalized confusion matrix, they found that the model correctly predicted 86.8% of pixels labeled as ribs, 85.4% of pleural line pixels, and 72.2% of B-line confluence pixels, but only 57.7% of A-line and 57.9% of B-line pixels.
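
The two evaluation measures mentioned here, per-class Dice similarity coefficients and a row-normalized confusion matrix whose diagonal gives the fraction of correctly predicted pixels per class, can be computed directly from the labeled and predicted masks. Below is a minimal NumPy sketch; the class count and mask shapes are assumptions for illustration, not the authors' evaluation code.

# Per-class Dice and row-normalized confusion matrix from two label masks.
import numpy as np

def dice_per_class(pred, truth, n_classes):
    """Dice = 2|P intersect T| / (|P| + |T|), computed per class over pixel masks."""
    scores = []
    for c in range(n_classes):
        p, t = (pred == c), (truth == c)
        denom = p.sum() + t.sum()
        scores.append(2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan)
    return scores

def normalized_confusion(pred, truth, n_classes):
    """Rows are true classes, columns are predicted classes; each row sums to 1."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(truth.ravel(), pred.ravel()):
        cm[t, p] += 1
    return cm / cm.sum(axis=1, keepdims=True).clip(min=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 6, size=(256, 256))                 # stand-in labeled mask
    pred = truth.copy()
    pred[rng.random(truth.shape) < 0.2] = rng.integers(0, 6)    # corrupt ~20% of pixels
    print(dice_per_class(pred, truth, 6))
    print(np.round(np.diag(normalized_confusion(pred, truth, 6)), 3))  # per-class pixel recall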

Additionally, the researchers employed transfer learning with their model, using knowledge from one dataset to improve training on a related dataset. This approach yielded Dice similarity coefficients of 0.48 for simple pleural effusion, 0.32 for lung consolidation, and 0.25 for the pleural line. The findings suggest that this model could aid in lung ultrasound training and help bridge skill gaps. The researchers have also proposed a semi-quantitative measure, the B-line Artifact Score, which estimates the percentage of an intercostal space occupied by B-lines. This measure could potentially be linked to the severity of lung conditions.
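
The exact definition of the B-line Artifact Score is given in the paper; the sketch below is one plausible implementation of the idea as described, scoring the percentage of an intercostal space whose image columns contain B-line or B-line-confluence pixels. The class indices and region mask are assumptions for illustration.

# Hypothetical B-line Artifact Score: percentage of an intercostal space
# occupied by B-lines, estimated column-wise from a predicted segmentation mask.
import numpy as np

B_LINE, B_CONFLUENCE = 4, 5  # assumed class indices in the segmentation mask

def b_line_artifact_score(mask, intercostal_mask):
    """mask: (H, W) predicted class labels; intercostal_mask: (H, W) bool region."""
    cols = np.where(intercostal_mask.any(axis=0))[0]   # image columns inside the space
    if cols.size == 0:
        return 0.0
    is_b = np.isin(mask, (B_LINE, B_CONFLUENCE)) & intercostal_mask
    occupied = is_b.any(axis=0)[cols]                  # columns containing B-line pixels
    return 100.0 * occupied.sum() / cols.size

if __name__ == "__main__":
    mask = np.zeros((128, 128), dtype=int)
    mask[:, 40:50] = B_LINE                            # synthetic vertical B-line
    intercostal = np.zeros((128, 128), dtype=bool)
    intercostal[:, 30:90] = True                       # synthetic intercostal space
    print(b_line_artifact_score(mask, intercostal))    # ~16.7% of the space occupied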

“Future work should consider the translation of these methods to clinical data, considering transfer learning as a viable method to build models which can assist in the interpretation of lung ultrasound and reduce inter-operator variability associated with this subjective imaging technique,” the researchers stated.

Related Links:
University of Leeds

