Improving spleen segmentation in ultrasound images using a hybrid deep learning framework
Date
2025-01-11
Citation of Original Publication
Karimi, Ali, Javad Seraj, Fatemeh Mirzadeh Sarcheshmeh, Kasra Fazli, Amirali Seraj, Parisa Eslami, Mohamadreza Khanmohamadi, et al. "Improving Spleen Segmentation in Ultrasound Images Using a Hybrid Deep Learning Framework". Scientific Reports 15, no. 1 (January 11, 2025): 1670. https://doi.org/10.1038/s41598-025-85632-9.
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
Abstract
This paper introduces a novel method for spleen segmentation in ultrasound images using a two-phase training approach. In the first phase, the SegFormer-B0 network is trained to produce an initial segmentation. In the second phase, the network is refined using the Pix2Pix structure, which sharpens attention to detail and corrects erroneous or spurious segments in the output. This hybrid method combines the strengths of SegFormer and Pix2Pix to produce highly accurate segmentation results. We have assembled the Spleenex dataset, consisting of 450 ultrasound images of the spleen, the first dataset of its kind in this field. Our method has been validated on this dataset, and the experimental results show that it outperforms existing state-of-the-art models. Specifically, our approach achieved a mean Intersection over Union (mIoU) of 94.17% and a mean Dice (mDice) score of 96.82%, surpassing models such as the Splenomegaly Segmentation Network (SSNet), U-Net, and variational autoencoder-based methods. The proposed method also achieved a Mean Percentage Length Error (MPLE) of 3.64%, further demonstrating its accuracy. Moreover, it maintains strong performance in the presence of noise in ultrasound images, highlighting its practical applicability in clinical environments.
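To make the two-phase idea concrete, the sketch below outlines one plausible way such a pipeline could be wired up in PyTorch: a SegFormer-B0 model produces a coarse spleen mask, and a Pix2Pix-style conditional GAN (a small generator plus a PatchGAN discriminator) refines it against the ground-truth mask. This is a minimal illustration under stated assumptions, not the authors' implementation: the checkpoint name (`nvidia/mit-b0`), the simplified generator/discriminator architectures, the loss weighting, and the helper names (`RefinerG`, `PatchD`, `refine_step`) are all hypothetical.

```python
# Minimal sketch of a two-phase segment-then-refine pipeline.
# Phase 1: SegFormer-B0 gives a coarse mask. Phase 2: a Pix2Pix-style
# conditional GAN refines that mask. All architectural details are assumptions.
import torch
import torch.nn as nn
from transformers import SegformerForSemanticSegmentation

# Phase 1 model: SegFormer-B0 backbone with a 1-channel (spleen) head.
# "nvidia/mit-b0" is an assumed starting checkpoint; the head is trained from scratch.
segformer = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=1)

class RefinerG(nn.Module):
    """Pix2Pix-style generator: takes (image, coarse mask) and outputs a refined mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),
        )
    def forward(self, image, coarse_mask):
        return self.net(torch.cat([image, coarse_mask], dim=1))

class PatchD(nn.Module):
    """PatchGAN discriminator over (image, mask) pairs; outputs patch-level logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1),
        )
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

def refine_step(seg_model, image, target_mask, G, D, opt_g, opt_d, bce, l1, lam=100.0):
    """One refinement update: freeze the coarse prediction, train G and D Pix2Pix-style."""
    with torch.no_grad():
        # Grayscale image is repeated to 3 channels for SegFormer; logits are
        # upsampled back to the input resolution before thresholding/refining.
        logits = seg_model(pixel_values=image.repeat(1, 3, 1, 1)).logits
        coarse = torch.sigmoid(nn.functional.interpolate(
            logits, size=image.shape[-2:], mode="bilinear", align_corners=False))
    refined = G(image, coarse)
    # Discriminator: real = (image, ground-truth mask), fake = (image, refined mask).
    d_real, d_fake = D(image, target_mask), D(image, refined.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: adversarial term plus L1 term keeping the mask close to ground truth.
    d_fake = D(image, refined)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(refined, target_mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()
```

In this sketch the phase-1 model would be fine-tuned first on image/mask pairs and then frozen (`segformer.eval()`), with `bce = nn.BCEWithLogitsLoss()` and `l1 = nn.L1Loss()` driving the phase-2 refinement; the L1 weight of 100 follows the common Pix2Pix convention and is, again, an assumption rather than a reported hyperparameter.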