Knee cartilage segmentation of ultrasound images using convolutional neural networks and local phase enhancement
Citation & Export
Simple citation
Mohabir, Justin Heeralaal.
Knee cartilage segmentation of ultrasound images using convolutional neural networks and local phase enhancement. Retrieved from
https://doi.org/doi:10.7282/t3-0f8n-mj25
Description
Title: Knee cartilage segmentation of ultrasound images using convolutional neural networks and local phase enhancement
Date Created: 2020
Other Date: 2020-05 (degree)
Extent: 1 online resource (ix, 70 pages) : illustrations
Description: Osteoarthritis (OA) is a chronic disorder that results from the inflammation of body joints and the degradation of cartilage. The most prominent form of OA is knee OA, in which the cartilage between the femur and tibia degrades with regular use. To measure the progression of knee OA in patients, clinicians use a metric of cartilage thickness known as Joint Space Width (JSW) to track how much cartilage degrades over time. The most common method of measuring JSW is to take a planar X-ray of the knee and manually measure the space between the joints in that image. This, however, exposes patients to a dose of ionizing radiation. Magnetic Resonance (MR) imaging and Ultrasound (US) have arisen as alternatives for imaging knee cartilage. MR imaging is largely reserved for research settings because it is expensive to operate. This leaves US as the main alternative; it has shown promise in clinical studies, but noise and artifacts make manual segmentation of the knee cartilage in US images difficult. A previous study has shown that enhancing images prior to segmentation can allow more accurate segmentation. This thesis investigated the efficacy of using different Convolutional Neural Network (CNN) architectures to segment knee cartilage from US images, as well as the effect of enhancing the images prior to segmentation, with the CNNs compared against a Random Walker (RW) algorithm.
The CNN architectures used in this study are U-Net, Stacked U-Net, and W-Net. Each architecture was trained on either B-mode images, local phase enhanced images, or an early-stage combination of the B-mode and enhanced images. The 150-image training set was augmented to artificially increase the number of training images, improving robustness and preventing overfitting. 10-fold cross-validation was performed on each combination of CNN architecture and input type to reduce the influence of outliers.
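The 10-fold cross-validation procedure described above can be sketched as follows. This is an illustrative example only, not the thesis's actual code; the function name `k_fold_indices` and the use of NumPy are assumptions, and in practice a library utility such as scikit-learn's `KFold` would typically be used.

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds.

    Each fold serves once as the validation set while the remaining
    k-1 folds form the training set.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)  # random order, no repeats
    return np.array_split(idx, k)

# 150 training images split into 10 folds, as in the thesis
folds = k_fold_indices(150, k=10)
print([len(f) for f in folds])  # → ten folds of 15 images each

for i, val_fold in enumerate(folds):
    # indices of the other nine folds form this round's training set
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    assert len(train_idx) == 135 and len(val_fold) == 15
```

Each architecture/input-type combination would be trained once per fold, and the resulting ten validation scores averaged.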
Each trained CNN was validated by comparing its output against manual segmentations of the US images using the Dice Similarity Coefficient (DSC). Validation was performed on 50 images drawn from a dataset similar to the training data and on a second set of 50 images from a different US system. The average DSCs for the U-Net, Stacked U-Net, and W-Net were 0.8566, 0.8289, and 0.8675 on the similar dataset and 0.779, 0.7185, and 0.772 on the different dataset, respectively. The average DSCs for the B-mode, enhanced, and combined input types were 0.8071, 0.8552, and 0.8908 on the similar dataset and 0.6869, 0.7756, and 0.807 on the different dataset, respectively. Compared to a RW algorithm, 53% of U-Nets, 67% of Stacked U-Nets, and 70% of W-Nets had significantly (p < 0.05) higher average DSCs; 30% of B-mode networks, 77% of enhanced-image networks, and 83% of combined-image networks had significantly higher DSCs. This study presents an automated US cartilage segmentation method using CNNs. The results show significant improvements in segmentation when local phase enhancement is used instead of an unaltered B-mode US image. The low segmentation time and processing requirements of CNNs show promise for accurate real-time segmentation of knee cartilage and could make US a viable alternative to X-ray for diagnosing and tracking the progression of knee OA.
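The Dice Similarity Coefficient used for validation measures the overlap between a predicted mask and a manual (ground-truth) mask: DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect overlap). A minimal sketch, assuming binary NumPy masks (the toy 4×4 masks below are hypothetical, not from the thesis data):

```python
import numpy as np

def dice_similarity_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: prediction has 4 foreground pixels, truth has 3,
# and they share 3, so DSC = 2*3 / (4+3) = 6/7
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_similarity_coefficient(pred, truth), 4))  # → 0.8571
```

A reported average DSC of, say, 0.8675 therefore means roughly 87% overlap between the network's cartilage masks and the manual segmentations on average.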
Note: M.S.
Note: Includes bibliographical references
Genre: theses, ETD graduate
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.