Hierarchical 3-D registration of ultrasound-computed tomography of femur shaft fracture using reinforcement learning
Citation
Zeng, Xuxin. (2020). Hierarchical 3-D registration of ultrasound-computed tomography of femur shaft fracture using reinforcement learning (Master's thesis, Rutgers, The State University of New Jersey). Retrieved from https://doi.org/doi:10.7282/t3-benq-hh02
Description
Title: Hierarchical 3-D registration of ultrasound-computed tomography of femur shaft fracture using reinforcement learning
Date Created: 2020
Other Date: 2020-01 (degree)
Extent: 1 online resource (xiii, 73 pages) : illustrations
Description: A femoral shaft fracture is a fracture of the femur, typically sustained in high-energy injuries such as a car crash. Improper fixation or alignment during treatment may cause soft tissue injury, bone loss, and a significantly elevated risk of pulmonary complications. Surgical guidance is therefore important for reducing the complication rate and improving the accuracy of the operation. Image-guided computer-assisted orthopedic surgery (CAOS) has been explored as a way to improve the outcomes of femoral shaft fracture treatment, and the dominant intra-operative imaging modality used in CAOS is 2D fluoroscopy. Building a 3D anatomical representation from 2D fluoroscopy requires a high volume of 2D data acquired from different directions, and the reconstruction reproduces the 3D anatomy poorly because of fluoroscopy's limited field of view. Furthermore, the longer operation time and the ionizing radiation exposure of fluoroscopy raise serious safety concerns for both surgeon and patient. Recently, ultrasound (US) has been investigated as an alternative intra-operative imaging modality owing to its real-time, safe, 2D/3D imaging capabilities.
However, low signal-to-noise ratio (SNR), imaging artifacts, limited field of view (FOV), and blurred bone boundaries have hindered widespread adoption of US in CAOS. To overcome these limitations, automatic bone segmentation and intra-operative registration methods have been developed; accurate, robust, real-time segmentation and registration are necessary for successful guidance in US-based CAOS. This thesis presents an automated hierarchical registration method, based on reinforcement learning, for accurate, robust, and real-time registration of intra-operative US to pre-operative CT data.
The proposed framework consists of: (1) bone shadow region enhancement and segmentation, (2) point cloud modeling from the segmented bone surface image, and (3) US-CT point cloud registration using Q-learning. Local phase image features are used as input to an L1-norm-based regularization framework that enhances bone shadow regions. A simple bottom-up ray casting method then segments the bone surfaces from the enhanced bone shadow images, while the CT data is segmented with an intensity-based thresholding method. In this way, the complicated cross-modality US-CT registration is transformed into a point cloud registration problem. Next, a hierarchical registration method using supervised Q-learning learns the optimal sequence of motion actions that achieves the best alignment. Within this approach, the agent is modeled with the PointNet++ framework, taking the point clouds obtained by segmenting the US and CT data as input and producing the next optimal action as output. Quantitative and qualitative evaluations performed on over 100 test cases show the potential of ultrasound as an alternative intra-operative imaging modality for image guidance. The target registration error (TRE) and fiducial registration error (FRE) average 4.32 mm and 3.82 mm, respectively, and the success rate, defined as both TRE and FRE below 10 mm, is 92.6% with an average time of 0.31 seconds per step.
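To make the segmentation step concrete, here is a minimal sketch of bottom-up ray casting, assuming the enhanced bone shadow image is a 2D array with responses normalized to [0, 1]; the function names, threshold, and pixel spacing are illustrative assumptions, not the thesis code. Each column is traversed from the deepest row toward the transducer, the first pixel above the response threshold is kept as the bone surface, and the resulting mask is converted to points in physical units.

    import numpy as np

    def segment_bone_surface(enhanced, threshold=0.5):
        # Bottom-up ray casting: for each column, walk from the deepest
        # row toward the transducer and keep the first pixel whose
        # enhanced bone-shadow response exceeds the threshold.
        rows, cols = enhanced.shape
        surface = np.zeros((rows, cols), dtype=bool)
        for c in range(cols):
            for r in range(rows - 1, -1, -1):
                if enhanced[r, c] > threshold:
                    surface[r, c] = True
                    break
        return surface

    def surface_to_points(surface, spacing=(1.0, 1.0)):
        # Convert the surface mask to points in millimeters; `spacing`
        # (row, column pixel size) is an assumed value for illustration.
        rows, cols = np.nonzero(surface)
        return np.stack([rows * spacing[0], cols * spacing[1]], axis=1)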
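The Q-learning registration step can likewise be sketched as a greedy inference loop. The trained PointNet++-style agent is represented by a generic `agent` callable that returns one Q-value per candidate action; that interface, the 1 mm / 1 degree step sizes, and the step budget are all assumptions for illustration, since the record does not specify them.

    import numpy as np

    STEP_T, STEP_R = 1.0, np.deg2rad(1.0)  # assumed step sizes (mm, rad)
    # Discrete action set: signed translations and rotations per axis.
    ACTIONS = [(kind, axis, sign)
               for kind in ("t", "r") for axis in range(3) for sign in (1, -1)]

    def _rotation(axis, angle):
        # 3x3 rotation matrix about one coordinate axis.
        c, s = np.cos(angle), np.sin(angle)
        R = np.eye(3)
        j, k = (axis + 1) % 3, (axis + 2) % 3
        R[j, j], R[j, k], R[k, j], R[k, k] = c, -s, s, c
        return R

    def apply_action(T, action):
        # Compose one small rigid motion with the current 4x4 transform.
        kind, axis, sign = action
        delta = np.eye(4)
        if kind == "t":
            delta[axis, 3] = sign * STEP_T
        else:
            delta[:3, :3] = _rotation(axis, sign * STEP_R)
        return delta @ T

    def register(us_points, ct_points, agent, n_steps=60):
        # Greedy inference with a trained Q-network: at every step the
        # agent scores all candidate motions for the current alignment
        # and the best-scoring one is applied to the US point cloud.
        T = np.eye(4)
        for _ in range(n_steps):
            moved = us_points @ T[:3, :3].T + T[:3, 3]
            q_values = agent(moved, ct_points)  # one Q-value per action
            T = apply_action(T, ACTIONS[int(np.argmax(q_values))])
        return T

A coarse-to-fine (hierarchical) variant of this loop would rerun it with progressively smaller STEP_T and STEP_R.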
Note: M.S.
Note: Includes bibliographical references
Genre: theses, ETD graduate
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.