LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract (type = abstract)
A femoral shaft fracture is a fracture of the femur, typically sustained in high-energy injuries such as car crashes. Improper fixation or alignment during treatment may cause soft tissue injury, bone loss, and a significantly elevated risk of pulmonary complications. Therefore, surgically guided treatments are important for reducing the complication rate and improving operative accuracy. Image-guided computer-assisted orthopedic surgery (CAOS) has been explored as a way to improve the outcomes of femoral shaft fracture treatment. The dominant intra-operative imaging modality used in CAOS is 2D fluoroscopy. Reconstructing a 3D anatomical representation from 2D fluoroscopy requires a large volume of 2D data acquired from different directions, and the limited field of view of 2D fluoroscopy results in poor reproducibility of the 3D reconstruction. Furthermore, the increased operation time and the ionizing radiation exposure associated with fluoroscopy raise serious safety concerns for both the surgeon and the patient. Recently, ultrasound (US) has been investigated as an alternative intra-operative imaging modality due to its real-time, safe, 2D/3D imaging capabilities.
However, low signal-to-noise ratio (SNR), imaging artifacts, a limited field of view (FOV), and blurred bone boundaries have hindered widespread adoption of US in CAOS. To overcome these limitations, automatic bone segmentation and intra-operative registration methods have been developed. Accurate, robust, and real-time segmentation and registration are necessary for successful guidance in US-based CAOS. This thesis presents an automated hierarchical registration method using reinforcement learning for accurate, robust, and real-time registration of intra-operative US to pre-operative CT data.
The proposed framework consists of: (1) bone shadow region image enhancement and segmentation, (2) point cloud modeling from the segmented bone surface image, and (3) US-CT point cloud registration using Q-learning. Local phase image features are used as input to an L1-norm-based regularization framework for enhancement of bone shadow regions. A simple bottom-up ray casting method is used to segment the bone surfaces from the enhanced bone shadow images. The CT data are segmented using an intensity-based thresholding method. In this way, the complex cross-modality US-CT registration problem is transformed into a point cloud registration problem. In the next step, we propose a hierarchical registration method using supervised Q-learning that learns the optimal sequence of motion actions needed to achieve alignment. Within this approach, the agent is modeled using the PointNet++ framework, taking as input the point clouds obtained by segmenting the US and CT data and producing the next optimal action as output. Quantitative and qualitative evaluations performed on over 100 test cases demonstrate the potential of ultrasound as an alternative intra-operative imaging modality for image guidance. The target registration error (TRE) and fiducial registration error (FRE) average 4.32 mm and 3.82 mm, respectively. The success rate, defined as both TRE and FRE being less than 10 mm, is 92.6%, with an average time of 0.31 seconds per step.
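The bottom-up ray casting step described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes the enhanced bone shadow image is a 2D float array in which bone responses are bright, and the function name and `threshold` value are illustrative. Each image column is scanned from the bottom row upward, and the first pixel exceeding the threshold is taken as the bone surface, exploiting the fact that the bone surface is the deepest strong response above the acoustic shadow.

```python
import numpy as np

def segment_bone_surface(enhanced, threshold=0.5):
    """Bottom-up ray casting over an enhanced bone shadow image.

    For each column, a ray is cast from the bottom row upward; the first
    pixel whose enhanced response exceeds `threshold` is taken as the
    bone surface point. Columns with no response are skipped.
    Returns an (N, 2) array of (row, col) surface coordinates.
    """
    rows, cols = enhanced.shape
    points = []
    for c in range(cols):
        # Scan from the bottom of the image upward.
        for r in range(rows - 1, -1, -1):
            if enhanced[r, c] > threshold:
                points.append((r, c))
                break
    return np.asarray(points)
```

Because the scan starts at the bottom, shallower soft-tissue responses above the bone surface are ignored automatically, which is the appeal of the bottom-up direction in this setting.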
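The Q-learning registration loop can be sketched as below, under loudly stated assumptions: the `q_network` callable stands in for the trained PointNet++ agent (a hypothetical interface, one Q-value per action), and the action set is reduced to unit translations plus an identity "stop" action, whereas the thesis agent also handles rotations. The loop greedily applies the highest-valued action until the agent elects to stop.

```python
import numpy as np

# Discrete motion actions: an identity "stop" action plus unit
# translations along x/y/z. Translation-only is a simplification for
# illustration; the thesis action set also includes rotations.
ACTIONS = [np.array(a, dtype=float) for a in
           [(0, 0, 0),
            (1, 0, 0), (-1, 0, 0),
            (0, 1, 0), (0, -1, 0),
            (0, 0, 1), (0, 0, -1)]]

def register(us_points, ct_points, q_network, step=1.0, max_iters=50):
    """Greedy inference loop for Q-learning point cloud registration.

    `q_network(moving, fixed)` is the stand-in for the trained
    PointNet++ agent: it returns one Q-value per entry of ACTIONS.
    The best action translates the moving (US) cloud each iteration;
    choosing the identity action terminates the loop.
    """
    moving = us_points.copy()
    total = np.zeros(3)
    for _ in range(max_iters):
        q_values = q_network(moving, ct_points)  # shape (len(ACTIONS),)
        delta = step * ACTIONS[int(np.argmax(q_values))]
        if not delta.any():                      # identity action chosen
            break
        moving = moving + delta
        total = total + delta
    return moving, total
```

A toy stand-in agent that scores each action by how much it reduces the centroid distance between the two clouds is enough to exercise the loop end to end.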
Subject (authority = RUETD)
Topic
Biomedical Engineering
Subject (authority = local)
Topic
Ultrasound
Subject (authority = LCSH)
Topic
Femur -- Ultrasonic imaging
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.