Staff View
Speech-based activity recognition for medical teamwork

Descriptive

TitleInfo
Title
Speech-based activity recognition for medical teamwork
Name (type = personal)
NamePart (type = family)
Abdulbaqi
NamePart (type = given)
Jalal
NamePart (type = date)
1977
DisplayForm
Jalal Abdulbaqi
Role
RoleTerm (type = text); (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Marsic
NamePart (type = given)
Ivan
DisplayForm
Ivan Marsic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Gajic
NamePart (type = given)
Zoran
DisplayForm
Zoran Gajic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Spasojevic
NamePart (type = given)
Predrag
DisplayForm
Predrag Spasojevic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Sarcevic
NamePart (type = given)
Aleksandra
DisplayForm
Aleksandra Sarcevic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
School of Graduate Studies
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
Genre (authority = ExL-Esploro)
ETD graduate
OriginInfo
DateCreated (qualifier = exact); (encoding = w3cdtf); (keyDate = yes)
2020
DateOther (type = degree); (qualifier = exact); (encoding = w3cdtf)
2020-10
Language
LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract
Activity recognition is the process by which one or more people's actions and their environment are observed and analyzed to infer their activities. The activity recognition task includes recognizing the activity type and estimating its progression stage, from planning, through performance, to evaluation. This task is usually accomplished using different sensor modalities such as video, radio frequency identification (RFID), and medical device signals. To our knowledge, speech data representing the verbal communication between individuals have not been used for activity recognition. In medical teamwork, some activities are conducted through verbal communication among members of the medical team. Consequently, for these activities, speech data can provide better information than video or other sensors. On the other hand, speech data present challenges that include fast and concurrent talking (known as the “cocktail party problem”) as well as ambient noise. Therefore, achieving speech-based activity recognition requires addressing these challenges as well as finding the optimal model architecture for activity recognition. In this research, we develop deep neural networks that use and enhance the utterance-level speech stream to predict the medical activity type.

Our speech-based approach to recognizing team activities is developed in the context of trauma resuscitation. We first analyzed the audio recordings of trauma resuscitations in terms of activity frequency, noise level, and activity-related keyword frequency to determine the dataset characteristics. We next evaluated different audio-preprocessing parameters (spectral feature types and audio channels) to find the optimal configuration. We then introduced a novel neural network that recognizes trauma activities using a modified VGG network to extract features from the audio input. The output of the modified VGG network is combined with the output of a network that takes keyword text as input, and the combination is used to generate activity labels. We compared our system with several baselines and performed a detailed analysis of the performance results for specific activities. Our results show that the proposed architecture, using Mel-spectrum spectral coefficient features with a stereo channel and activity-specific frequent keywords, achieves the highest accuracy and average F1-score.
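
The following is a minimal PyTorch sketch of this two-branch idea: a VGG-style convolutional stack over a stereo Mel-spectrogram, concatenated with features from a keyword input, then classified into activity labels. The layer sizes, keyword vocabulary size, and number of activity classes are illustrative placeholders, not the dissertation's actual configuration.

# Minimal sketch: VGG-style audio branch + keyword branch, concatenated and classified.
import torch
import torch.nn as nn

class AudioKeywordNet(nn.Module):
    def __init__(self, n_keywords=100, n_activities=10):
        super().__init__()
        # VGG-style blocks over a 2-channel (stereo) Mel-spectrogram input
        self.audio = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 64)
        )
        # Keyword branch: multi-hot keyword vector -> dense features
        self.keywords = nn.Sequential(nn.Linear(n_keywords, 64), nn.ReLU())
        self.classifier = nn.Linear(64 + 64, n_activities)

    def forward(self, mel, kw):
        # mel: (batch, 2, n_mels, time); kw: (batch, n_keywords)
        fused = torch.cat([self.audio(mel), self.keywords(kw)], dim=1)
        return self.classifier(fused)

model = AudioKeywordNet()
logits = model(torch.randn(4, 2, 64, 200), torch.zeros(4, 100))
print(logits.shape)  # torch.Size([4, 10])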

We further present an extensive analysis of keyword labeling. We investigated two approaches to creating the keyword list: ranking keywords by their frequency of occurrence in the dataset, and computing each keyword's sensitivity for each activity. We also examined using different numbers of keywords per activity to find the optimal number, and categorized the keywords based on their relationship to the medical activities to identify uninformative keywords for removal. This analysis helped us increase performance significantly.
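
As a rough illustration of the two selection criteria, the sketch below ranks keywords (1) by raw frequency across transcripts and (2) by a per-activity sensitivity score, taken here as the fraction of an activity's transcripts that contain the keyword. The example data, function names, and the exact sensitivity definition are hypothetical, not taken from the dissertation.

# Illustrative keyword selection: by frequency and by per-activity sensitivity.
from collections import Counter

def by_frequency(transcripts, top_k=5):
    # Rank keywords by how often they occur across all transcripts.
    counts = Counter(w for t in transcripts for w in t.split())
    return [w for w, _ in counts.most_common(top_k)]

def by_sensitivity(transcripts, labels, activity, top_k=5):
    # Rank keywords by the fraction of this activity's transcripts containing them.
    pos = [set(t.split()) for t, y in zip(transcripts, labels) if y == activity]
    vocab = set().union(*pos) if pos else set()
    score = {w: sum(w in t for t in pos) / len(pos) for w in vocab}
    return sorted(score, key=score.get, reverse=True)[:top_k]

transcripts = ["start iv line", "check pulse", "iv fluids now", "pulse ox reading"]
labels = ["iv", "vitals", "iv", "vitals"]
print(by_frequency(transcripts))
print(by_sensitivity(transcripts, labels, "iv"))
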
We present a broad analysis of the multimodal network for trauma activity recognition, covering the audio, keyword, and fusion modules. We design and evaluate different networks and learning approaches for each module. For the audio module, we examine two networks: the first uses a convolutional neural network (CNN), while in the second we replace several front-end CNN layers with two recurrent neural network (RNN) layers to track temporal information in the sequential speech recordings. For the keyword module, we evaluate five networks for extracting features from the transcribed keyword input. For the fusion module, we apply and examine two fusion approaches: early fusion and late fusion. Evaluation results show substantial improvement in accuracy and F1-score over the baseline.
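
The small sketch below contrasts the two fusion strategies, assuming each modality already has an encoder producing a fixed-size feature vector; the dimensions and the simple averaging of logits in late fusion are illustrative assumptions.

# Early fusion: concatenate modality features, classify once.
# Late fusion: classify each modality separately, then combine the logits.
import torch
import torch.nn as nn

audio_feat, kw_feat, n_classes = 64, 32, 10

early_head = nn.Linear(audio_feat + kw_feat, n_classes)
def early_fusion(a, k):
    return early_head(torch.cat([a, k], dim=1))

audio_head = nn.Linear(audio_feat, n_classes)
kw_head = nn.Linear(kw_feat, n_classes)
def late_fusion(a, k):
    return (audio_head(a) + kw_head(k)) / 2

a, k = torch.randn(4, audio_feat), torch.randn(4, kw_feat)
print(early_fusion(a, k).shape, late_fusion(a, k).shape)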

An important challenge that affects the performance of the introduced speech-based activity recognition approach is speech quality in general and non-stationary noise in particular. An efficient speech enhancement system is required to address this issue. Most current speech enhancement models use spectrogram features that require an expensive transformation and result in phase information loss. Previous work has overcome these issues by using convolutional networks to learn the temporal correlations across high-resolution waveforms. These models, however, are limited by memory-intensive dilated convolutions and aliasing artifacts from upsampling. We introduce an end-to-end, fully recurrent neural network for single-channel speech enhancement. Our network has an hourglass shape that can efficiently capture long-range temporal dependencies by reducing the feature resolution without information loss. We also use residual connections to prevent gradient decay across layers and improve model generalization. Experimental results show that our model outperforms state-of-the-art approaches on six quantitative evaluation metrics.
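
The sketch below illustrates the general hourglass idea under stated assumptions: the waveform is folded into frames (reducing temporal resolution without discarding samples), passed through a recurrent bottleneck, unfolded back to sample resolution, and added to the input through a residual connection. The frame length, the use of GRU layers, and the hidden size are assumptions for illustration, not the dissertation's actual architecture.

# Hourglass-style, fully recurrent waveform enhancer (illustrative sketch).
import torch
import torch.nn as nn

class HourglassEnhancer(nn.Module):
    def __init__(self, frame=32, hidden=128):
        super().__init__()
        self.frame = frame
        self.rnn = nn.GRU(frame, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, frame)

    def forward(self, wav):                             # wav: (batch, samples)
        b, n = wav.shape
        x = wav.view(b, n // self.frame, self.frame)    # fold: lower time resolution, no samples lost
        h, _ = self.rnn(x)                              # recurrent bottleneck
        y = self.proj(h).reshape(b, n)                  # unfold back to sample resolution
        return wav + y                                  # residual: predict a correction to the input

model = HourglassEnhancer()
noisy = torch.randn(2, 16000)                           # 1 s of audio at 16 kHz
print(model(noisy).shape)                               # torch.Size([2, 16000])
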
Subject (authority = local)
Topic
Activity recognition
Subject (authority = RUETD)
Topic
Electrical and Computer Engineering
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_11028
PhysicalDescription
Form (authority = gmd)
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (xiii, 64 pages)
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
RelatedItem (type = host)
TitleInfo
Title
School of Graduate Studies Electronic Theses and Dissertations
Identifier (type = local)
rucore10001600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/t3-fqx6-e188

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Abdulbaqi
GivenName
Jalal
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2020-06-29 16:21:36
AssociatedEntity
Name
Jalal Abdulbaqi
Role
Copyright holder
Affiliation
Rutgers University. School of Graduate Studies
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
RightsEvent
Type
Embargo
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2020-10-31
DateTime (encoding = w3cdtf); (qualifier = exact); (point = end)
2021-10-31
Detail
Access to this PDF has been restricted at the author's request. It will be publicly available after October 31st, 2021.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp
CreatingApplication
Version
1.7
ApplicationName
Microsoft® Word for Microsoft 365
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2020-07-02T12:51:46