Description
Title: Speech-based activity recognition for medical teamwork
Date Created: 2020
Other Date: 2020-10 (degree)
Extent: 1 online resource (xiii, 64 pages)
Description: Activity recognition is the process by which one or more people's actions and their environment are observed and analyzed to infer their activities. The activity recognition task includes recognizing the activity type and estimating its progression stage, from planning, through performance, to evaluation. This task is usually achieved using different sensor modalities such as video, radio frequency identification (RFID), and medical device signals. To our knowledge, speech data representing the verbal communication between individuals have not been used for activity recognition. In medical teamwork, some activities are conducted through verbal communication among the medical team, and for these activities speech data can provide better information than video or other sensors. On the other hand, speech data present challenges that include fast and concurrent talking (known as the "cocktail party problem") as well as ambient noise. Achieving speech-based activity recognition therefore requires addressing these challenges and finding the optimal model architecture for activity recognition. In this research, we develop deep neural networks that use and enhance the utterance-level speech stream to predict the medical activity type.
Our speech-based approach to recognizing team activities is developed in the context of trauma resuscitation. We first analyzed the audio recordings of trauma resuscitations in terms of activity frequency, noise level, and activity-related keyword frequency to determine the dataset characteristics. We next evaluated different audio-preprocessing parameters (spectral feature types and audio channels) to find the optimal configuration. We then introduced a novel neural network that recognizes trauma activities using a modified VGG network to extract features from the audio input. The output of the modified VGG network is combined with the output of a network that takes keyword text as input, and the combination is used to generate activity labels. We compared our system with several baselines and performed a detailed analysis of the performance results for specific activities. Our results show that the proposed architecture, which uses Mel-spectrum spectral coefficient features with stereo channels and activity-specific frequent keywords, achieves the highest accuracy and average F1-score.
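As a toy illustration of the spectral-feature configuration described above (Mel-spectrum features computed per stereo channel), a minimal NumPy sketch might look like the following. All parameter values (sample rate, FFT size, hop, number of Mel filters) are illustrative assumptions, not the settings used in the thesis:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters evenly spaced on the Mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mel_spectrogram(stereo, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Each channel of the stereo input (shape (2, n_samples)) is
    # processed separately and stacked, mirroring the stereo-channel
    # configuration mentioned in the text.
    fb = mel_filterbank(n_mels, n_fft, sr)
    window = np.hanning(n_fft)
    feats = []
    for ch in stereo:
        frames = [ch[s:s + n_fft] * window
                  for s in range(0, len(ch) - n_fft + 1, hop)]
        power = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
        feats.append(np.log(fb @ power.T + 1e-10))  # (n_mels, n_frames)
    return np.stack(feats)  # (2, n_mels, n_frames)
```

The resulting two-channel feature map is the kind of input a VGG-style convolutional feature extractor would consume.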
We further present an extensive analysis of keyword labeling. We investigated two approaches to creating the keyword list: ranking keywords by their frequency of occurrence in the dataset, and computing the keyword sensitivity for each activity. In addition, we examined different numbers of keywords per activity to find the optimum number. We also categorized the keywords based on their relationship to the medical activities to identify non-valuable keywords to remove. This analysis allowed us to increase performance significantly.
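The two keyword-selection approaches above can be sketched with plain Python. The transcripts, activity names, and the exact definition of "sensitivity" (here read as the fraction of a word's occurrences that fall inside a given activity) are hypothetical stand-ins, not the thesis's actual data or formula:

```python
from collections import Counter

# Hypothetical per-activity keyword transcripts; in the thesis these
# come from trauma-resuscitation audio recordings.
transcripts = {
    "intubation": ["tube", "tube", "suction", "ready", "tube"],
    "blood_pressure": ["cuff", "pressure", "pressure", "ready"],
}

def top_keywords_by_frequency(words, k):
    # Approach 1: rank keywords by how often they occur.
    return [w for w, _ in Counter(words).most_common(k)]

def keyword_sensitivity(word, activity):
    # Approach 2: fraction of the word's occurrences that fall inside
    # this activity (one simple reading of per-activity sensitivity).
    in_activity = transcripts[activity].count(word)
    total = sum(t.count(word) for t in transcripts.values())
    return in_activity / total if total else 0.0
```

Under this toy definition, a word like "ready" that occurs in every activity scores low for each one, which is the kind of non-valuable keyword the categorization step would flag for removal.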
We present a broad analysis of the multimodal network for trauma activity recognition, covering the audio, keyword, and fusion modules. We designed and evaluated different networks and learning approaches for each module. For the audio module, we designed and examined two networks: the first uses a convolutional neural network (CNN), while in the second we replace several frontend CNN layers with two recurrent neural network (RNN) layers to track the temporal information in the sequential speech recordings. For the keyword module, we propose five networks for extracting features from the transcript keyword input. For the fusion module, we applied and examined two fusion approaches: early fusion and late fusion. Evaluation results show substantial improvements in accuracy and F1-score over the baseline.
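The distinction between the two fusion approaches can be shown schematically. This NumPy sketch is not the thesis's architecture: the classifier weights `W` and the mixing weight `alpha` are hypothetical, and trained networks would stand in for the simple linear maps here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def early_fusion(audio_feat, keyword_feat, W):
    # Early fusion: concatenate the modality feature vectors first,
    # then apply a single (hypothetical) linear classifier.
    return softmax(np.concatenate([audio_feat, keyword_feat]) @ W)

def late_fusion(audio_scores, keyword_scores, alpha=0.5):
    # Late fusion: each modality produces its own class probabilities,
    # which are then combined; alpha weights the audio branch.
    return alpha * audio_scores + (1 - alpha) * keyword_scores
```

Early fusion lets the classifier model cross-modal feature interactions, while late fusion keeps the per-modality networks independent and only blends their decisions.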
An important challenge that affects the performance of the introduced speech-based activity recognition approach is speech quality in general and non-stationary noise in particular. An efficient speech enhancement system is required to address this issue. Most current speech enhancement models use spectrogram features, which require an expensive transformation and result in phase information loss. Previous work has overcome these issues by using convolutional networks to learn the temporal correlations across high-resolution waveforms. These models, however, are limited by memory-intensive dilated convolutions and aliasing artifacts from upsampling. We introduce an end-to-end, fully recurrent neural network for single-channel speech enhancement. Our network has an hourglass shape that efficiently captures long-range temporal dependencies by reducing the feature resolution without information loss. We also use residual connections to prevent gradient decay over layers and improve model generalization. Experimental results show that our model outperforms state-of-the-art approaches on six quantitative evaluation metrics.
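The hourglass pattern with residual connections can be illustrated abstractly. In this toy NumPy version, trivial averaging and mean-removal stand in for the model's recurrent layers; it is only meant to show the downsample / bottleneck / upsample-plus-skip flow over a waveform, not the actual enhancement network:

```python
import numpy as np

def hourglass_pass(x, depth=3):
    """Toy hourglass pass over a 1-D waveform whose length is divisible
    by 2**depth: halve the temporal resolution at each encoder level,
    process at the bottleneck, then upsample and add skip (residual)
    connections back in on the way up."""
    skips = []
    h = x
    for _ in range(depth):              # encoder: downsample by 2
        skips.append(h)
        h = h.reshape(-1, 2).mean(axis=1)
    h = h - h.mean()                    # bottleneck stand-in for RNN layers
    for skip in reversed(skips):        # decoder: upsample + residual add
        h = np.repeat(h, 2)[: len(skip)] + skip
    return h
```

The residual additions give the decoder direct access to full-resolution detail from the encoder, which is the property the text credits for reducing resolution "without information loss."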
Note: Ph.D.
Note: Includes bibliographical references
Genre: theses, ETD graduate
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.