2020


Laboratory simulations of conversation scenarios: questionnaire results from patient and partner
Peggy Nelson, Liz Anderson, Timothy Beechey

Hearing-related questionnaires can reveal much about the daily experience of hearing aid users. Nonetheless, results may not fully reflect lived experience for several reasons, including users’ limited awareness of all communication challenges, the limits of memory, and the subjective nature of reporting. Multiple factors can influence results obtained from questionnaires (Nelson et al. ASA Louisville). Considering the perspectives of both hearing aid wearers and their communication partners may better reflect the challenges of two-way everyday communication. We have developed simulations of challenging conversational scenarios so that clients and their partners can judge sensory aid performance in realistic but controlled conditions. Listeners with hearing loss and their partners use a client-oriented scale (adapted from the COSI; Dillon, 1997) to report challenging real-life listening conditions such as small-group conversations, phone conversations, health reports, and media. Representative scenarios are simulated in the laboratory, where clients and partners rate intelligibility, quality, and preference. Results are compared to outcome measures such as the Speech, Spatial and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) and the Social Participation Restrictions Questionnaire (SPaRQ; Heffernan et al., 2018). Results will help refine methods for evaluating the performance of emerging technologies for hearing loss.

    Laryngeal Vibration as a Treatment for Spasmodic Dysphonia
    Jürgen Konczak, Divya Bhaskaran, Naveen Elangovan, Arash Mahnan, Jinseok Oh, Peter Watson, Yang Zhang

    We are conducting research on patients with spasmodic dysphonia (SD), a voice disorder that leads to strained or choked speech. Current therapeutic options for treating SD are very limited: the disorder does not respond to conventional speech therapy and is treated primarily with botulinum toxin injections, which provide only temporary symptom relief. There is no cure for SD.

    Vibro-tactile stimulation (VTS) is a non-invasive neuromodulation technique that our laboratory developed for people with SD. In preliminary research, our team documented that a one-time 30-minute application of VTS can result in measurable improvements in the voice quality of people with SD. 

    We are conducting a longitudinal study, funded by the National Institutes of Health, investigating the possible long-term benefits of this approach for improving the voice symptoms of people with SD. Study participants will administer VTS at home by themselves for up to 8 weeks. Researchers will assess their voice quality and monitor the corresponding neurophysiological changes in the brain using electroencephalography in the laboratory at the beginning, in the middle, and at the end of the in-home VTS training. The findings of the study will inform patients and clinicians about the possible impact of this therapeutic approach and could promote the development of wearable VTS devices that would expand the therapeutic options for treating voice symptoms in SD.

    2019


    Respiratory Sinus Arrhythmia in Varying Social-Communicative and Cognitive Contexts in Typically Developing Children and Children with Autism Spectrum Disorder
    Katherine Bangert, Lizbeth Finestack

    This study examines physiological responses in children with ASD compared to typically developing children while they complete tasks that vary in social and cognitive demands. The primary measure of interest is respiratory sinus arrhythmia (RSA), an index of parasympathetic nervous system influence on the heart. Specifically, we are investigating whether RSA differs between a resting baseline and experimental tasks that involve greater social-communicative interaction or cognitive effort within groups (for example, a narrative task versus a conversation, or a conversation versus an executive functioning task). We are also comparing diagnostic groups to determine whether RSA differs at baseline and/or during social-communicative or cognitive contexts. Findings will inform the use of RSA as a diagnostic marker, a severity indicator, and a tool to aid in the prognosis of language and social abilities in children with ASD.
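
    For readers unfamiliar with how RSA is commonly quantified, the sketch below shows one standard estimate: the natural log of heart-period power within a respiratory frequency band, computed from R-peak times. The band limits, resampling rate, and function names here are illustrative assumptions, not necessarily the pipeline used in this study.

    ```python
    # Illustrative sketch of a common RSA estimate (natural log of heart-period
    # power in a respiratory frequency band). Band limits and resampling rate
    # are assumptions for demonstration only.
    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import welch

    def estimate_rsa(r_peak_times_s, band=(0.24, 1.04), fs_resample=4.0):
        """Estimate RSA from R-peak times (in seconds).

        band: respiratory band in Hz (children are often analyzed with a
              higher band than adults; these limits are illustrative).
        """
        r_peak_times_s = np.asarray(r_peak_times_s, dtype=float)
        ibi = np.diff(r_peak_times_s)              # inter-beat intervals (s)
        ibi_times = r_peak_times_s[1:]             # time stamp of each interval

        # Resample the unevenly spaced IBI series onto a regular time grid.
        t_grid = np.arange(ibi_times[0], ibi_times[-1], 1.0 / fs_resample)
        ibi_resampled = interp1d(ibi_times, ibi, kind="cubic")(t_grid)

        # Power spectral density of the detrended heart-period series.
        freqs, psd = welch(ibi_resampled - ibi_resampled.mean(), fs=fs_resample,
                           nperseg=min(256, len(ibi_resampled)))

        # Integrate power within the respiratory band and take the natural log.
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return np.log(np.trapz(psd[in_band], freqs[in_band]))
    ```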

    The Effects of Spatial Remapping with Simulated Central Vision Loss
    Steve Engel, PhD, Psychology and Rebecca Dorfman, graduate student

    Remapping is a method to preserve visual information for age-related macular degeneration patients with central field loss (CFL). Specifically, text that would fall within the central blind spot is moved to virtually “wrap around” the blind spot into areas of better vision in the visual field. This study assessed the hypothesis that daily training with remapping under simulated CFL would yield faster reading speeds. Previous studies have shown the benefits of remapping: it produces faster reading speeds than reading with simulated CFL and no remapping. However, no study has assessed the impact of daily training on reading speeds with remapping. Over the course of three days, normally sighted subjects read with simulated CFL, unmonitored, for 30 minutes per day. Reading speeds were measured with standardized sentences before and after training each day.

    Audiovisual Enhancement in the Perception of Children’s Speech
    Benjamin Munson, Jacquelyn Karisny, Gisela Smith, Kristi Oeding, Alexandra Hagen

    It is nearly axiomatic that audiovisual (AV) speech is more intelligible than audio-only (A-only) speech, particularly when the speech is presented in a challenging listening environment, such as in background noise [e.g., MacLeod and Summerfield, Br. J. Audiol., 2 (1987)]. No previous research on audiovisual speech perception has examined the perception of children’s speech. Children may elicit a smaller AV benefit than adults, as their visual articulatory movements are more variable than adults’ [e.g., Smith and Goffman, J. Speech Lang. Hear. Res. 41 (1998)] and hence are less informative perceptual cues. Alternatively, the lower overall intelligibility of children’s A-only speech might lead them to elicit higher AV benefits than adults. To examine this question, we collected developmentally appropriate sentence productions from five 4- to 6-year-old children and five sex-matched adults. Ongoing work is examining the intelligibility of these sentences in multitalker babble in A-only and AV conditions at a variety of signal-to-noise ratios, so that we can compare AV benefits for children and adults when A-only intelligibility is matched. Both sentence intelligibility and eye gaze during perception are being measured. Results will help us understand the role of individual-speaker variation in the magnitude of AV benefit.

    Auditory stream segregation based on localization cues in normal hearing and hearing-impaired listeners
    Andrew Oxenham, PhD, Psychology, and Heather Kreft, Researcher

    Hearing out one voice amid a complex auditory background is a real challenge for hearing-impaired (HI) listeners. The aim of this study was to investigate to what extent HI listeners can make use of localization cues to segregate speech sequences. The performance of HI listeners, with and without their own hearing aids, was compared to that of older and younger listeners with normal hearing. Listeners were presented with sequences of speech sounds, each consisting of a consonant and a vowel (CV). The CV tokens were concatenated into interleaved sequences that alternated in position in both the horizontal and median planes. Listeners were asked to focus only on the loudspeaker facing them and to detect whether or not a repeated token was introduced within that sequence, so that performance improved if listeners were able to perceptually segregate the two sequences.

    Localization of Target Sound Sources in the Presence of Distractor Sources
    Mark A. Stellmack

    These experiments explore human auditory localization in multi-source environments, or, more specifically, the ability of listeners to localize a target sound source in the presence of distractor sound sources.  A substantial amount of previous research has been conducted with stimuli presented over headphones, demonstrating "binaural interference" effects, but relatively little research has been performed in the free-field with stimuli presented over loudspeakers.  Free-field listening introduces stimulus and response features that are not present in headphone listening, such as perceptually externalized sound sources and the presence of head-movement cues.  These experiments utilize a laser-pointer response system, whereby the listener points to the perceived location of the sound source on a screen that conceals the loudspeakers.  Infrared cameras detect the position of the laser-pointer device, which is used to calculate the azimuth and elevation angles of the position of the laser dot on the screen relative to the listener's head position.  These experiments examine the effects on target localization of various stimulus parameters, such as the frequency characteristics and relative positions of the target and distractors, and simulated movement of the distractors.  The goal of the research is to identify stimulus characteristics that facilitate the perceptual segregation of sound sources in real-world listening conditions.
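
    As an illustration of the final step in the response-measurement pipeline described above, the sketch below converts a laser-dot position and a head position (both in a common room coordinate frame) into azimuth and elevation angles. The coordinate convention and function names are assumptions for demonstration, not the lab's actual calibration code.

    ```python
    # Illustrative geometry for converting a laser-dot position on the screen
    # into azimuth/elevation angles relative to the listener's head. The frame
    # (x: right, y: straight ahead, z: up, angles in degrees) is an assumption.
    import numpy as np

    def dot_to_azimuth_elevation(dot_xyz, head_xyz):
        """Return (azimuth_deg, elevation_deg) of a point relative to the head.

        Azimuth is measured in the horizontal plane (positive to the right of
        straight ahead); elevation is the angle above the horizontal plane.
        """
        v = np.asarray(dot_xyz, dtype=float) - np.asarray(head_xyz, dtype=float)
        x, y, z = v                                  # right, front, up components
        azimuth = np.degrees(np.arctan2(x, y))       # 0 deg = straight ahead
        elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))
        return azimuth, elevation

    # Example: a dot 0.5 m to the right and 2 m in front, at ear height.
    print(dot_to_azimuth_elevation([0.5, 2.0, 0.0], [0.0, 0.0, 0.0]))
    ```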

    Dual Sensory Loss: Audiovisual Interaction in Speech Perception
    Ying-Zi Xiong, Nam Anh Nguyen, Peggy Nelson, Gordon E. Legge

    Dual Sensory Loss (DSL) is the co-occurrence of vision and hearing loss. The prevalence of DSL rises steeply with age and is expected to increase as the American population ages. People with DSL have increased difficulties in daily functioning, because many real-life tasks require both vision and hearing. Despite extensive attention to the rehabilitation of vision and hearing loss separately, very little attention has been paid to the specific challenges faced by people with DSL. We are conducting a series of studies focusing on the spatial localization and speech perception of people with DSL, both of which are important for safe mobility and social interaction. Our long-term goal is to improve the quality of life and independence of people with DSL by optimizing the use of their residual vision and hearing.

    People with hearing loss have trouble understanding speech, especially in complex situations with multiple talkers speaking from different locations. In such social interactions, vision and hearing frequently interact with and complement each other. A common example is that we can understand a person’s speech better by looking at their mouth movements. In addition to this direct audiovisual interaction, knowing a speaker’s location can also make it easier to attend to and understand their speech among competing voices from other spatial locations. In this study we ask whether people with DSL can still benefit from their residual vision in speech perception. Our hypothesis is that people with DSL might have limited benefit from visual speech because of limited visual acuity and contrast sensitivity, but the benefit of vision in directing auditory spatial attention to a target speaker might be preserved.

    2018


    Isolating Neural Correlates of Streaming and Attention to Components within Complex Tones
    Hao Lu, Andrew J. Oxenham

    Alternating sequences of harmonic complex tones can form separate auditory streams, based on differences in fundamental frequency (F0) between the two tones. Attention can enhance the cortical representation of the attended stream, relative to the unattended stream, as measured via EEG. However, it is not known whether such enhancement can be observed at the level of individual components within complex tones. In particular, it is unclear whether or how a frequency component that is common to both tones is modulated by attention. To study this question, we used amplitude modulation (AM) at rates around 40 Hz to generate auditory steady-state responses (ASSRs) that tagged certain frequency components within the complex tones of the attended or unattended stream. We collected EEG data from normal-hearing young participants as they performed an auditory task. Participants were instructed to selectively listen to either the low (700-Hz) or high (1050-Hz) stream and to report the number of level oddballs in the attended stream at the end of each trial. Some of the frequency components in each complex were AM-tagged, and the EEG response to each tagged component was measured and quantified using the multi-channel phase-locking value (PLV).
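
    As background for the analysis described above, the sketch below shows one common way to compute a phase-locking value at an AM tag frequency: take the phase of each trial's spectrum at the tag frequency, average the unit phase vectors across trials, and take the magnitude, separately for each EEG channel. The dimensions, tag rate, and simulated data are illustrative assumptions, not the study's actual analysis code.

    ```python
    # Minimal sketch of a phase-locking value (PLV) computed at an AM tag
    # frequency across trials, per EEG channel. All parameter values are
    # illustrative assumptions.
    import numpy as np

    def plv_at_frequency(epochs, fs, tag_hz):
        """epochs: array of shape (n_trials, n_channels, n_samples).

        Returns one PLV per channel: the magnitude of the across-trial mean
        of the unit phase vector at the tag frequency.
        """
        n_trials, n_channels, n_samples = epochs.shape
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        bin_idx = np.argmin(np.abs(freqs - tag_hz))  # FFT bin nearest the tag rate

        spectra = np.fft.rfft(epochs, axis=-1)[:, :, bin_idx]  # (trials, channels)
        unit_phasors = spectra / np.abs(spectra)                # keep phase only
        return np.abs(unit_phasors.mean(axis=0))                # one PLV per channel

    # Example with simulated data: 60 trials, 32 channels, 1-s epochs at 1 kHz,
    # containing a phase-locked 41-Hz component embedded in noise.
    rng = np.random.default_rng(0)
    fs, tag = 1000, 41.0
    t = np.arange(fs) / fs
    epochs = 0.1 * np.sin(2 * np.pi * tag * t) + rng.standard_normal((60, 32, fs))
    print(plv_at_frequency(epochs, fs, tag).round(2))
    ```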

    Neural Markers of Auditory Enhancement
    Anahita Mehta, Lei Feng, Andrew J. Oxenham

    Context plays a crucial role in auditory perception, providing us with perceptual constancy – the ability to recognize and identify acoustic sources, such as a musical instrument, a particular talker, or individual phonemes and words, despite variations in room acoustics and speaker characteristics and the presence of background noise. One example of context effects in audition is a phenomenon known as ‘auditory enhancement’, in which a target tone in a simultaneous masker becomes perceptually more salient if the masker itself, termed the precursor, is presented first. This phenomenon reflects the general principles of contrast enhancement. Behaviorally, a substantial amount of research has been done to understand the acoustic parameters under which auditory enhancement occurs. However, the physiological mechanisms underlying auditory enhancement are still largely unknown. In this study, we used the auditory steady-state response (ASSR), an EEG measure, to investigate brain responses to sounds with and without auditory enhancement. Through these EEG recordings, we aim to test the hypothesis that the neural response to the target tone is amplified when the precursor is present. We use a paradigm in which we differentially 'tag' the target and the maskers with different amplitude-modulation rates so that we can separate the response patterns to these components in the EEG. Across four separate EEG experiments, we have established and replicated findings that provide the first cortical neural correlates of auditory enhancement. In future work, we will explore the behavioral and neural effects of directed attention on enhancement.
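
    To illustrate the tagging idea referred to above, the sketch below generates a target tone and a multi-tone masker with different amplitude-modulation rates, so that their steady-state responses fall in distinct frequency bins of the EEG spectrum. All carrier frequencies, modulation rates, and depths are made-up example values, not the stimuli used in these experiments.

    ```python
    # Illustrative stimulus sketch: a target tone and a multi-tone masker are
    # given different amplitude-modulation "tag" rates so their steady-state
    # responses can be separated in the EEG spectrum. Values are illustrative.
    import numpy as np

    fs = 44100                      # audio sample rate (Hz)
    dur = 1.0                       # stimulus duration (s)
    t = np.arange(int(fs * dur)) / fs

    def am_tone(carrier_hz, mod_hz, mod_depth=1.0):
        envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_hz * t)
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    target = am_tone(1000.0, 41.0)                                   # tagged at 41 Hz
    masker = sum(am_tone(f, 37.0) for f in (600.0, 1400.0, 2200.0))  # tagged at 37 Hz

    mix = target + masker
    mix /= np.max(np.abs(mix))      # normalize to avoid clipping before playback
    ```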

    Characterization of Neural Responses to Continuous-Speech in Populations with and without Speech-in-Noise Perception Difficulties
    Juraj Mesik, Magdalena Wojtczak

    Speech understanding is critical in our daily lives, yet difficulties with speech-in-noise (SIN) understanding are common among older and hearing-impaired populations. At the same time, existing behavioral and objective measures of SIN deficits may not be sufficiently sensitive, as they do not correlate closely with subjective reports of real-life speech perception difficulties. In our project, we seek to test whether neurophysiological measures of continuous-speech processing could serve as an additional objective metric of SIN processing deficits. Specifically, in our studies participants listen to narrative speech in the presence of distracting sounds, such as other speakers, while we measure their brain activity using electroencephalography (EEG). We then use computational models to extract brain responses both to low-level speech features, such as variations in the speaker's voice, and to higher-level features related to speech meaning. We will compare these measures across participant populations experiencing different degrees of SIN difficulty to learn whether and how their speech processing differs. Scientifically, these results may help us better understand which components of speech processing are impaired in individuals with SIN difficulties. Depending on the findings, our methods may one day also gain practical utility as an additional objective diagnostic test for SIN processing deficits.
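
    One common family of computational models for relating continuous speech to EEG is the temporal response function: a lagged, ridge-regularized linear regression from a stimulus feature, such as the speech envelope, to the recorded EEG. The sketch below shows that idea under assumed dimensions and regularization; it is offered as background and is not necessarily the specific model used in this project.

    ```python
    # Sketch of a lagged linear encoding model ("temporal response function"):
    # ridge regression from a stimulus feature (e.g., the speech envelope) to
    # one EEG channel. Lag range and regularization are illustrative.
    import numpy as np

    def lagged_design(stimulus, n_lags):
        """Stack delayed copies of the stimulus as regression columns."""
        n = len(stimulus)
        X = np.zeros((n, n_lags))
        for lag in range(n_lags):
            X[lag:, lag] = stimulus[: n - lag]
        return X

    def fit_trf(stimulus, eeg, n_lags, ridge_lambda=1.0):
        """Return TRF weights (one per lag) mapping the stimulus to the EEG."""
        X = lagged_design(stimulus, n_lags)
        XtX = X.T @ X + ridge_lambda * np.eye(n_lags)  # ridge-regularized normal equations
        return np.linalg.solve(XtX, X.T @ eeg)

    # Example: recover a simulated 300-ms response at 100-Hz sampling.
    rng = np.random.default_rng(1)
    fs, n_lags = 100, 30
    envelope = rng.standard_normal(60 * fs)            # 60 s of "speech envelope"
    true_trf = np.hanning(n_lags)                      # made-up brain response
    eeg = np.convolve(envelope, true_trf)[: len(envelope)] + rng.standard_normal(60 * fs)
    print(np.round(fit_trf(envelope, eeg, n_lags), 2))
    ```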

    Setting Up the Display++ for Future Vision Research
    Walter Wu, Gordon Legge

    The goal of this project is to set up the Display++ for future vision research. The Display++ is a widely used LCD display for vision research that offers precise timing control and calibrated contrast of visual stimuli. First, we hope to derive the gamma function of the Display++ for setting up experiments; the gamma function can then be implemented in MATLAB or Python code for precise stimulus presentation. Second, a psychophysics experiment will be set up to test participants’ contrast sensitivity functions (CSFs). The CSFs will be compared with past results to test the reliability of visual stimuli presented on the Display++. These two steps are important for establishing a reliable setup for future vision research conducted in CATSS.
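
    As an illustration of the calibration step described above, the sketch below fits a simple power-law gamma function to a handful of photometer readings and builds an inverse lookup table so that requested intensities map linearly to luminance. The measurement values, model form, and parameter names are illustrative assumptions, not the lab's actual calibration data or code.

    ```python
    # Sketch of display gamma calibration: fit a power function to measured
    # luminance and build an inverse lookup table for linearized presentation.
    # The readings below are made-up example values.
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_model(v, gain, gamma, offset):
        """Predicted luminance (cd/m^2) for a normalized gun value v in [0, 1]."""
        return gain * np.power(v, gamma) + offset

    # Example photometer readings at nine normalized gun values.
    gun_values = np.linspace(0.0, 1.0, 9)
    luminance = np.array([0.2, 1.1, 4.0, 9.5, 17.8, 29.0, 43.5, 61.0, 82.0])

    params, _ = curve_fit(gamma_model, gun_values, luminance,
                          p0=(80.0, 2.2, 0.2),
                          bounds=([0.0, 0.5, 0.0], [1000.0, 5.0, 10.0]))
    gain, gamma, offset = params

    # Inverse lookup table: for each desired (linearly spaced) luminance, the
    # gun value that produces it. A table like this would be loaded before
    # stimulus presentation (e.g., in Psychtoolbox or PsychoPy).
    desired = np.linspace(luminance.min(), luminance.max(), 256)
    inverse_lut = np.clip((desired - offset) / gain, 0.0, None) ** (1.0 / gamma)
    ```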

    Dual Sensory Loss: Impact of Vision and Hearing Impairment on Spatial Localization
    Ying-Zi Xiong, Douglas A. Addleman, Peggy Nelson, Gordon E. Legge

    Dual Sensory Loss (DSL) is the co-occurrence of vision and hearing loss. The prevalence of DSL rises steeply with age and is expected to increase as the American population ages. People with DSL have increased difficulties in daily functioning, because many real-life tasks require both vision and hearing. Despite extensive attention to the rehabilitation of vision and hearing loss separately, very little attention has been paid to the specific challenges faced by people with DSL. We are conducting a series of studies focusing on the spatial localization and speech perception of people with DSL, both of which are important for safe mobility and social interaction. Our long-term goal is to improve the quality of life and independence of people with DSL by optimizing the use of their residual vision and hearing.

    This study focuses on the spatial localization abilities of people with DSL. Spatial localization refers to determining the direction of sounds and visual objects in the environment, and it is important for safe mobility and effective social interaction. The visual and auditory systems frequently work together to localize objects and events in the world, so dual sensory loss cannot be fully understood through separate analyses of vision and hearing loss. In this study we focus specifically on the interaction between vision and hearing in spatial localization, to address the particular difficulties people with DSL may have when localizing objects and events.

    2017


    The Role of Attentional Learning in Audiovisual Search
    Doug Addleman, Yuhong Jiang

    The research we do in CATSS tries to understand how people learn to look and listen in a cluttered world. Participants in our studies search for target objects among distracting ones, such as a number spoken at the same time as letters, or a visually presented number among letters. We use these tasks to study how people learn new strategies for searching as they go about these tasks, identifying ways that people improve at finding targets over time when their locations are more predictable or less predictable. We’re also interested in how the brain translates what it learns about the locations of visual objects to auditory ones and vice-versa. These findings improve our understanding of how the brain learns to process visual and auditory information, and we plan to use this information to inform studies of how this type of learning can be useful for people with sensory loss.

    Sensitivity to Binaural Temporal-Envelope Beats with Single-Sided Deafness and a Cochlear Implant as a Measure of Tonotopic Match
    Coral Dirks, Peggy Nelson, Matt Winn, Andrew Oxenham

    Current cochlear implant (CI) fitting strategies aim to maximize speech perception through the CI by allocating all spectral information across the electrode array without detailed knowledge of the tonotopic placement of each electrode along the basilar membrane. For patients with considerable residual hearing in the non-implanted ear, this approach may not be optimal for binaural hearing. This study uses binaural temporal envelope beat sensitivity to estimate frequency-to-electrode matching in the CI ear. Objective and subjective outcomes will provide new information on binaural interactions in this patient population and guide methodology for frequency-matching remapping efforts.
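
    As background for the psychophysical measure named above, the sketch below shows the basic acoustic construction behind binaural temporal-envelope beats: the same carrier is presented to the two ears with slightly different amplitude-modulation rates, so the interaural envelope phase cycles at the difference rate. In the actual study one side is a cochlear implant (an electric pulse train) rather than an acoustic tone; all values here are illustrative assumptions.

    ```python
    # Sketch of binaural temporal-envelope beats in an acoustic simulation:
    # each ear receives the same carrier with a slightly different AM rate,
    # producing a 1-Hz interaural envelope beat. Values are illustrative.
    import numpy as np

    fs, dur = 44100, 2.0
    t = np.arange(int(fs * dur)) / fs
    carrier_hz = 500.0

    def am_signal(mod_hz):
        return (1.0 + np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

    left = am_signal(40.0)            # 40-Hz envelope in the left ear
    right = am_signal(41.0)           # 41-Hz envelope in the right ear -> 1-Hz beat
    stereo = np.stack([left, right], axis=1)
    stereo /= np.max(np.abs(stereo))  # normalize before presentation
    ```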

    Understanding and Improving the Visual Function Provided by a Visual Prosthetic Device
    Sandra Montezuma, Yingchen He

    The broad aim of this work is to use our knowledge of the neuroscience of vision to enhance the quality of life of people with blindness. Specifically, the focus is on optimizing the treatment plan for people with retinitis pigmentosa (RP) by maximizing visual function from the Argus II retinal prosthesis. RP cannot be treated with currently available medical or surgical options. The first and only visual prosthesis approved by the FDA, the Argus II retinal prosthesis system (Second Sight Medical Products; Sylmar, CA), became available in the United States in 2013 for the treatment of blind patients with end-stage RP. However, the vision provided by the device is rudimentary, and it is possible that blind patients do not process visual information in the same way as normally sighted people because of their long-term visual deprivation. Since retinal prostheses are still in their infancy, much remains unknown, both about the technology itself and about the brain function of patients who have been blind for decades. Thus, our research has two aims. On the theoretical side, we used brain imaging (electroencephalography, EEG, and functional near-infrared spectroscopy, fNIRS) to better understand brain plasticity in these retinal prosthesis users who have lost and regained sight. On the practical side, we explored the utility of incorporating novel features, such as thermal vision, into the prosthesis in order to improve the functional vision provided by the device. These studies are crucial for the further development of visual prostheses as well as for advancing our understanding of cortical plasticity when blind individuals regain sight.

    Measuring Listening Effort and Localization Using Functional Near-Infrared Spectroscopy
    Dorea Ruggles, Kristi Oeding

    This study evaluated the use of functional near-infrared spectroscopy (fNIRS) for examining listening effort. The technology is fairly new in the auditory field, so we sought to evaluate how sensitive it is for examining listening effort in listeners with normal hearing. We examined how speech intelligibility in noise affected listening effort under several conditions: speech was vocoded so that it was slightly or significantly degraded, and participants wore hearing aids set to either a quiet (omnidirectional) or a noise (directional) program. We also examined a spatial memory task in which participants had to remember where sounds had originated; for this task, participants again wore the hearing aids in the quiet or noise program. In previous studies, and in a pilot study of our own, the frontal eye fields were activated, and we wanted to see whether we could replicate those findings. Our hope was to evaluate how sensitive fNIRS is at detecting effects of speech degradation and changes in hearing aid programs.

    Association of Musical Training with Auditory and Speech Neural Coding and Perception
    Kelly Whiteford, Juraj Mesik, Andrew J. Oxenham

    Numerous studies have reported a link between engagement in musical training and enhanced neural processing and perception of sound, ranging from fine-grained pitch discrimination to the perception of speech in noise, with training-related neural changes emerging as early in the auditory pathways as the brainstem or even the cochlea. Such findings suggest a role for experience-dependent plasticity in the early auditory system, which may have meaningful perceptual consequences. However, the generalizability of the musician advantage remains unclear. For example, small samples often represent the extreme ends of the musical spectrum; the reported musician advantages are sometimes small or inconsistent in nature and magnitude; and comparisons between studies are complicated by methodological differences and varying analytical techniques. This multi-site study aims to examine the robustness of the musician advantage across the adult lifespan by replicating and extending eight key experiments involving both perception and neural coding in a large sample of listeners.

    Project website: www.soundbrainscience.com

    2016


    Evaluation of Visual Attention to Images by Adults with Traumatic Brain Injury
    Jessica Brown, PhD SLHS

    Survivors of traumatic brain injury (TBI) often experience cognitive and visual deficits that affect their participation in activities of daily living and other tasks requiring attention and information processing in the visual modality. The impact these deficits have on performance requires that professionals understand the most efficient ways to assess and treat individuals within this population. In order to provide health care professionals with this information, it is necessary to ensure that the visual supports we use with these individuals are effective; eye tracking is one way to accomplish this goal. Therefore, the purpose of this study is to determine the ways in which individuals with TBI identify and process explicit and inferential information from given image stimuli. The researchers will implement a between-groups, repeated-measures design including adults with and without histories of severe traumatic brain injury. Participants will view sixty stimulus screens presented on an eye-tracking monitor and select the image (from a field of four) that best matches a provided written sentence. An equal number of stimulus sentences will address the main character/action, background detail, and physical/mental inference of the target image.

    Visual Attention of Adults with Traumatic Brain Injury to Various Daily Planner Supports
    Jessica Brown, PhD SLHS

    There is substantial evidence documenting visual acuity and perceptual deficits in people with neurological impairments, but the impact of these deficits on daily activities remains unclear to clinicians. In clinical and research environments, attention is focused on external aids for cognition (e.g., daily planners, pictures, written procedures), with the most successful clinical techniques involving external supports. Researchers have yet to empirically study content representation methods within these support frameworks. Understanding the visual-perceptual skills of adults with TBI and their attention to cognitive supports presented in various visuographic forms is necessary to help determine material efficacy and provide selection guidelines.

    The first purpose of this investigation is to measure the visual attention patterns of 20 adults with a history of severe TBI and 20 individuals with no neurological impairment when viewing cognitive supports across three visuographic modifications. This aims to identify and evaluate modifications in content representation that will effectively direct the visual attention of adults with TBI. The second purpose is to examine the effects of these three visuographic modifications across three layout variations in order to identify and evaluate presentation changes capable of guiding visual attention of adults with TBI. 

    Mapping Patterns of Brain Activation While Watching a Digital Learning Media Prototype
    Juergen Konczak, PhD Kinesiology, Sanaz Khosravani, Kinesiology

    The goal of this exploratory study is to determine which cortical areas are activated by watching scenes derived from commercial learning media on baseball pitching. Specific emphasis is placed on understanding which cortical motor and premotor regions or frontal lobe areas known to be active during observational motor learning (i.e., the mirror neuron system) are activated. The spatial activation map associated with watching Visyn Video Segments (VVS) will be contrasted against a series of control conditions to determine whether watching VVS activates those areas known to be active during observational motor learning or during mental imagery of performing a skill.

    The project requires participants to watch or imagine different scenes associated with baseball pitching while their brain signals are recorded using established electroencephalography technology. The scenes include upper-limb tasks such as reaching and grasping, fastball pitching (from commercially available video footage and a commercially available instructional video), and, finally, the Visyn company video segments presenting fastball pitching. The recorded signals from different cortical areas will later be analyzed, and different aspects of brain activity will be extracted by applying mathematical techniques to the collected data, providing insight into activation patterns over, within, and between cortical areas as a function of watching the different training videos.

    Robot-Aided Training of the Proprioceptive Sense to Improve Motor Function for People with Stroke
    Juergen Konczak, PhD Kinesiology, I-Ling Yeh, Kinesiology

    Proprioception refers to the perception of limb motion or position and the orientation of one’s body in space. Impaired proprioception is observed in people with neurological conditions such as stroke, and proprioceptive deficits are associated with poor upper-limb motor function that impairs activities of daily living. Specifically, this study aims to determine whether a robot-aided intervention regimen that requires users to make active wrist movements without vision can improve proprioception and motor function. We assess participants’ ability to discriminate wrist positions, their wrist movement accuracy, and somatosensory evoked potentials measured by electroencephalography as indicators of intervention effectiveness. If successful, this regimen could be a supplementary approach to enhancing motor recovery for people with stroke.

    Auditory Stream Segregation Based on Localization Cues in Normal Hearing and Hearing-Impaired Listeners
    Andrew Oxenham, Marion David, Alexis Tausend, Heather Kreft

    Hearing out one voice amid a complex auditory background is a real challenge for hearing-impaired (HI) listeners. The aim of this study was to investigate to what extent HI listeners can make use of localization cues to segregate speech sequences. The performance of HI listeners, with and without their own hearing aids, was compared to that of older and younger listeners with normal hearing. Listeners were presented with sequences of speech sounds, each consisting of a consonant and a vowel (CV). The CV tokens were concatenated into interleaved sequences that alternated in position in both the horizontal and median planes. Listeners were asked to focus only on the loudspeaker facing them and to detect whether or not a repeated token was introduced within that sequence, so that performance improved if listeners were able to perceptually segregate the two sequences.