2021 |
Yoon, Yang-Soo |
R15 Activity Code Description: Supports small-scale research projects at educational institutions that provide baccalaureate or advanced degrees for a significant number of the Nation’s research scientists but that have not been major recipients of NIH support. The goals of the program are to (1) support meritorious research, (2) expose students to research, and (3) strengthen the research environment of the institution. Awards provide limited Direct Costs, plus applicable F&A costs, for periods not to exceed 36 months. This activity code uses multi-year funding authority; however, OER approval is NOT needed prior to an IC using this activity code. |
Speech Perception Enhancement Using Novel Signal Processing in Bimodal Hearing
Project Summary/Abstract
Speech perception for those who use cochlear implants (CIs) in combination with hearing aids (HAs) in opposite ears (i.e., bimodal hearing) varies greatly. This variability depends on the users' ability to process the frequency and time information critical for speech perception. By identifying and enhancing this acoustic information, speech perception can be significantly improved. In this AREA project, we aim to establish and verify a tailored identification scheme for the spectral and temporal cues responsible for consonant recognition. Our recent bimodal study shows that some frequency ranges and time segments of consonants are critical for consonant enhancement (called "target frequency or time ranges"), while other frequency and time ranges cause consonant confusions (called "conflicting frequency or time ranges"). Our Articulation Index-Gram (AI-Gram) signal processing can add or suppress intensity over these target and conflicting ranges. In Aim 1, we will determine the effect of dead regions on consonant recognition. Target and conflicting ranges will then be identified on an individual-subject basis for each consonant in the HA alone, the CI alone, and the CI+HA in quiet. The target frequency ranges will be determined by finding the frequency regions that create dramatic consonant enhancement, while the conflicting frequency ranges will be determined by finding the frequency regions that create consonant confusion. The target time ranges will be determined by systematically truncating each consonant and finding the segment responsible for dramatic consonant improvement. The target time ranges will also serve as the conflicting time ranges, because the conflicting frequency ranges would be the most detrimental factor affecting the target frequency ranges if the two coincide in time.
In Aim 2, consonant recognition will be measured in quiet and in noise under three AI-Gram processing conditions: 1) target ranges alone with +6 dB gain; 2) conflicting ranges alone with -6 dB suppression; and 3) both intensified target and suppressed conflicting ranges. For each AI-Gram processing condition, consonant recognition will be measured in the matched listening condition (e.g., the target or conflicting ranges identified in the HA alone will be presented in the HA-alone listening condition). To determine how unilateral detection ability affects bimodal benefit, consonants processed over the target or conflicting ranges identified in the HA alone and in the CI alone will each be presented in the CI+HA listening condition. This proposed work will identify the acoustic cues that contribute to bimodal benefit and will reveal how these cues are integrated, or interfere with one another, across modalities. Defining the relative impact of the target and conflicting ranges on the AI-Gram-sensitive consonants in the HA alone, the CI alone, and the CI+HA together will help determine and fine-tune the upper and lower cutoff frequencies of the HA and the CI. These data are much needed for our long-term goal: developing a tailored bimodal fitting procedure. The AREA project will provide clinical research opportunities for four undergraduate students per year at Baylor.
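The core manipulation above, boosting a "target" frequency-time range by +6 dB and suppressing a "conflicting" range by -6 dB, can be illustrated with a short-time Fourier transform. This is only a minimal sketch of the general gain/suppression idea, not the actual AI-Gram implementation; the function name, the STFT parameters, and all frequency and time ranges below are hypothetical placeholders.

```python
# Illustrative sketch of frequency-time-range gain/suppression (NOT the
# actual AI-Gram algorithm). All ranges and parameters are hypothetical.
import numpy as np
from scipy.signal import stft, istft

def apply_tf_gain(x, fs, f_range, t_range, gain_db, nperseg=256):
    """Scale STFT magnitude inside a frequency-time rectangle by gain_db."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    fmask = (f >= f_range[0]) & (f <= f_range[1])   # frequency bins in range
    tmask = (t >= t_range[0]) & (t <= t_range[1])   # time frames in range
    gain = 10.0 ** (gain_db / 20.0)                 # +6 dB -> ~2x amplitude
    Z[np.ix_(fmask, tmask)] *= gain                 # scale the rectangle
    _, y = istft(Z, fs=fs, nperseg=nperseg)         # resynthesize
    return y

# Usage: 1 s of noise standing in for a consonant token.
fs = 16000
x = np.random.randn(fs)
# +6 dB on a hypothetical target range, -6 dB on a hypothetical conflicting one.
y = apply_tf_gain(x, fs, f_range=(2000, 4000), t_range=(0.1, 0.3), gain_db=+6.0)
y = apply_tf_gain(y, fs, f_range=(200, 800), t_range=(0.1, 0.3), gain_db=-6.0)
```

Condition 3 in Aim 2 corresponds to chaining the two calls as shown; conditions 1 and 2 each use a single call with only the positive or only the negative gain.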