Area:
auditory system, cochlear implants, speech perception
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Ying-Yee Kong is the likely recipient of the following grants.
Years |
Recipients |
Code |
Title / Keywords |
Matching score |
2008 — 2010 |
Kong, Ying-Yee |
R03. Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable. |
Cross-Frequency Integration For Speech Recognition in Bimodal Hearing @ Northeastern University
DESCRIPTION (provided by applicant): The benefit of combined electric and acoustic stimulation in speech and pitch perception has been demonstrated in a number of studies. However, the amount of combined benefit reported varies across studies and across test materials and conditions. Some patients did not demonstrate any combined benefit, and on rare occasions some even exhibited a potential incompatibility between the two types of stimulation. Several attempts have been made to relate combined acoustic and electric hearing benefits to some measure of auditory function in the residual hearing, but significant correlations were not found. This study focuses on bimodal hearing, in which listeners receive electric stimulation in one ear and acoustic stimulation in the contralateral ear. The long-term goals of this project are (1) to understand the processing of speech in CI listeners who receive combined acoustic and electric stimulation, and (2) to provide a basis for the development of rehabilitation strategies for improving speech recognition in CI listeners. The specific aims are (1) to identify the speech information extracted in electric hearing in the high-frequency regions and in residual acoustic hearing in the low-frequency regions; (2) to investigate how the extracted information from each ear is integrated in normal-hearing and cochlear-implant listeners; and (3) to relate phoneme recognition performance to sentence recognition performance. We will apply several well-developed speech integration models, including a simple probabilistic model, the Fuzzy Logic Model of Perception, and Pre-Labeling and Post-Labeling models, to predict intelligibility scores for combined hearing performance.
This model-based approach provides the means to systematically study the differences in the abilities of cochlear-implant listeners to simultaneously extract speech information from acoustic and electric stimulation and integrate this information across ears. The proposed work is of high clinical relevance because it may help identify deficits in information extraction and/or integration encountered by implant users on an individual basis and aid in developing rehabilitative strategies tailored to individual needs. Relevance: The purpose of this study is to investigate how speech information is integrated across ears in individuals who wear a cochlear implant in one ear and a hearing aid in the opposite ear. The proposed work is of high clinical relevance because it may help identify the problems encountered by implant users and aid in developing rehabilitative strategies tailored to individual needs.
|
0.942 |
2013 — 2017 |
Kong, Ying-Yee |
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Speech Perception With Combined Electric and Acoustic Stimulation @ Northeastern University
DESCRIPTION (provided by applicant): The benefit of combined electric and acoustic stimulation (EAS) for speech and pitch perception has been demonstrated in a number of previous studies. In some cases, EAS benefit has been documented even when cochlear-implant (CI) patients have very limited residual hearing and speech perception ability in the non-implanted ear. To date, it is still unclear how individual differences in sensory inputs, linguistic context, and cognitive functions influence the degree of benefit provided by EAS, and it is not known whether typical EAS patients utilize their residual hearing to its greatest potential. These uncertainties limit clinicians' and patients' ability to make good decisions related to second-ear implantation. In this research, we seek to identify factors that underlie EAS benefit and to investigate methods that could potentially enhance the benefits of residual hearing in EAS users. Unlike the descriptive approach employed by most previous studies, we will take a more comprehensive, model-based approach that considers both the bottom-up and top-down processes that contribute to multi-source speech perception in EAS users. Aim 1 will determine how EAS benefit is influenced by listeners' ability to utilize and optimally weight speech cues presented to the CI and residual-hearing ears. Aim 2 will investigate how bottom-up low-frequency acoustic cues and top-down processing (such as the use of linguistic context and the ability to fill in missing speech information) interact to improve speech intelligibility in EAS users. Finally, Aim 3 will develop and test speech-enhancement algorithms that are likely to improve speech perception by EAS users. 
Overall, this research should add substantially to our understanding of 1) the degree of benefit that can be expected from low-frequency residual hearing in EAS, 2) the mechanisms responsible for EAS benefit and the factors that account for its variability across individuals, and 3) the nature of signal-processing algorithms that may enhance speech perception in EAS users.
|
0.942 |
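The first abstract above names the Fuzzy Logic Model of Perception (FLMP) among the integration models to be fitted. As a rough illustration only (not the grant's actual implementation), FLMP combines the support each information source provides for a response alternative multiplicatively and normalizes across alternatives. The phoneme labels and support values below are hypothetical:

```python
def flmp_integrate(acoustic, electric):
    """Fuzzy Logic Model of Perception: multiplicative cue integration.

    acoustic, electric: dicts mapping response alternatives (e.g. phonemes)
    to support values in [0, 1] from each ear.
    Returns predicted identification probabilities after integration.
    """
    # Multiply the support from each source for every alternative ...
    combined = {p: acoustic[p] * electric[p] for p in acoustic}
    # ... then normalize so the predictions sum to 1.
    total = sum(combined.values())
    return {p: s / total for p, s in combined.items()}

# Hypothetical support values for /b/ vs /d/ from each ear
acoustic = {"b": 0.8, "d": 0.2}
electric = {"b": 0.6, "d": 0.4}
prediction = flmp_integrate(acoustic, electric)
```

With these hypothetical supports, the combined evidence for /b/ (0.8 × 0.6 = 0.48) dominates /d/ (0.2 × 0.4 = 0.08), so FLMP predicts /b/ with probability 0.48 / 0.56 ≈ 0.857. Fitting such a model to bimodal listeners' phoneme confusions is one way to separate deficits in cue extraction from deficits in integration.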