Area: acoustic, speech, music
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Jont Allen is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2008 — 2009 | Allen, Jont Brandon | R21 (Activity Code Description: To encourage the development of new research activities in categorical program areas. Support generally is restricted in level of support and in time.)
Snr-Loss in Hearing Impaired Ears @ University of Illinois Urbana-Champaign
DESCRIPTION (provided by applicant): The focus of this research is to model how people decode speech sounds. The main application of this knowledge is to solve the problem of "signal-to-noise ratio loss," or SNR-Loss, a condition of the hearing impaired in which a person is hypersensitive to noise. Even under optimal frequency-dependent amplification, a person with SNR-Loss requires a better SNR than a normal-hearing person to understand speech at the same score. Existing methods for diagnosing SNR-Loss do not provide insight into the nature of the problem. We wish to develop a measure of SNR-Loss that identifies which sounds are being lost, and at what SNR. In this research we have demonstrated that people with hearing loss have problems hearing only a small number of sounds, and that these confusable sounds differ from ear to ear, depending on the particular hearing loss. Of the 16 ears we have tested, we have not found any two that are the same. The hearing impaired (HI) have more trouble hearing speech in noise than normal-hearing people. We would like to provide a scientific basis for why this happens, by measuring the confusable sounds that each impaired ear finds most difficult. We have discovered that only a few sounds are confused, and that they differ from ear to ear. We shall answer the questions: "Specifically, which perceptual consonant features are not audible to each hearing-impaired listener?" and "How do this audibility and confusability change as a function of background noise and amplification?"
Matching score: 0.958
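The SNR-Loss described in the abstract above can be read as the extra signal-to-noise ratio, in dB, that a hearing-impaired (HI) listener needs in order to reach the same speech-recognition score as a normal-hearing (NH) listener. A minimal formulation, assuming a fixed reference score such as 50% correct (the reference point is an assumption, not something stated in the abstract), is

$$\text{SNR-Loss} = \text{SNR}_{\mathrm{HI}}(50\%) - \text{SNR}_{\mathrm{NH}}(50\%),$$

where each term is the SNR at which that listener reaches the reference score under the same amplification.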