Area:
Auditory Perception, Music Perception, Computational Audition, Auditory Scene Analysis, Natural Sound Statistics
Website:
http://mcdermottlab.mit.edu/
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
High-probability grants
According to our matching algorithm, Josh H. McDermott is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2016 — 2020 | Mcdermott, Josh H | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Auditory Scene Analysis With Complex Sounds @ Massachusetts Institute of Technology
PROJECT SUMMARY / ABSTRACT Perhaps the most pervasive problem faced by listeners with hearing impairment or cochlear implants is the difficulty of recognizing speech and other sounds in the presence of competing sound sources, as when conversing at a restaurant. This difficulty in "sound segregation" (hearing a particular sound of interest when it is embedded in a mixture of other sounds) often leads to frustration and social isolation, and is not adequately addressed by current hearing aids and implants. Sound segregation difficulties are also commonly reported in developmental auditory disorders. The long-term goal of the proposed research is to reveal the basis of sound segregation and to provide insights that will facilitate improved prosthetic devices and remediation strategies, as well as more effective machine systems for processing sounds, e.g. for automatic speech recognition. The development of more effective devices, technologies, and therapies is currently limited by an incomplete understanding of the factors that underlie sound segregation by normal-hearing listeners. In particular, little is known about sound segregation with complex naturalistic sounds, in part because much of the research in this area has been conducted using simple signals that are impoverished relative to the sounds listeners normally encounter. We propose to enrich the understanding of sound segregation with three sets of experiments that use novel sound synthesis methods to manipulate properties of natural speech and other sounds and test their role in segregation with behavioral experiments in human listeners. Aim 1 manipulates the classically proposed grouping cue provided by harmonic frequency relations and investigates the mechanisms underlying their effect. Aim 2 investigates the role of prior knowledge of voice and speech structure on segregation, and should help to explain why some voices are easier or harder to segregate than others.
Aim 3 investigates the role of attentive tracking in the segregation of sounds from mixtures, and will explore the factors that facilitate tracking or cause it to fail. The results will reveal the mechanisms underlying sound segregation by the healthy auditory system, and will provide insights into the factors that limit auditory comprehension in the presence of multiple sound sources, hopefully suggesting new strategies for signal enhancement, prosthetic devices, and behavioral remediation.
|
1 |
2019 — 2021 | Mcdermott, Josh H | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Computational Cognitive Neuroscience of Human Auditory Cortex @ Massachusetts Institute of Technology
PROJECT SUMMARY Humans with normal hearing excel at deriving information about the world from sound. Our auditory abilities represent stunning computational feats that only recently have been replicated to any extent in machine systems. And yet our auditory abilities are highly vulnerable, being greatly compromised in listeners with hearing impairment, cochlear implants, and auditory neurodevelopmental disorders, particularly in the presence of noise. Difficulties in recognition often lead to frustration and social isolation, and are not adequately addressed by current hearing aids, implants, and remediation strategies. The long-term goal of the proposed research is to reveal the basis of auditory recognition and to provide insights that will facilitate improved prosthetic devices and therapeutic interventions. The development of more effective devices and therapies is currently limited by an incomplete understanding of the factors that underlie real-world recognition by normal-hearing listeners. In particular, although responses to sound in subcortical auditory pathways are relatively well studied, little is known about the transformations that occur within the auditory cortex to create representations of meaningful sound structure. We propose to enrich the understanding of auditory recognition with three sets of experiments that examine the cortical representation of real-world sounds in human listeners, combining functional magnetic resonance imaging (fMRI) with computational modeling of the underlying representations. Aim 1 develops artificial neural network models of speech and music processing and compares their representations to those in the auditory cortex, synthesizing and then measuring brain responses to sounds that generate the same response in a model, and probing the time scale of the auditory analysis of speech and music. 
Aim 2 develops and tests models of pitch perception in noise, exploring the hypothesis that pitch perception is constrained both by the statistics of natural sounds and the frequency selectivity of the cochlea. Aim 3 develops and tests models that jointly localize and recognize sounds, and probes the brain representations of sound identity and location using fMRI. The results will reveal the mechanisms underlying robust sound recognition by the healthy auditory system and will set the stage for investigations of the cortical consequences of hearing impairment and auditory developmental disorders, hopefully suggesting new strategies for remediation.
|
1 |