We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to study how funding patterns influence mentorship networks (and vice versa), which has significant implications for how research is conducted.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Paul C. Nelson is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2005 — 2006 |
Nelson, Paul C |
F31 — To provide predoctoral individuals with supervised research training in specified health and health-related areas leading toward the research degree (e.g., Ph.D.). |
Physiological Correlates of Temporal Envelope Perception
The goal of this project is to relate physiological and psychoacoustical measures of responses to temporal envelope fluctuations. Behaviorally relevant sounds, such as speech, contain many amplitude-modulated features, but the neural mechanisms that underlie their perception remain unclear. This proposed research will critically test both existing and novel encoding hypotheses using human psychophysics, awake rabbit physiology, and computational modeling of auditory processing. By using the same type of stimulus in each of the three experimental paradigms, findings from one approach can be directly integrated into tests of the others. Audio-frequency stimuli that have been useful in determining the perceptual relevance of proposed decision variables will be translated into the modulation-frequency domain. One set of experiments will study the representation of sinusoidal modulation in the presence of an additional masking noise modulation. Masked-modulation detection thresholds have been measured psychophysically, and responses of inferior colliculus neurons will be obtained in matching stimulus conditions. Comparisons of competing model predictions will lend insight into cues used by the real system.
|
0.954 |
2007 — 2008 |
Nelson, Paul C |
F32 — To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Spectral and Temporal Interactions in the Awake Marmoset Inferior Colliculus @ Johns Hopkins University
DESCRIPTION (provided by applicant): The goal of this proposed work is to understand the neural mechanisms used to process stimulus components that are likely to arise from a single source, despite their distinct separation in frequency or time. The normal auditory system is remarkably good at grouping such components appropriately, but listeners with hearing impairment have considerable difficulty in situations that require the perceptual segregation of multiple competing sound sources. Psychophysical paradigms that highlight interactions between temporally or spectrally distant elements will be examined physiologically with single-unit extracellular recordings in the inferior colliculus (IC) of the awake marmoset. To investigate influences across time, the dependence of the neural response to a short probe on the history of preceding stimulation (forward masking) will be quantified and related to psychoacoustic studies. To study interactions across frequency, responses to sounds made up of widely spaced components that have been shown to disrupt listeners' abilities to analyze dynamic changes in a target channel will be recorded. The corresponding perceptual effect is known as modulation detection interference (MDI), and it cannot be explained by peripheral auditory responses. The temporal and spectral context-dependent representation of speech signals will also be studied with stimuli matching those used in behavioral paradigms. These experiments will help establish the role of the IC in the apparent transition from a peripheral code that exhibits weak temporal and spectral context dependence (with respect to perceptual measures) to a higher-order (cortical) representation that is heavily dependent on sound features removed in time or frequency from a target signal component.
Potentially confounding factors, such as anesthesia and species differences, are minimized in the proposed preparation by using an awake primate that uses vocalizations to communicate. RELEVANCE: Perhaps the most challenging environment for listeners with hearing impairment is one with many competing dynamic sound sources: the mixture tends to fuse into a single stream, making selective attention to one or more specific sources nearly impossible. To understand how the normal auditory system deals with this situation so effortlessly, single-cell responses in the auditory midbrain will be recorded while sounds are played that emphasize some of the acoustic features apparently used by the system to group (and potentially segregate) components generated by a single source. Results should suggest mechanisms and strategies used by the brain to process these sounds, potentially providing clues for improving artificial algorithms used for signal detection, scene analysis, and efficient representation of information.
|
1 |
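The matching algorithm itself is not described on this page. As a minimal sketch of how a name-based matching score between a researcher profile and a grant recipient could be computed (the token-overlap approach and the function names `name_tokens` and `match_score` below are illustrative assumptions, not the site's actual method):

```python
def name_tokens(name: str) -> set[str]:
    # Normalize a name like "Nelson, Paul C" or "Paul C. Nelson"
    # into a set of lowercase tokens, ignoring punctuation.
    return {t.strip(".,").lower() for t in name.split() if t.strip(".,")}

def match_score(researcher: str, grant_recipient: str) -> float:
    # Jaccard overlap between the two token sets: 1.0 means the
    # names share exactly the same tokens, 0.0 means no overlap.
    a, b = name_tokens(researcher), name_tokens(grant_recipient)
    return len(a & b) / len(a | b) if a | b else 0.0

# "Paul C. Nelson" and "Nelson, Paul C" normalize to the same tokens:
print(match_score("Paul C. Nelson", "Nelson, Paul C"))  # → 1.0
```

A real linkage system would also weight evidence such as institution, years of activity, and research keywords; this toy score only shows why word order and punctuation in "Recipients" entries do not prevent a match.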