1985 — 1987 |
Richards, Virginia M |
F32 | Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Comparing Frequency Discrimination and Pitch Perception |
1988 |
Richards, Virginia M |
F32 |
Monaural Envelope Correlation Perception |
1989 — 1993 |
Richards, Virginia M |
R29 |
Envelope Synchrony Perception |
1994 — 1997 |
Richards, Virginia M |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Within- and Between-Channel Representations @ University of Pennsylvania
The objective of this research is to better understand the perception of complex auditory stimuli. The means by which this is to be achieved is by consideration of the representation of acoustical stimuli, in particular the relationship between the representation of temporal and intensity aspects of those stimuli. The experiments are designed, on the one hand, to determine whether models that treat temporal and intensity information as independently coded are able to account for psychophysical data and, on the other hand, to examine the manner in which temporal and intensity information is combined. One experimental manipulation requires discriminations based on either temporal or intensity information alone. A second experimental manipulation requires discriminations between complex sounds that vary systematically in terms of three features: intensity, envelope modulation, and phase modulation. The ability to discriminate between such stimuli allows an estimation of the relative contribution of these features to the formation of complex auditory percepts, and allows an initial evaluation of the means by which these features are represented in the auditory system. The basic information concerning temporal and intensity processing in the auditory system that will result from this plan of study may provide a basis for the diagnosis and treatment of auditory impairment. Particularly important is the development of a means by which temporal and intensity processing may be independently tested.
|
1998 — 2002 |
Richards, Virginia M |
R01 |
Within and Between Channel Representations @ University of Pennsylvania
DESCRIPTION: The objective of this research is to better understand the perception of complex acoustical stimuli. The auditory system has the property that incoming sounds are decomposed into different frequency channels. In contrast, our percepts are unified, depending on the integration of information across both frequency and time. The long-term objective of the proposed research is to understand the integration of information across frequency. Of particular interest is the integration of level and temporal information, two features of broadband sounds encoded within frequency channels. This objective is addressed by comparing the results of psychophysical experiments with optimal statistical models of processing applied at the level of frequency channels. By jointly considering human performance and the performance predicted by optimal models, estimates of frequency selectivity for level and temporal tasks may be obtained. The resulting models may then be tested using increasingly complex tasks, tasks in which both level and temporal features of broadband stimuli are varied. To the degree that sensitivity to changes in the level and temporal features of broadband sounds is successfully modeled, comparisons with models derived for hearing-impaired individuals can follow.
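The abstract does not specify the "optimal statistical models" in detail. A minimal sketch of one standard ideal-observer assumption, namely independent, equal-variance Gaussian noise in each frequency channel, illustrates the kind of prediction such models make (all function names here are hypothetical, not from the grant):

```python
import math

def combined_dprime(channel_dprimes):
    """Ideal-observer sensitivity when independent frequency channels each
    contribute Gaussian-distributed evidence: the optimal combined d' is
    the root-sum-square of the per-channel d' values."""
    return math.sqrt(sum(d ** 2 for d in channel_dprimes))

def optimal_weights(channel_dprimes):
    """For equal-variance Gaussian channels the optimal linear decision
    weights are proportional to each channel's d' (normalized to sum to 1
    here for readability)."""
    total = sum(channel_dprimes)
    return [d / total for d in channel_dprimes]

# Example: three channels with unequal sensitivity to a level increment.
dps = [1.0, 0.5, 0.25]
print(combined_dprime(dps))   # ≈ 1.146, better than any single channel alone
print(optimal_weights(dps))
```

Comparing measured human thresholds against such root-sum-square predictions is one common way to estimate how many channels, and which ones, listeners actually combine.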
|
2004 — 2008 |
Richards, Virginia M |
R01 |
Within and Between Channel Representation @ University of Pennsylvania
DESCRIPTION (provided by applicant): This proposal constitutes a psychophysical experimental program that examines two aspects of auditory processing in the healthy human auditory system: (a) uncertainty-driven integration of information across frequency, and (b) observers' ability to detect and/or segregate target signals embedded in a background of distractors. The proposed experiments examine the integration of information across frequency and the integration of information across time and frequency, respectively. The primary goal of the first sequence of experiments is to compare the effects of stimulus uncertainty using tasks in which observers must either extract information from a single frequency locus or compare information across the spectrum. For the former task it has been suggested that observers fail to selectively attend to a single frequency locus, an explanation that is difficult to reconcile with uncertainty effects for the latter task. The second set of studies provides estimates of the relative efficacy of different cues for sound source segregation. The experiments examine the hypothesis that the multiple cues are independently represented and optimally combined. In addition, the experiments estimate frequency selectivity after the segregation of two sources into "auditory streams." The proposed experiments address these issues by analyzing threshold data and examining relative weights in time and frequency. Processing models associated with signal detection theory form the basis of the data analysis. While the experimental observers are undergraduates with normal audiograms, the experimental methods can be adapted to test persons suffering hearing impairment, persons for whom the presence of multiple sources leads to substantial masking.
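The "relative weights" analysis is described only at a high level. A minimal illustration of the general idea, assuming a linear observer and independent Gaussian perturbations added to each cue on every trial (a sketch, not the applicant's actual procedure; all names hypothetical):

```python
import random

def decision_weights(perturbations, responses):
    """Estimate relative decision weights from trial-by-trial data:
    regress the observer's binary response on each cue's perturbation
    (simple per-cue regression slopes, which approximate the observer's
    weights when cue perturbations are mutually independent)."""
    n_cues = len(perturbations[0])
    n = len(responses)
    r_mean = sum(responses) / n
    weights = []
    for k in range(n_cues):
        x = [trial[k] for trial in perturbations]
        x_mean = sum(x) / n
        cov = sum((xi - x_mean) * (ri - r_mean) for xi, ri in zip(x, responses)) / n
        var = sum((xi - x_mean) ** 2 for xi in x) / n
        weights.append(cov / var if var else 0.0)
    total = sum(abs(w) for w in weights)
    return [w / total for w in weights] if total else weights

# Simulated observer that weights cue 0 three times as heavily as cue 1.
random.seed(1)
trials = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(4000)]
resp = [1 if 3 * a + 1 * b + random.gauss(0, 1) > 0 else 0 for a, b in trials]
print(decision_weights(trials, resp))  # ≈ [0.75, 0.25]
```

The recovered weight ratio (about 3:1) matches the simulated observer, which is the logic behind inferring which features in time and frequency real listeners rely on.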
|
2009 — 2010 |
Richards, Virginia M |
R21 | Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Classification Images of Data Collected Using the Method of Free Response @ University of California-Irvine
DESCRIPTION (provided by applicant): Human listeners are facile at segregating the multitude of sound sources in the environment, and attending to a single sound stream. This process requires the integration of information across time and frequency. In order to examine the role of temporal processing, the proposed psychophysical experiments employ the rarely used method of free response. For this procedure a masker is presented for several minutes and a target is presented at random times. The subject's task is to press a button whenever the target is detected. This procedure is well suited to the study of sound segregation because the presentation duration is long, mirroring "real world" situations. One question of primary interest addressed in the proposed experiments is whether the continuous nature of the method of free response increases the "cognitive load" on the subjects. A second primary issue is the development of a method that determines which "features" in time and frequency listeners rely on to detect a target in a competing environment. The method of free response has the advantage that button presses unfold over time, allowing the features to be extracted using reverse-correlation methods that relate the time of each button press to the stimulus preceding it. The proposed experiments will evaluate the statistical reliability of the obtained features, a first step towards the useful application of this procedure. The proposed experiments and analyses are restricted to normal-hearing listeners. Should the method prove to be successful, the current results will provide a basic data set against which results from hearing-impaired listeners might ultimately be compared (not proposed). The brain mechanisms by which normal-hearing listeners are able to segregate sound sources (e.g., hearing a talker in a noisy environment) are not fully elucidated. Moreover, hearing-impaired listeners' ability to segregate sound sources is diminished.
The proposed research will contribute to the understanding of sound source segregation, with the potential of providing a basic understanding applicable to assistive hearing devices.
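The press-triggered reverse-correlation idea can be sketched in a few lines: average the stimulus segments immediately preceding each button press, and the features the listener relied on emerge as deviations from the stimulus mean. This is a toy simulation, not the proposed experimental software; all names are hypothetical:

```python
import random

def press_triggered_average(stimulus, press_times, window):
    """Reverse-correlation sketch for free-response data: average the
    `window` stimulus frames immediately preceding each button press."""
    segments = [stimulus[t - window:t] for t in press_times if t >= window]
    n = len(segments)
    return [sum(seg[i] for seg in segments) / n for i in range(window)]

# Toy simulation: Gaussian noise frames; a simulated "listener" presses
# the button 3 frames after any frame whose level exceeds a criterion.
random.seed(0)
stim = [random.gauss(0, 1) for _ in range(20000)]
presses = [i + 3 for i, v in enumerate(stim) if v > 2.0 and i + 3 < len(stim)]
ci = press_triggered_average(stim, presses, window=6)
print(ci)  # peak at index 3, i.e. the frame 3 steps before the press
```

The recovered peak at the simulated response lag shows how, in real data, the timing of button presses can reveal which preceding stimulus epochs drove detection; assessing the statistical reliability of such averages is the stated first step of the project.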
|
2014 — 2015 |
Richards, Virginia M; Shen, Yi (co-PI) |
R21 |
Rapid Measurement of Routinely Estimated Psychophysical Functions @ University of California-Irvine
DESCRIPTION (provided by applicant): Collecting behavioral data efficiently is a significant challenge faced by many auditory scientists, especially those who conduct clinical or developmental research. The prolonged process of data collection is the bottleneck restricting how much information can be gained from a single test subject and how many participants can be included in a clinical study. The long-term goal of the proposed research is to increase the efficiency of behavioral data collection, making individualized estimation of auditory psychophysical models possible. As the first step toward this goal, the estimation of two important psychophysical models will be studied in detail. The two models are the auditory filter model, a model of spectral resolution, and the cochlear input-output function, a model of peripheral nonlinearity. The parameters of these models, such as the auditory-filter bandwidth and the compression ratio of the cochlear input-output function, have been shown to be reliable indicators of cochlear health and can predict supra-threshold listening deficits. Classical procedures to fit these models use threshold-based approaches: multiple thresholds are measured, and the psychophysical model of interest is fitted using those thresholds. For the proposed procedure, a Bayesian algorithm will be used to ensure that the stimulus presented on each trial is the stimulus that maximally accelerates the rate of parameter convergence. This parameter-based approach allows the estimation of the auditory filter or the cochlear input-output function using a single experimental track and fewer than 200 trials. This is approximately ten times faster than procedures currently in use. In the proposed experiments, for both of these models, parameters estimated for normal-hearing listeners using the proposed and threshold-based procedures will be compared to determine the relative reliability of the new procedure. 
The optimal configurations for the new procedure, e.g., how to initiate and terminate an experimental track, will be identified. Additionally, the procedure developed to estimate the auditory filter will be further developed to ensure its suitability for hearing-impaired listeners. Upon the completion of the proposed research program, user-friendly software packages will be made available to the hearing research community for the estimation of the auditory filter and the cochlear input-output function. The outcome of this research is expected to have a strong and sustained impact on behavioral studies of hearing and hearing impairment. With the procedures to be developed, the fitting of fundamental auditory models for individual test subjects can become routine. This will open the door to a better understanding of individual differences in hearing capability because scientists will be able to test more participants and/or make more measurements in their experiments. Moreover, given the efficiency of the procedures, it will be much easier for future experimenters to track a listener's hearing characteristics longitudinally.
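The abstract does not give the algorithmic details of the Bayesian procedure. The following is a minimal sketch of one common adaptive scheme of this family: a grid posterior over a single threshold parameter, with each trial's stimulus chosen to minimize expected posterior entropy (i.e., maximize expected information gain). It is an illustration under simplified assumptions, not the applicants' actual software:

```python
import math, random

def psychometric(level, threshold, slope=1.0):
    """Probability of a 'yes' response: logistic psychometric function."""
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def best_stimulus(posterior, thresholds, levels):
    """Pick the level minimizing the expected entropy of the updated
    posterior, averaged over the two possible responses."""
    best, best_h = None, float("inf")
    for x in levels:
        p_yes = sum(w * psychometric(x, t) for w, t in zip(posterior, thresholds))
        h = 0.0
        for resp, p_r in ((1, p_yes), (0, 1 - p_yes)):
            post = [w * (psychometric(x, t) if resp else 1 - psychometric(x, t))
                    for w, t in zip(posterior, thresholds)]
            s = sum(post)
            h += p_r * entropy([q / s for q in post])
        if h < best_h:
            best, best_h = x, h
    return best

def run_track(true_threshold, n_trials=60, seed=0):
    """Single adaptive track against a simulated listener; returns the
    posterior-mean threshold estimate."""
    rng = random.Random(seed)
    thresholds = [i * 0.2 for i in range(-25, 26)]   # candidate threshold grid
    levels = [i * 0.5 for i in range(-10, 11)]       # allowed stimulus levels
    posterior = [1.0 / len(thresholds)] * len(thresholds)
    for _ in range(n_trials):
        x = best_stimulus(posterior, thresholds, levels)
        resp = 1 if rng.random() < psychometric(x, true_threshold) else 0
        posterior = [w * (psychometric(x, t) if resp else 1 - psychometric(x, t))
                     for w, t in zip(posterior, thresholds)]
        s = sum(posterior)
        posterior = [w / s for w in posterior]
    return sum(w * t for w, t in zip(posterior, thresholds))

print(run_track(true_threshold=1.3))  # posterior mean lands near 1.3
```

The same logic extends from a one-parameter threshold to the multi-parameter auditory-filter and input-output-function models the proposal targets; the single-track, few-hundred-trial efficiency claim rests on choosing each stimulus to be maximally informative rather than following a fixed staircase.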
|