Area:
language, speech, perception, reading
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks, and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches. Signing in also lets you see low-probability grants and correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Lorin Lachs is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2011 — 2015 | Lachs, Lorin | N/A | RUI: Effects of Visual Phonetic Similarity on Audiovisual Spoken Word Recognition @ California State University-Fresno Foundation | 0.934

Spoken word recognition is defined as the process by which acoustic patterns are matched to meanings in the "mental lexicon" -- the memory repository of the approximately 85,000 words known by an adult speaker of a language. Previous research has demonstrated that the number of lexical entries acoustically similar to a given word influences the ease with which that word is recognized. However, speech recognition is not solely an acoustic phenomenon; watching someone speak also provides information about the content of an utterance. Although there is presently a solid understanding of the role of acoustic similarity in spoken word recognition, less is known about visual similarity, and very little is known about the interaction of acoustic and visual similarity when both sources of information are available. This may be because visual similarity is particularly difficult to measure. One problem is that speech units that are acoustically different can be visually identical. Words that are visually identical are said to comprise a Lexical Equivalence Class (LEC). Previous research has shown that the number of words residing in an LEC affects the ease with which those words are lipread. Similarly, the size of an LEC affects the extent to which visual information enhances the recognition of auditory speech. However, the makeup of an LEC is to some extent dependent on the speaker.

The proposed research will accomplish two goals. First, a publicly accessible computational tool will be built to facilitate computational and experimental investigations of visual lexical similarity. Second, several behavioral experiments will further elucidate the role that visual similarity plays in spoken word recognition. This project will advance our understanding of the basic mechanisms involved in spoken word recognition. Such knowledge will be useful for clinicians working with deaf or hearing-impaired populations, and for engineers working on problems in automatic speech recognition.
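For readers unfamiliar with the LEC concept in the abstract above: because several acoustically distinct phonemes look identical on a speaker's face, words collapse into equivalence classes under a phoneme-to-viseme mapping. The Python sketch below is purely illustrative and is not the computational tool the grant proposes; the coarse PHONEME_TO_VISEME table, the toy lexicon, and the lexical_equivalence_classes helper are all assumptions made for demonstration.

```python
# Illustrative sketch of computing Lexical Equivalence Classes (LECs):
# words whose phoneme sequences map to the same viseme sequence are
# visually identical to a lipreader. The mapping and word list below
# are toy examples, not the project's actual data.
from collections import defaultdict

# Assumed many-to-one phoneme -> viseme mapping (a coarse illustration;
# real mappings are speaker-dependent, as the abstract notes).
PHONEME_TO_VISEME = {
    "B": "bilabial", "P": "bilabial", "M": "bilabial",
    "F": "labiodental", "V": "labiodental",
    "T": "alveolar", "D": "alveolar", "N": "alveolar",
    "K": "velar", "G": "velar",
    "AE": "open-vowel", "EH": "mid-vowel", "IY": "close-vowel",
}

def viseme_key(phonemes):
    """Collapse a phoneme transcription into its viseme sequence."""
    return tuple(PHONEME_TO_VISEME[p] for p in phonemes)

def lexical_equivalence_classes(lexicon):
    """Group words that share a viseme sequence into LECs."""
    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[viseme_key(phonemes)].append(word)
    return classes

# Toy lexicon with ARPAbet-like transcriptions.
lexicon = {
    "bat": ["B", "AE", "T"],
    "pat": ["P", "AE", "T"],  # differs from "bat" acoustically, not visually
    "mat": ["M", "AE", "T"],
    "fan": ["F", "AE", "N"],
    "van": ["V", "AE", "N"],
}

for key, words in lexical_equivalence_classes(lexicon).items():
    print(key, "->", words)
```

Grouping by the viseme sequence as a dictionary key is what makes "bat", "pat", and "mat" land in a single LEC here: their initial consonants differ acoustically but share the bilabial viseme, so the size of that class would be 3 in this toy lexicon.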