1984 — 1987 |
Wilbur, Ronnie [⬀] |
N/A — Activity Code Description: no activity code was retrieved. |
Syllable Structure and Phonological Rules in American Sign Language |
0.915 |
1992 — 1995 |
Wilbur, Ronnie B [⬀] |
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Prosodic Features of American Sign Language @ Purdue University West Lafayette |
0.936 |
1994 — 2002 |
Wilbur, Ronnie B [⬀] |
T32 — Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas. |
Communicative Disorders @ Purdue University West Lafayette
DESCRIPTION (provided by applicant): This application is for a continuing institutional grant designed to provide research training for 6 predoctoral and 3 postdoctoral trainees with research interests in communication disorders and sciences. Training prepares trainees to become active and responsible members of the scientific community. Hands-on apprenticeship training is provided in three interrelated areas: 1) Speech Production, Development, and Disorders: Lifespan Perspective (with special attention to sensorimotor processes in speech, and speech disorders from development through aging); 2) Language Structure, Development and Disorders: Single Language and Crosslinguistic Studies (including specific language impairment and American Sign Language); and 3) Peripheral and Central Processing of Speech and Non-Speech Stimuli (with special reference to otoacoustic emissions and cochlear modeling). Trainees in these areas will be offered two "feeder" specialties: Cognitive Neuroscience (including imaging of normal and brain-damaged individuals) and Linguistics Applied to Communication Sciences and Disorders. Participating faculty routinely collaborate on projects that cut across these research areas. Advanced courses are available in communication disorders, research design, statistics, neurosciences, biology, engineering, and linguistics. However, the main purpose of the training program is to provide intensive interactive research experience leading toward the establishment of successful independent clinical investigators.
|
0.936 |
2004 — 2011 |
Wilbur, Ronnie [⬀] |
N/A |
A Basic Grammar of Croatian Sign Language
With National Science Foundation support, Dr. Ronnie Wilbur will lead an international team of American and Croatian researchers in a five-year investigation of the linguistic structure of Croatian Sign Language (HZJ). With Dr. Ljubica Pribanic at the University of Zagreb, our goal is to construct a basic grammatical description of HZJ. Many deaf children in Croatia and in the U.S. will benefit from this collaborative research because it will have immediate application to the development of curricular materials for teaching sign language, to the training of sign language interpreters, speech-language pathologists, and audiologists, and to educating teachers of the deaf about how early sign language use can foster improved literacy and academic achievement among deaf children.
Two scientific questions motivate this project. First, how divergent are sign language structures? This question reflects the interest in determining the effects that modality of perception and production has on the nature of signed and spoken languages. We will be able to compare HZJ with ASL, Austrian Sign Language (OGS), and other signed languages. We focus on five general areas of inquiry: transitive declarative sentences, yes/no-questions, wh-questions, negation and verbal morphology. These results speak to the core issues of human conceptual structure and its mapping onto natural languages. Second, what is the influence of spoken languages on indigenous sign languages? This grammar will be the first on a sign language used in a Slavic speaking country. The proposed comparison with other signed languages, and with their local spoken languages, is novel to the field of sign language research and should yield new insights into our understanding of the notion "natural human language".
|
0.915 |
2004 — 2008 |
Wilbur, Ronnie B [⬀] |
R01 |
Modeling the Nonmanuals in American Sign Language @ Purdue University West Lafayette
DESCRIPTION (provided by applicant): This revised proposal describes a project to systematically investigate the facial components, combinations of components, and interactions of components that constitute facial expressions (nonmanual markers) in the grammar of American Sign Language (ASL). Some of these components have already been shown to differ in significant ways from those used by the general hearing population. They may carry semantic, prosodic, pragmatic, and syntactic information that may not be provided by the manual signing itself. We will compile an inventory of facial articulations, construct a database of video images of these in isolation and in context, and use these data and innovative computational tools to construct a model of facial behavior in ASL. To successfully accomplish this, we propose an innovative integrated linguistic and computational approach to the study of nonmanuals. Our goal in this project is to construct an initial phonological model of ASL nonmanuals. We have targeted a relevant set of facial features and have identified 4 experiments to obtain appropriate information on each of them. A necessary step in preparation for these experiments is to develop computer vision and pattern recognition algorithms that automatically extract these facial features from a large quantity of videos. These algorithms will be capable of processing data more accurately and efficiently than can be done by hand. Finally, by comparing these results with those obtained from native ASL signers in a series of perceptual studies, we can determine what further modifications are still needed. The study of facial expressions in ASL has very practical applications to several areas affecting the lives of Deaf individuals. The absence of clear information on the facial components makes teaching them to individuals trying to learn ASL, such as parents, deaf children, future teachers and interpreters, a pedagogical nightmare.
Another important practical application is the development of systems that automatically recognize ASL. Such a system is not feasible without the ability to handle ASL nonmanuals, which carry grammatical information.
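The automatic extraction of facial features described in the abstract can be illustrated with a toy sketch. Assuming per-frame 2D facial landmarks are already available (e.g., from an off-the-shelf detector), a brow-raise nonmanual marker might be flagged whenever the brow-to-eye distance exceeds a neutral baseline. All names, indices, and thresholds here are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

def brow_raise_score(landmarks, brow_idx, eye_idx):
    """Vertical brow-to-eye gap for one video frame.

    landmarks: (N, 2) array of (x, y) pixel coordinates.
    """
    brow_y = landmarks[brow_idx, 1].mean()
    eye_y = landmarks[eye_idx, 1].mean()
    # Image y grows downward, so a raised brow gives a larger gap.
    return eye_y - brow_y

def detect_brow_raise(frames, brow_idx, eye_idx, ratio=1.25):
    """Flag frames whose brow-eye gap exceeds 1.25x the neutral median."""
    scores = np.array([brow_raise_score(f, brow_idx, eye_idx) for f in frames])
    baseline = np.median(scores)
    return scores > ratio * baseline

# Synthetic example: 5 frames; landmarks 0-1 are brows, 2-3 are eyes.
neutral = np.array([[10, 20], [30, 20], [10, 30], [30, 30]], float)
raised = neutral.copy()
raised[:2, 1] -= 8          # brows move up (smaller y) in raised frames
frames = [neutral, neutral, raised, raised, neutral]
flags = detect_brow_raise(frames, brow_idx=[0, 1], eye_idx=[2, 3])
print(flags.tolist())       # only the raised-brow frames are flagged
```

A real pipeline would, of course, track many more components (brows, eyelids, cheeks, mouth) and their co-occurrence over time; the point of the sketch is only that per-frame geometric features can be thresholded against a signer-specific neutral baseline.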
|
0.936 |
2006 — 2011 |
Wilbur, Ronnie [⬀] Adamo-Villani, Nicoletta |
N/A |
Software For Math Education For the Deaf
Abstract: Deaf education, specifically in science, technology, engineering, and math (STEM), is a pressing national problem. Our project addresses the need to increase deaf children's abilities in math with a unique approach (realistic and grammatically correct 3D animated signing) that significantly improves on the state of the art by creating emotionally appealing, fluid 3D signers, a factor that plays a decisive role in learning for deaf students. Mathsigner software is designed to engage deaf learners and their parents in "hands-on, minds-on" experiences, leading to deeper understanding of fundamental ideas in accordance with current No Child Left Behind and general curricular guidelines. The general goal of the project is to develop and evaluate animation-based software to increase: (1) opportunities for deaf children to learn via interactive media; (2) the effectiveness of (hearing) parents in assisting with the education of their deaf children; and (3) the effectiveness of teachers of deaf children.
Intellectual Merit: This project addresses a critical need. Research demonstrates that deaf individuals are significantly underrepresented in the fields of science and engineering (Burgstahler 1994). Historically, it has been difficult for them to gain entry into higher education that leads to STEM careers (Caccamise & Lang 1996). Several factors contribute to this disparity: (1) significant delay in deaf children's literacy; (2) the difficulty of conveying basic science and mathematical concepts in sign language, a task for which there are currently no tools; and (3) the inaccessibility of incidental learning (exposure to media in which mathematical concepts are practiced and reinforced). Deaf children lack access to many sources of information (e.g., radio, conversations around the dinner table), and their incidental learning suffers. Consequently, some mathematical concepts that hearing children learn incidentally in everyday life have to be explicitly taught to deaf pupils. Our software will fill this void. The Mathsigner project is unique because it seeks to: (1) use advanced technology to teach mathematics to signing K-6 deaf students; (2) provide equal access and opportunities by overcoming known deficiencies in math education, as reflected in the under-representation of deaf people in fields requiring math skills; and (3) provide a model for teaching technology for deaf people in general that can contribute to improving deaf education around the globe. The project is informed by advanced linguistic research on American Sign Language structure and the grammatical use of facial expressions.
We have assembled an expert team to accomplish this goal. Professor Adamo-Villani, Purdue Department of Computer Graphics Technology, is an award-winning graphic designer/animator and creator of 2D and 3D animations aired on national television. She initiated the development of teaching technologies for deaf children using advanced computer animation techniques and outlined the math education program itself. Professor Wilbur, Purdue Department of Speech, Language and Hearing Sciences, is internationally known for her research on American Sign Language (ASL) and its relevance to improving deaf education and literacy. The Indiana School for the Deaf is a fully accredited school and a national resource center, recognized nationally for its leadership in education, its advocacy of American Sign Language, and being the first …
|
0.915 |
2009 — 2011 |
Brentari, Diane (co-PI) [⬀] Wilbur, Ronnie [⬀] |
N/A |
Conference: Theoretical Issues in Sign Language Research 10
The tenth international Theoretical Issues in Sign Language Research (TISLR) conference will be held on September 30 and October 1-2, 2010 at Purdue University. It is co-directed by Dr. Ronnie Wilbur and Dr. Diane Brentari. Sign language linguists will share their latest research in all areas of linguistics related to sign language structure and use. A special theme of TISLR 10 is "Research Methodologies in Sign Language Linguistics". This theme provides an opportunity for researchers to explain and discuss diverse qualitative and quantitative research methods for studying the world's sign languages. Just as technology has affected the field of linguistics generally, advances in technologies and software have made it possible to analyze sign language data more effectively and to share data electronically. The theme is reflected in the choice of plenary speakers and the construction of special sessions. Two of the invited speakers are Deaf linguists.
The work presented at TISLR is basic research, but it has an impact on areas of applied linguistics and curriculum development for sign language studies at all levels. In addition to their contributions to the conference theme, the Deaf plenary speakers serve as role models for other Deaf students and professionals. They encourage other Deaf people to seek higher education degrees and to participate in, and conduct, research in the sign languages that they know so well. In all sessions of the conference, an innovative communication policy will be implemented. TISLR 10 will actively encourage all participants who can present their talks in American Sign Language (ASL) to do so. A committee of Deaf and hearing individuals who have experience teaching and presenting in ASL will be available to help individuals who have the potential to present in ASL but have not done so to date. Since this change in communication policy toward direct communication is one that was requested by Deaf colleagues, it is expected that this policy will make sign language linguistics more accessible and more attractive to the Deaf Community at all levels.
|
0.915 |
2017 — 2020 |
Siskind, Jeffrey [⬀] Wilbur, Ronnie (co-PI) [⬀] Malaia, Evguenia (co-PI) [⬀] |
N/A |
NCS-FO: Neuroimaging to Advance Computer Vision, NLP, and AI
It is often said that a picture is worth a thousand words. Yet to search for what is needed, whether whole images or objects within them, words are frequently required instead. Getting accurate labels for efficient searches is a longstanding goal of computer vision, but progress has been slow. This project employs new methods to significantly change how picture-word labeling is accomplished by taking advantage of the best picture recognizer available: the human brain. Through functional magnetic resonance imaging and electroencephalography, the brain activity of humans looking at pictures and videos is recorded and then used to improve performance on artificial intelligence (AI) tasks involving computer vision and natural language processing. Current systems use machine learning to train computers to recognize objects (nouns) and activities (verbs) in images and video, which are then used to describe events. Reasoning tasks (e.g., solving math problems) can then be performed. These systems are trained on specially prepared datasets with samples of nouns for objects, verbs for activities, sentences describing events, and exam questions and answers. A novel paradigm in which humans perform the same tasks while their brains are scanned allows determination of the neural patterns associated with those tasks. These brain activity patterns, in turn, are used to train better computer systems.
The central hypothesis is that understanding human processing of grounded language involving predication, and its use during reasoning, will materially improve engineered computer vision, natural language processing, and AI systems that perform image/video captioning, visual question answering, and problem solving. Scientific and engineering goals include developing models of human language grounding and reasoning that are consistent with neuroimaging, in order to improve engineered systems integrating language and vision that support automated reasoning. The main scientific question is to understand the mechanisms by which predicates and arguments are identified, linked, and used for reasoning by the human brain. The hypothesis that predicate-argument linking in visual and linguistic representations is accomplished similarly, and that this then supports reasoning and problem solving, will be tested using multiple neuroimaging modalities and machine learning algorithms to decode "who did what to whom" from brain scans of subjects processing linguistic and visual stimuli. The iterative approach will involve understanding information integration at the neural level in order to improve machine learning performance on AI tasks, training computers to perform increasingly complex tasks with neuroimaging data from stimuli derived from large-scale natural tasks. Using identical datasets for human and machine performance will support translation of scientific advances to engineering practice involving the integration of computer vision and natural language processing.
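The decoding step described above, recovering "who did what to whom" from brain activity, can be sketched in miniature. A simple pattern classifier is trained on simulated voxel-response vectors labeled by event class, in the style of multivariate pattern analysis. The simulated data, the two-class setup, and the nearest-centroid decoder are all illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trials(n, n_voxels, patterns, noise=0.5):
    """Simulate voxel responses: each trial is a class pattern plus noise."""
    labels = rng.integers(len(patterns), size=n)
    data = np.stack([patterns[c] + noise * rng.standard_normal(n_voxels)
                     for c in labels])
    return data, labels

# Two event classes, e.g. "dog chases cat" vs. "cat chases dog".
n_voxels = 40
patterns = rng.standard_normal((2, n_voxels))
train_X, train_y = make_trials(200, n_voxels, patterns)
test_X, test_y = make_trials(100, n_voxels, patterns)

# Nearest-centroid decoder: assign each test trial to the closest
# mean training pattern (a minimal stand-in for MVPA classification).
centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((test_X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == test_y).mean()
print(f"decoding accuracy: {accuracy:.2f}")   # well above the 0.5 chance level
```

Real fMRI/EEG decoding would add cross-validation, noise correlations between voxels, and richer role structure than a binary agent label, but the shape of the computation, pattern vectors in, event labels out, is the same.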
This award is cofunded by the Office of International Science and Engineering.
|
0.915 |