2001–2005
Franceschetti, Donald; Graesser, Arthur; Garzon, Max (co-PI); Person, Natalie; Hu, Xiangen (co-PI); Wolff, Phillip; Louwerse, Max
Developing AutoTutor for Computer Literacy and Physics
The Tutoring Research Group at the University of Memphis has developed a computer tutor (called AutoTutor) that simulates the discourse patterns and pedagogical strategies of unaccomplished human tutors. The typical tutor in a school system is unaccomplished in the sense that the tutor has had no training in tutoring strategies and has only introductory-to-intermediate knowledge about the topic. The development of AutoTutor was funded by an NSF grant (SBR 9720314, in the Learning and Intelligent Systems program). The discourse patterns and pedagogical strategies in AutoTutor were based on a previous project that dissected 100 hours of naturalistic tutoring sessions.
AutoTutor is currently targeted for college students in introductory computer literacy courses, who learn the fundamentals of hardware, operating systems, and the Internet. Instead of merely being an information delivery system, AutoTutor serves as a discourse prosthesis or collaborative scaffold that assists the student in actively constructing knowledge. AutoTutor presents questions and problems from a curriculum script, attempts to comprehend learner contributions that are entered by keyboard, answers student questions, formulates dialog moves that are sensitive to the learner's contributions (such as short feedback, pumps, prompts, assertions, corrections, and hints), and delivers the dialog moves with a talking head. The talking head displays emotions, produces synthesized speech with discourse-sensitive intonation, and points to entities on graphical displays. AutoTutor has seven modules: a curriculum script, language extraction, speech act classification, latent semantic analysis (a statistical representation of domain knowledge), topic selection, dialog management, and a talking head. Evaluations of AutoTutor have shown that the tutoring system improves learning with an effect size that is comparable to typical human tutors in school systems, though not as high as accomplished human tutors and intelligent tutoring systems. The dialog moves of AutoTutor blend into the discourse context so smoothly that students cannot distinguish whether a speech act was generated by AutoTutor or a human tutor.
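The latent semantic analysis module mentioned above can be illustrated with a minimal sketch: a term-document matrix is reduced by truncated SVD, and a student contribution is compared to an expected answer by cosine similarity in the latent space. The toy corpus and expectation texts below are invented for illustration and are not AutoTutor's actual curriculum script or implementation.

```python
# Minimal LSA-style semantic matching sketch (toy corpus, not AutoTutor's data).
import numpy as np

corpus = [
    "the cpu executes instructions stored in memory",
    "ram is volatile memory that loses data without power",
    "the operating system manages hardware resources",
    "the internet connects computers through routers",
]

# Build a simple term-document count matrix.
vocab = sorted({w for doc in corpus for w in doc.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(corpus)))
for j, doc in enumerate(corpus):
    for w in doc.split():
        A[index[w], j] += 1

# Truncated SVD yields the latent semantic space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk = U[:, :k] * s[:k]  # scaled term vectors in the k-dimensional latent space

def fold_in(text):
    """Project a new utterance into the latent space by summing its term vectors."""
    vec = np.zeros(k)
    for w in text.split():
        if w in index:
            vec += Uk[index[w]]
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

student = fold_in("the cpu runs instructions from memory")
expectation = fold_in("the cpu executes instructions stored in memory")
print(round(cosine(student, expectation), 2))
```

In AutoTutor's usage, a high cosine between the student's contribution and an expectation marks that expectation as covered; the threshold and corpus scale are far larger in practice.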
The proposed research will substantially expand the capabilities of AutoTutor by designing the discourse to handle more sophisticated tutoring mechanisms. These mechanisms should further enhance the active construction of knowledge. One enhancement is to get the student to articulate more knowledge, with more formal, symbolic, and precise specification; if the student does not say it, it is not considered covered by AutoTutor. Another enhancement is to set up the dialog so that it guides the user in manipulating a three-dimensional microworld of a physical system; the student attempts to simulate a new state in the physical system by manipulating parameters, inputs, and formulae. The proposed research will develop AutoTutor in the domains of both computer literacy and Newtonian physics, so we will have some foundation for evaluating the generality of AutoTutor's mechanisms. AutoTutor has been designed to be generic, rather than domain-specific; an authoring tool will be developed that makes it easy for instructors to prepare new material on new topics. After the new versions of AutoTutor are completed, we will evaluate their effectiveness on learning gains, conversational smoothness, and pedagogical quality. During the course of achieving these engineering and educational objectives, the proposed project will conduct basic research in cognitive psychology, discourse processes, computer science, and computational linguistics. This research cuts across quadrant 2 (behavioral, cognitive, affective, and social aspects of human learning) and quadrant 3 (SMET learning in formal and informal educational settings).
2004–2008
Graesser, Arthur (co-PI); Steedman, Mark; Hu, Xiangen (co-PI); Louwerse, Max; Bard, Ellen (co-PI)
Tracking Multimodal Communication in Humans and Agents
This project investigates multimodal communication in humans and agents, focusing on two linguistic modalities, prosody and dialog structure, which reflect major communicative events, and three non-linguistic modalities: eye gaze, facial expressions, and body posture. It aims to determine (1) which of the non-linguistic modalities align with events marked by prosody and dialog structure, and with one another; (2) whether, and if so when, these modalities are observed by the interlocutor; and (3) whether the correct use of these channels actually aids the interlocutor's comprehension. Answers to these questions should provide a better understanding of the use of communicative resources in discourse and can subsequently aid the development of more effective animated conversational agents.
Our observations will be modeled on controlled, elicited dialog. To ensure robust information on the interplay of modalities, we control the base conditions, genre, topic, and goals of unscripted dialogs. An ideal task for this is the Map Task, in which dialog participants work together to reproduce, on one player's map, a route preprinted on the other's. The two maps, however, differ slightly, so that each player holds information important to the other. This scenario elicits a highly interactive, incremental, and multimodal conversation.
In the proposed project, a basic corpus of Map Task dialogs will be collected while recording spoken language, posture, facial expressions, and eye gaze. Hand gestures, discouraged by the task, will be recorded where they occur. These findings will be used in the Behavior Expression Animation Toolkit (BEAT) to augment the current intelligent system AutoTutor. AutoTutor has been developed for a broad range of tutoring environments that coach the student in following an expected set of descriptions or explanations. The coach-follower roles in the Map Task scenario make it easy to adapt the scenario for AutoTutor. In a series of usability experiments, interactions of dialog participants with AutoTutor will be recorded. These experiments allow us to record not only the participant's impressions but also his or her efficiency (the time to complete the map, latency to find named objects, and deviation of the instruction follower's drawn route from the instruction giver's model) and communicative behavior (discourse structure, gaze, facial expressions, etc.).
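One of the efficiency measures above, the deviation of the follower's drawn route from the giver's model route, can be sketched as a mean nearest-point distance. The Euclidean formulation and the sample routes below are illustrative assumptions; the project does not specify its exact metric.

```python
# Hedged sketch of a route-deviation measure for Map Task analysis.
# The mean nearest-point Euclidean distance is an assumed formulation,
# not the project's documented metric.
import math

def route_deviation(drawn, model):
    """Mean distance from each drawn point to its nearest point on the model route."""
    total = 0.0
    for x, y in drawn:
        total += min(math.hypot(x - mx, y - my) for mx, my in model)
    return total / len(drawn)

# Toy routes: the drawn route runs parallel to the model, offset by half a unit.
model_route = [(0, 0), (1, 0), (2, 0), (3, 0)]
drawn_route = [(0, 0.5), (1, 0.5), (2, 0.5), (3, 0.5)]
print(route_deviation(drawn_route, model_route))  # 0.5: constant half-unit offset
```

A symmetric variant (averaging deviations in both directions) would penalize routes that skip segments of the model; the one-directional form here keeps the sketch minimal.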
The research resulting from this project will benefit a large variety of fields, including cognitive science, computational linguistics, artificial intelligence, and computer science. In addition, the integration of the modalities into a working model will advance the development and use of intelligent conversational systems.
2009–2010
Louwerse, Max
RC1 (Activity Code): NIH Challenge Grants in Health and Science Research
The Importance of Language Characteristics in Documenting Clinical Encounters
DESCRIPTION (provided by applicant): NIH RFA-OD-09-003 This application addresses broad Challenge Area (06) Enabling Technologies, and specific Challenge Topic, 06-LM-102: Self-documenting encounters. Narrative data account for much of the information that is documented in patient encounters. With the advances in EHRs, both in menu-based and speech/writing recognition-based systems, it is vital to identify which relevant language characteristics need to be captured in documenting clinical encounters. At present, it is unknown whether language characteristics other than the medical keywords used in menu-based systems help to improve the quality of chart notes. The current project analyzes over 1,500 chart notes collected over the last six years, in which each chart note has been graded by two MD faculty on five dimensions. Four computational linguistic models, addressing general linguistic features, cohesion and readability, personality and psychological features, and subjectivity of text, will analyze these chart notes in order to determine which language characteristics best explain the different grades. These findings will be used to compare original chart notes with notes created using existing EHRs, to determine the extent to which EHRs might benefit from augmented language characteristics. Knowing which language characteristics are essential in documenting clinical encounters is informative for emerging technologies, but knowing whether existing EHRs can benefit from adding these characteristics is an additional urgent question. Using open-source EHR code, language characteristics that prove to be important for the quality of chart notes will be implemented in the open-source EHR. An experiment using four standardized patient cases will evaluate the benefits and drawbacks of an extended EHR with regard to informativeness and usability. The findings of this project could have an enormous impact on the development of EHRs.
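The kinds of surface features such computational linguistic models extract from a chart note can be illustrated with a minimal sketch. The specific features and the example note below are invented for illustration; they are not the project's actual four models or its graded corpus.

```python
# Hedged sketch of surface language features of the sort a computational
# linguistic model might extract from a chart note. Feature set and example
# note are illustrative assumptions, not the project's actual models or data.
import re

def language_features(note):
    sentences = [s for s in re.split(r"[.!?]+", note) if s.strip()]
    words = re.findall(r"[A-Za-z']+", note)
    return {
        "n_sentences": len(sentences),
        "n_words": len(words),
        "mean_sentence_len": len(words) / len(sentences),   # readability proxy
        "type_token_ratio": len({w.lower() for w in words}) / len(words),  # lexical diversity
        "first_person": sum(w.lower() in {"i", "we", "my", "our"} for w in words),  # subjectivity cue
    }

note = ("Patient reports chest pain for two days. "
        "I examined the patient and noted no acute distress. "
        "We will order an ECG.")
feats = language_features(note)
print(feats["n_sentences"], feats["n_words"], feats["first_person"])  # 3 21 2
```

In a study like the one described, feature vectors of this kind would be regressed against the faculty grades to identify which characteristics best explain chart-note quality.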
Knowing which language characteristics are important in clinical encounters is informative for existing and emerging technologies alike. The findings could bootstrap the development of speech recognition and handwriting recognition EHRs, give a new stimulus to menu-based EHRs, and have a significant impact on the quality of future chart notes, and subsequently on the diagnosis and treatment based on these notes. As the health care system is in immediate need of transitioning to electronic health records, it is essential to get the best-quality and most intuitive technologies from the outset. Because chart notes are very much based on language, knowing which language characteristics constitute high-quality chart notes is vital for diagnosis and treatment, which rely on these notes. Not knowing which specific characteristics distinguish high-quality from low-quality EHRs could negatively impact millions of Americans and cost tens of millions of dollars.