Jeffrey F. Cohn, Ph.D. - US grants
Affiliations: Psychology, University of Pittsburgh, Pittsburgh, PA, United States
Area: Facial Expression of Emotion; Depression

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Jeffrey F. Cohn is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
1985 | Cohn, Jeffrey F | R03 (Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.) |
Bidirectional Influence in Mother-Infant Interaction @ University of Pittsburgh At Pittsburgh Although it is widely assumed that bidirectional influence accounts for the structure of early interactions, this hypothesis has never been adequately tested against alternate hypotheses. In order to evaluate this and alternate hypotheses, interactions between 54 mother-infant pairs (18 each at 3, 6, and 9 months) have been coded with a set of behavioral descriptors and a 1/4-sec time base. The interaction data will be analyzed with two separate sets of procedures that correspond to the principal ways in which behavior has been operationalized in studies of social interaction. In the first set, transition frequencies among mother-infant joint states will be analyzed with Thomas & Malone's (1979) Model 1. In the second, mother and infant behavior will each be scaled along an attentional/affective dimension, and these scores will be analyzed with Box-Jenkins bivariate time series techniques. Both sets of analyses will provide estimates of infant sensitivity to own and the other's prior response, and comparable estimates for the mother. The hypothesis of bidirectional influence will be supported at those ages for which infant and mother sensitivity to the other's prior response is significant. Developmental changes in bidirectional influence will be analyzed with nonparametric techniques for the Thomas & Malone parameter estimates and with ANOVA for the time series estimates. The proposed study will provide a comprehensive evaluation of the bidirectional and alternate hypotheses. |
1 |
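As a rough illustration of the cross-lagged logic described in the grant abstract above, the Python sketch below regresses each partner's score at time t on both partners' scores at t-1 using simulated data. It is a simplification of Box-Jenkins bivariate time-series modeling, not the project's code; all data and variable names are invented.

```python
# Minimal cross-lagged sketch: does each partner's current state depend on the
# other partner's prior state? Data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
T = 240  # e.g., one minute of interaction sampled at 1/4-s intervals
infant = rng.standard_normal(T)
mother = rng.standard_normal(T)

def cross_lagged(x, y):
    """Regress x[t] on x[t-1] and y[t-1]; return (self, other) lag-1 coefficients."""
    X = np.column_stack([np.ones(T - 1), x[:-1], y[:-1]])
    beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return beta[1], beta[2]

# "Sensitivity to own prior response" and "sensitivity to the other's prior response".
infant_self, infant_sens_to_mother = cross_lagged(infant, mother)
mother_self, mother_sens_to_infant = cross_lagged(mother, infant)
print(round(infant_sens_to_mother, 3), round(mother_sens_to_infant, 3))
```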
1990 — 1994 | Moore, Christopher; Cohn, Jeffrey |
N/A (no activity code was retrieved) |
Mother-Infant Coordination of Vocalization and Affect @ University of Pittsburgh Considerable research on the mother-infant bond has proceeded on the assumption that mothers' facial expression is the principal medium of communication. Cohn and Moore hypothesize a much stronger role for vocal interaction; if this is true, then previous studies might seriously underestimate the extent to which infants are responsive to their mothers' behavior. The research will study the frequency and patterning of mothers' speech to determine the relationship of these variables to the intensity of mothers' affective behaviors and infants' attention. In addition, the research will seek to determine if specific mothers' vocalizations influence specific infant behaviors. Finally, developmental shifts in vocal-vs-facial influence will be studied. This research promises to throw a powerful light on the mysteries surrounding the bonding of mothers and their infants, and ultimately on the factors that influence crucial developmental processes in infancy. |
0.915 |
1997 — 2010 | Cohn, Jeffrey F | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Facial Expression Analysis by Image Processing @ University of Pittsburgh At Pittsburgh DESCRIPTION (Applicant's Abstract): Facial expression communicates information about emotional response and plays a critical role in the regulation of interpersonal behavior. Current human-observer based methods for measuring facial expression are labor intensive, qualitative, and difficult to standardize across laboratories and over time. To make feasible more rigorous, quantitative measurement of facial expression in diverse applications, we formed an interdisciplinary research group that combines expertise in facial expression analysis and image processing. In the funding period, we developed and demonstrated the first version of an automated system for measuring facial expression in digitized images. The system can discriminate nine combinations of FACS action units in the upper and lower face; quantify the timing and topography of action unit intensity in the brow region; and geometrically normalize image sequences within a range of plus or minus 20 degrees of out-of-plane rotation. In the competing renewal, we will increase the number of action unit combinations that are recognized, implement convergent methods of quantifying action unit intensity, increase the generalizability of action unit estimation to a wider range of image orientations, test facial image processing (FIP) in image sequences from directed facial action tasks and laboratory studies of emotion regulation, and facilitate the integration of FIP into existing data management and statistical analysis software for use by behavioral science researchers and clinicians. With these goals completed, FIP will eliminate the need for human observers in coding facial expression, promote standardized measurement, make possible the collection and processing of larger, more representative data sets, and open new areas of investigation and clinical application. |
1 |
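The sketch below illustrates, under broad assumptions, one stage of an automated FACS pipeline like the one described above: training and cross-validating a binary action-unit detector from pre-extracted geometric features. It is not the project's system; the features and labels are random placeholders.

```python
# Hedged sketch: frame-level detection of a single FACS action unit from
# placeholder shape features, with simple k-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_frames, n_features = 500, 68 * 2               # e.g., 68 tracked landmarks (x, y)
X = rng.standard_normal((n_frames, n_features))  # placeholder shape features
y = rng.integers(0, 2, n_frames)                 # placeholder AU present/absent labels

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)        # frame-level 5-fold cross-validation
print("mean CV accuracy:", round(scores.mean(), 3))
```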
2004 — 2005 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
@ University of Pittsburgh Infant smiles can predict later adaptive functioning, but little is known about the temporal course of infant smiles or their perceived emotional intensity. This collaborative project combines computer-based measurements of infant smiles with parents' ratings of those smiles. The project goals are to understand how infants smile, and to document the features that make infant smiles appear more or less joyful. |
0.915 |
2006 — 2008 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
Collaborative Research Dhb: Coordinated Motion and Facial Expression in Dyadic Conversation @ University of Pittsburgh When humans converse, semantic verbal content is accompanied by vocal prosody (the emphasis and timing of speech), head nods, eye movements, eyebrow raises, and mouth expressions such as smiles. Coordination between conversants' movements and/or facial expressions can be observed when an action generated by one individual is predictive of a symmetric movement by another: symmetry formation. The interplay between such symmetry formation and subsequent symmetry breaking in nonverbal behavior is integral to the process of communication and is diagnostic of the dynamics of human social interaction. The PIs propose a model in which low-level contributions from audition, vision, and proprioception (the perception of the angle of our joints) are combined in a mirror system that assists affective and semantic communication through the formation and breaking of symmetry between conversants' movements, facial expressions, and vocal prosody. In the current project, naive participants will engage in dyadic (one-on-one) conversations with trained laboratory assistants over a closed-circuit video system that displays a computer-reconstructed version of the lab assistant's head and face. Both the naive participant's and the lab assistant's motions, facial expressions, and vocalizations will be recorded. The visual and auditory stimuli available to the naive participant will be manipulated to provide specific hypothesis tests about the strength and timing of the effects of head movement, facial expression, and vocal prosody. The visual manipulation will be provided by a photorealistic reconstructed avatar head (i.e., a computer animation) driven partially by tracking the lab assistant's head and face, and partially from manipulation of timing and amplitude of the avatar's movement and facial expression. A combined differential equations and computational model for the dynamics of head movements and facial expression will be constructed and tested in real-time substitution for the lab assistant's head motion or facial expression as realized by the avatar. The broader impact of this project falls into three main areas: enabling technology for the study of human and social dynamics, applications to the treatment of psychopathology, and applications to human-computer interface design and educational technology. (1) These experiments will result in the advancement of methods for testing a wide variety of hypotheses in social interaction where the research question involves a manipulation of perceived social roles. (2) Automated analysis of facial expression provides on-line analysis of social interactions in small-group, high-stress settings in which emotion regulation is critical, such as in residential psychiatric treatment centers and in psychotherapist-client interactions. The reliability, validity, and utility of psychiatric diagnosis, assessment of symptom severity, and response to treatment could be improved by efficient measurement of facial expression and related non-verbal behavior, such as head gesture and gaze. (3) Successful outcomes from the computational models may lead to the development of automated computer interfaces and tutoring systems that could respond to students' facial displays of confusion or understanding, and thereby guide more efficient instruction and learning. |
0.915 |
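As a hedged illustration of the coordination (symmetry) measures discussed in the entry above, the following sketch computes a lagged correlation between two simulated conversants' movement signals. The project's differential-equation and computational models are far richer; everything here is invented for illustration.

```python
# Windowless lagged correlation between two partners' head-movement signals:
# a peak at a nonzero lag suggests one partner's motion predicts the other's.
import numpy as np

rng = np.random.default_rng(0)
fps, seconds = 30, 60
t = np.arange(fps * seconds)
partner_a = np.sin(2 * np.pi * t / 90) + 0.3 * rng.standard_normal(t.size)
partner_b = np.roll(partner_a, 15) + 0.3 * rng.standard_normal(t.size)  # b trails a by ~0.5 s

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of x with y shifted by each lag in [-max_lag, max_lag]."""
    # np.roll wraps around; edge effects are ignored for brevity in this sketch.
    return {lag: np.corrcoef(x, np.roll(y, lag))[0, 1]
            for lag in range(-max_lag, max_lag + 1)}

corr = lagged_correlation(partner_a, partner_b, max_lag=fps)  # +/- 1 second
best_lag = max(corr, key=corr.get)
print("peak correlation", round(corr[best_lag], 2), "at lag (frames):", best_lag)
```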
2010 — 2012 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
Eager: Spontaneous 4d-Facial Expression Corpus For Automated Facial Image Analysis @ University of Pittsburgh Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Currently, few publicly available, annotated databases exist. Those that do are limited to 2D static images or video of posed facial behavior. Further development is stymied by a lack of adequate training data. Because posed and un-posed (also known as "spontaneous") facial expressions differ along several dimensions, including complexity, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video is insufficient. A 3D video archive is needed. |
0.915 |
2011 — 2015 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
Collaborative Research: Communication, Perturbation, and Early Development @ University of Pittsburgh Young infants typically form lasting, emotional attachments to their caregivers. The strength and type of these attachments are related to emotional well-being and cognitive development. This project will explore how face-to-face interactions between infants and adults contribute to this important aspect of child development. |
0.915 |
2012 — 2016 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
@ University of Pittsburgh Emotion is the complex psycho-physiological experience of an individual's state of mind. It affects every aspect of rational thinking, learning, decision making, and psychomotor ability. Emotion modeling and recognition is playing an increasingly important role in many research areas, including human computer interaction, robotics, artificial intelligence, and advanced technologies for education and learning. Current emotion-related research, however, is impeded by a lack of a large spontaneous emotion data corpus. With few exceptions, emotion databases are limited in terms of size, sensor modalities, labeling, and elicitation methods. Most rely on posed emotions, which may bear little resemblance to what occurs in the contexts wherein the emotions are really triggered. In this project the PIs will address these limitations by developing a multimodal and multidimensional corpus of dynamic spontaneous emotion and facial expression data, with labels and feature derivatives, from approximately 200 subjects of different ethnicities and ages, using sensors of different modalities. To these ends, they will acquire a 6-camera wide-range 3D dynamic imaging system to capture ultra high-resolution facial geometric data and video texture data, which will allow them to examine the fine structure change as well as the precise time course for spontaneous expressions. Video data will be accompanied by other sensor modalities, including thermal, audio and physiological sensors. An IR thermal camera will allow real time recording of facial temperature, while an audio sensor will record the voices of both subject and experimenter. The physiological sensor will measure skin conductivity and related physiological signals. Tools and methods to facilitate and simplify use of the dataset will be provided. The entire dataset, including metadata and associated software, will be stored in a public depository and made available for research in computer vision, affective computing, human computer interaction, and related fields. |
0.915 |
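One practical step in assembling a multimodal corpus like the one described above is aligning sensor streams recorded at different rates. The sketch below shows a minimal, assumed approach (linear interpolation onto the video clock); the sampling rates and signals are placeholders, not the project's specifications.

```python
# Resample asynchronous sensor streams onto a common (video) timeline so that
# frame-level annotations can index every modality.
import numpy as np

def resample(timestamps, values, target_times):
    """Linearly interpolate a stream onto a shared set of timestamps."""
    return np.interp(target_times, timestamps, values)

duration = 10.0                                  # seconds (placeholder)
video_t = np.arange(0, duration, 1 / 25)         # assumed 25 fps video/3D capture clock
physio_t = np.arange(0, duration, 1 / 1000)      # assumed 1 kHz skin-conductance stream
thermal_t = np.arange(0, duration, 1 / 9)        # assumed ~9 Hz thermal camera

rng = np.random.default_rng(0)
physio = rng.standard_normal(physio_t.size).cumsum()
thermal = rng.standard_normal(thermal_t.size).cumsum()

physio_on_video = resample(physio_t, physio, video_t)
thermal_on_video = resample(thermal_t, thermal, video_t)
print(physio_on_video.shape, thermal_on_video.shape)
```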
2012 — 2016 | Cohn, Jeffrey F | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Automated Facial Expression Analysis For Research and Clinical Use @ University of Pittsburgh At Pittsburgh DESCRIPTION (provided by applicant): Facial expression has been a focus of emotion research for over a hundred years. In recent decades observations of facial expressions have yielded critical and dramatic insights about the etiology of psychopathology, and have proven capable of predicting treatment outcomes (see Ekman & Rosenberg, 2005). Despite these striking original findings, there has been surprisingly little follow-up work. The primary reason for the lack of sustained research is that the most reliable manual systems for measuring facial expression often require considerable training and are labor intensive. Automated measurement using computer vision and machine learning seeks to address the need for valid, efficient, and reproducible measurement. Recent systems have shown promise in fairly small studies using posed behavior or structured contexts with confederates, trained interviewers, or pre-trained (person-specific) face models. For automated coding to be applied in real-world settings, a large database with ample variability in pose, head motion, skin color, gender, partial occlusion, and expression intensity is needed. We have developed a unique database that meets this need and the algorithms necessary to enable robust automated coding. The database consists of 720 participants in three-person groups engaged in a group formation task. In a preliminary study, we demonstrated that our algorithms can successfully code two key facial signals associated with human emotion in this relatively unconstrained context (Cohn & Sayette, 2010). To achieve efficient, accurate, and valid measurement of facial expression usable in research and clinical settings, we aim to (1) train and validate classifiers to achieve reliable facial expression detection across this unprecedentedly large, diverse data set; (2) extend the previous person-specific methods to person-independent (generic) facial feature detection, tracking, and alignment; and (3) make these tools available for research and clinical use. |
1 |
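A key requirement stated above is person-independent (generic) performance. The sketch below shows one standard way to evaluate that: cross-validation folds grouped by participant, so no subject appears in both training and test sets. It is a generic illustration, not the project's pipeline; features and labels are random placeholders.

```python
# Subject-grouped cross-validation: a simple proxy for person-independent
# evaluation of an action-unit detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_frames, n_features, n_subjects = 2000, 50, 40
X = rng.standard_normal((n_frames, n_features))   # placeholder appearance/shape features
y = rng.integers(0, 2, n_frames)                  # placeholder AU present/absent labels
subjects = rng.integers(0, n_subjects, n_frames)  # participant ID for each frame

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(n_splits=5))
print("subject-independent CV accuracy:", round(scores.mean(), 3))
```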
2013 — 2016 | Chow, Sy-Miin; Cohn, Jeffrey F; Messinger, Daniel S. |
R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Modeling the Dynamics of Early Communication and Development @ University of Miami Coral Gables DESCRIPTION (provided by applicant): Significance. Computational modeling is central to a rigorous understanding of the development of the child's first social relationships. The project will address this challenge by modeling longitudinal change in the dynamics of early social interactions. Modeling will integrate objective (automated) measurements of emotion and attention and common genetic variants relevant to those constructs. Innovation. Objective measurement of behavior will involve the automated modeling and classification of the physical properties of communicative signals, such as facial expressions and vocalizations. Dynamic models of self-regulation and interactive influence during dyadic interaction will utilize precise measurements of expressive behavior as moderated by genetic markers associated with dopaminergic and serotonergic functioning. The interdisciplinary team includes investigators from developmental and quantitative psychology, genetics, affective computing, computer vision, and physics who model dynamic interactive processes at a variety of time scales. Approach. Infant-mother interaction, its perturbation, and its development will be investigated using the Face-to-Face/Still-Face (FFSF) procedure at 2, 4, 6, and 8 months. Facial, head, and arm/hand modeling will be used to conduct objective measurements of a multimodal suite of interactive behaviors including facial expression, gaze direction, head movement, tickling, and vocalization. Models will be trained and evaluated with respect to expert coding and non-experts' perceptions of emotional valence constructs. Dynamic approaches to time-series modeling will focus on the development of self-regulation and interactive influence. Inverse optimal control modeling will be used to infer infant and mother preferences for particular dyadic states given observed patterns of behavior. The context-dependence of these parameters will be assessed with respect to the perturbation introduced by the still-face (a brief period of investigator-requested adult non-responsivity). Individual differences in infant and mother behavioral parameters will be modeled with respect to genetic indices of infant and mother dopaminergic and serotonergic function. Modeling algorithms, measurement software, and coded recordings will be shared with the scientific community to catalyze progress in the understanding of behavioral systems. These efforts will increase understanding of pathways to healthy cognitive and socio-emotional development, and shed light on the potential for change that will inform early intervention efforts. |
0.955 |
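To make the notions of self-regulation and interactive influence in the entry above concrete, the following toy simulation uses coupled difference equations in which each partner's next state depends on their own and the partner's current state. This is an illustrative stand-in, not the project's dynamic model; all parameters are invented.

```python
# Coupled lag-1 difference equations for two partners' affective states:
# "self" terms capture self-regulation, "other" terms capture interactive influence.
import numpy as np

def simulate_dyad(a_self, a_other, b_self, b_other, steps=200, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    infant, mother = np.zeros(steps), np.zeros(steps)
    infant[0], mother[0] = 1.0, -0.5  # arbitrary starting states
    for t in range(1, steps):
        infant[t] = a_self * infant[t - 1] + a_other * mother[t - 1] + noise * rng.standard_normal()
        mother[t] = b_self * mother[t - 1] + b_other * infant[t - 1] + noise * rng.standard_normal()
    return infant, mother

# In this toy run the mother influences the infant more than the reverse.
infant, mother = simulate_dyad(a_self=0.8, a_other=0.3, b_self=0.8, b_other=0.1)
print(infant[:5].round(2), mother[:5].round(2))
```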
2014 — 2017 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
Workshop: Doctoral Consortium At the Acm International Conference On Multimodal Interaction 2014 @ University of Pittsburgh This is funding to support participation by about 8 graduate students from U.S. institutions, along with about 5 senior members of the ICMI community who will serve as mentors, in a Doctoral Consortium (workshop) to be held in conjunction with and immediately preceding the 16th International Conference on Multimodal Interaction (ICMI 2014), which will take place November 12-16, 2014, at Bogazici University in Istanbul, Turkey, and which is organized by the Association for Computing Machinery (ACM). The ICMI conference series is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. Topics of special interest to the conference this year include: multimodal interaction processing; interactive systems and applications; modeling human communication patterns; data, evaluation and standards for multimodal interactive systems; and urban interactions. ICMI 2014 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), special sessions, demonstrations, exhibits and doctoral spotlight papers. The ICMI 2014 proceedings will be published by ACM Press and included in the ACM Digital Library. As a further incentive for high-quality student participation ICMI 2014 will be awarding outstanding paper awards, with a special category for student papers. More information about the conference may be found online at http://icmi.acm.org/2014/. |
0.915 |
2014 — 2018 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
@ University of Pittsburgh Mental trauma following disasters, military service, accidents, domestic violence, and other traumatic events is a health issue costing multiple billions of dollars per year. Beyond its direct costs, there are indirect costs, including a 45-150% greater use of medical and psychiatric care. While web-based support systems have been developed, these are effectively a "one-size-fits-all" approach lacking the personalization of regular treatment and the engagement and effectiveness associated with a tailored regimen. This project brings together a multi-disciplinary team of leading researchers in trauma treatment, facial analysis, computer vision, and machine learning to develop a scalable, adaptive, person-centered approach that uses vision and sensing to improve web-based trauma treatment. In particular, the effort measures specific personalized variables during treatment and then uses a model to adapt treatment to individuals in need. |
0.915 |
2016 — 2019 | Cohn, Jeffrey | N/A (no activity code was retrieved) |
@ University of Pittsburgh This project will extend and sustain a widely used data infrastructure for studying human emotion, hosted at the lead investigator's university and available to the research community. The first two versions of the dataset (BP4D and BP4D+) contain videos of people reacting to varied emotion-eliciting situations, their self-reported emotion, and expert annotations of their facial expression. Version 1, BP4D (n=41), has been used by over 100 research groups and supported a successful community competition around recognizing emotion. The second version (BP4D+) adds participants (n = 140), thermal imaging, and measures of peripheral physiology. The current project greatly broadens and extends this corpus to produce a new dataset (BP4D++) that enables deep-learning approaches, increases generalizability, and builds research infrastructure and community in computer and behavioral science. The collaborators will (1) increase participant diversity; (2) add videos of pairs of people interacting to the current mix of individual and interviewer-mediated video; (3) increase the number of participants to meet the demands of recent advances in "big data" approaches to machine learning; and (4) expand the size and scope of annotations in the videos. They will also involve the community through an oversight and coordinating consortium that includes researchers in computer vision, biometrics, robotics, and cognitive and behavioral science. The consortium will be composed of special interest groups that focus on various aspects of the corpus, including groups responsible for completing the needed annotations, generating meta-data, and expanding the database application scope. Having an infrastructure to support emotion recognition research matters because computer systems that interact with people (such as phone assistants or characters in virtual reality environments) will be more useful if they react appropriately to what people are doing, thinking, and feeling. |
0.915 |
2017 — 2018 | Cohn, Jeffrey F | R03 (Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.) |
Craniofacial Microsomia: Facial Expression From Ages 1 to 3 Years @ University of Pittsburgh At Pittsburgh Project Summary. Significance. Craniofacial microsomia (CFM) impairs facial muscle movement, speech, and hearing, and compromises socio-emotional development. Children with CFM have elevated levels of internalizing behavior (shy, withdrawn) and reduced social competence and peer acceptance. Unknown at present are the mechanisms through which CFM and these social-emotional outcomes become linked. Facial asymmetries and cranial neuropathies associated with CFM likely play an important role in impairing socio-emotional outcomes. Asymmetries of the facial skeleton, soft tissue, and cranial nerve have both intra- and interpersonal effects. Intra-personally, they impact function (unilateral hearing loss, malocclusion, facial expressiveness) and form (noticeable craniofacial malformations), which can impair social signaling and responsiveness. Because asymmetry is negatively correlated with attractiveness, there may be non-specific social effects as well. Many surgical treatments for CFM are designed to restore facial symmetry in static pose (e.g., neutral expression). Less is known about restoring or even measuring spontaneous facial expressiveness. From a developmental perspective, one of the most important consequences of limitations in facial muscle movement is its potentially negative impact on affective communication. In a longitudinal design, we propose to test the hypothesis that deficits in facial expressiveness and structural and functional asymmetry increase risk for internalizing and externalizing problems. If supported, the findings would inform our understanding of socio-emotional development in children with CFM and contribute to clinical evaluation and treatment. Innovation. This is the first effort to (1) use automated, objective measurement of facial expressiveness of communicative behavior and functional asymmetry of children with CFM; (2) model change with development in these parameters and their relation to internalizing and externalizing problems; and (3) use machine learning to investigate how the dynamics of expressiveness and asymmetry relate to CBCL (Child Behavior Checklist) outcomes. Approach. Children with and without CFM will be video-recorded at 1 and 3 years with an examiner. At age 1, the interactive context is intended to elicit positive and negative emotion; at age 3, it assesses expressive speech and attention. Expressiveness and structural and functional asymmetry are assessed using automatic, objective, computer-vision-based measurement. Analyses include complementary approaches: statistical (regression and ANOVA) hypothesis testing and machine learning (convolutional neural networks). Relevance. Using objective, automatic, computer-vision-based measurements, we propose to test the hypothesis that deficits in facial expressiveness and structural and functional asymmetry among children with CFM increase their risk for internalizing and externalizing problems. If supported, clinical assessments of expressiveness and functional asymmetry could effectively target children for specialized interventions and be used to evaluate surgical interventions. Because the proposed procedures are cost effective, they could be applied in a wide range of settings to benefit children with craniofacial disorders and have applicability to other conditions and age groups in which facial expression is compromised (e.g., Mobius Syndrome, Bell's Palsy, injury/burns, and stroke). |
1 |
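The analysis plan above pairs statistical hypothesis testing with machine learning. The sketch below illustrates only the statistical arm on simulated data: a group comparison of expressiveness scores and a regression of internalizing scores on expressiveness. Group sizes, effect sizes, and variable names are invented placeholders.

```python
# Group comparison (ANOVA-style) and regression on simulated expressiveness data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented expressiveness scores (0-1) for a CFM group and a comparison group.
expressiveness_cfm = rng.normal(loc=0.45, scale=0.10, size=30)
expressiveness_comp = rng.normal(loc=0.55, scale=0.10, size=30)

# Group difference in expressiveness (one-way ANOVA; with two groups this equals a t-test).
f_stat, p_group = stats.f_oneway(expressiveness_cfm, expressiveness_comp)

# Does lower expressiveness relate to higher (invented) internalizing scores?
expressiveness = np.concatenate([expressiveness_cfm, expressiveness_comp])
internalizing = 60 - 20 * expressiveness + rng.normal(scale=3, size=60)
slope, intercept, r, p_reg, se = stats.linregress(expressiveness, internalizing)
print(round(p_group, 4), round(slope, 2), round(p_reg, 4))
```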
2017 — 2021 | Cohn, Jeffrey; Swartz, Holly |
N/A (no activity code was retrieved) |
Sch: Int: Collaborative Research: Dyadic Behavior Informatics For Psychotherapy Process and Outcome @ University of Pittsburgh Using multimodal indicators, this project will develop a novel computational framework that models individual and interpersonal behavior in relation to process and outcomes in psychotherapy and other interpersonal contexts. The unique aspect of the project is the explicit joint and dyadic modeling of individuals' multimodal behaviors to holistically understand the system of the dyad. This research will pave the way to a better understanding of the dyadic behavior dynamics in psychotherapy and beyond. The project will build the computational foundations to predict process and outcomes, and more broadly inform behavioral science: The project will (1) contribute to knowledge about the psychotherapeutic process by identifying and characterizing behavior indicators with respect to process and outcome measures; (2) deepen our understanding of dyadic coordination dynamics that contribute to strong working alliance between clients and therapists; (3) make available to the research and clinical communities the Dyadic Behavior Informatics framework and Behavior Indicator Knowledgebase for use in other settings; and (4) establish the foundation for novel education and training materials and interventions. The knowledge and computational tools developed as part of this project will impact computing and behavioral science and applied domains more broadly. |
0.915 |
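As a rough sketch of what the joint dyadic modeling described above can look like in practice, the example below concatenates per-session client and therapist features with a simple coordination feature and predicts a session-level outcome. It is a generic illustration under assumed feature names, not the project's Dyadic Behavior Informatics framework; all data are placeholders.

```python
# "Dyadic" feature vector = client features + therapist features + an
# interpersonal coordination feature, used to predict a session-level outcome.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sessions = 120
client_feats = rng.standard_normal((n_sessions, 10))    # e.g., AU/prosody summaries (invented)
therapist_feats = rng.standard_normal((n_sessions, 10))
synchrony = rng.uniform(0, 1, (n_sessions, 1))           # e.g., a head-movement coordination index

X = np.hstack([client_feats, therapist_feats, synchrony])
alliance = rng.normal(5, 1, n_sessions)                  # placeholder working-alliance ratings

model = Ridge(alpha=1.0)
print("CV R^2:", round(cross_val_score(model, X, alliance, cv=5).mean(), 3))
```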
2017 — 2021 | Cohn, Jeffrey F | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Automatic Multimodal Affect Detection For Research and Clinical Use @ University of Pittsburgh At Pittsburgh Project Summary. A reliable and valid automated system for quantifying human affective behavior in ecologically important naturalistic environments would be a transformational tool for research and clinical practice. With NIMH support (MH R01-096951), we have made fundamental progress toward this goal. In the proposed project, we extend current capabilities in automated multimodal measurement of affective behavior (visual, acoustic, and verbal) to develop and validate an automated system for detecting the constructs of Positive, Aggressive, and Dysphoric behavior and component lower-level affective behaviors and verbal content. The system is based on the manual Living in Family Environments Coding System that has yielded critical findings related to developmental psychopathology and interpersonal processes in depression and other disorders. Two models will be developed. One will use theoretically derived features informed by previous research in behavioral science and affective computing; the other, empirically derived features informed by Deep Learning. The models will be trained in three separate databases of dyadic and triadic interaction tasks from over 1300 adolescent and adult participants from the US and Australia. Intersystem reliability with manual coding will be evaluated using k-fold cross-validation for both momentary and session-level summary scores. Differences between models and in relation to participant factors will be tested using the general linear model. To ensure generalizability, we will also train and test across independent databases. To evaluate construct validity of automated coding, we will use the ample validity data available in the three databases to determine whether automated coding achieves the same or better pattern of findings with respect to depression risk and development. Following procedures already in place for sharing databases and software tools, we will design the automated systems for use by non-specialists and make them available for research and clinical use. Achieving these goals will provide behavioral science with powerful tools to examine basic questions in emotion, psychopathology, and interpersonal processes, and will give clinicians improved means to assess and track change in clinical and interpersonal functioning over time. Relevance. For behavioral science, automated coding of affective behavior from multimodal (visual, acoustic, and verbal) input will provide researchers with powerful tools to examine basic questions in emotion, psychopathology, and interpersonal processes. For clinical use, automated measurement will help clinicians to assess vulnerability and protective factors and response to treatment for a wide range of disorders. More generally, automated measurement would contribute to advances in intelligent tutors in education, training in social skills and persuasion in counseling, and affective computing more broadly. |
1 |
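The abstract above mentions evaluating intersystem reliability between automated and manual coding at both momentary and session levels. The sketch below illustrates that idea generically: frame-level agreement via Cohen's kappa and session-level agreement via correlation of summary scores, using simulated codes rather than project data.

```python
# Intersystem reliability sketch: frame-level kappa and session-level correlation
# between simulated manual and automated codes of a binary construct.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_sessions, frames_per_session = 30, 900

manual = rng.integers(0, 2, (n_sessions, frames_per_session))  # e.g., construct on/off per frame
# Simulated automated codes that agree with manual coding ~85% of the time.
automated = np.where(rng.uniform(size=manual.shape) < 0.85, manual, 1 - manual)

# Momentary (frame-level) reliability, pooled across sessions.
kappa = cohen_kappa_score(manual.ravel(), automated.ravel())

# Session-level summary scores: proportion of frames coded as the construct.
manual_summary = manual.mean(axis=1)
automated_summary = automated.mean(axis=1)
r = np.corrcoef(manual_summary, automated_summary)[0, 1]
print("frame-level kappa:", round(kappa, 2), "session-level r:", round(r, 2))
```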