2009 — 2014 | Lu, Hongjing
CAREER: A Computational Investigation into Biological Motion Perception @ University of California-Los Angeles
Hongjing Lu, Principal Investigator
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
In everyday activities, from adjusting one's path to avoid colliding with other pedestrians, to recognizing a friend at a distance by his walking style, to sparring with a boxing partner in the gym, action perception plays a critical role in interpreting the intentions of other people and in interacting with the world. Despite the importance of action perception, there are major gaps in our understanding of three basic computational issues: (1) how efficiently human observers can process visual information to identify actions in different contexts; (2) how humans learn and represent action categories, such as walking, boxing, and dancing; and (3) how humans acquire the ability to understand social interactions by bridging perception and reasoning. With the support of an NSF CAREER award, Dr. Hongjing Lu will integrate computational modeling approaches with behavioral experiments to answer these three questions. This research will develop a deeper understanding of how visual information is used to achieve apparently effortless recognition of human actions at different processing levels. The work will significantly extend conventional computational approaches in order to render them applicable to visual processes more complex than those to which they have previously been applied.
Understanding the computational basis of action perception is essential to achieving a scientific account of our ability to understand the external world and to conduct social interactions. Furthermore, understanding how the human visual system perceives actions will guide the development of artificial vision systems that recognize and interpret complex biological movements. This research will improve a range of applications, including action visualization (e.g., deciding what visual information is important or can be ignored in 3D animation), security surveillance (e.g., detection and recognition of suspicious actions in airports or train stations), robotics (e.g., the ability of machines to interact effectively with people), assistive systems for the blind, and driver-assistance systems (e.g., blind-spot detection when backing up a car).
In addition, the integration of research and education activities in the project will provide students with training opportunities in interdisciplinary research, encompassing psychology, statistics, computer science and mathematics. A new Ph.D. Major in Computational Cognition will be established in the Psychology Department at UCLA. Mathematical training will be promoted at both the graduate and undergraduate levels in Psychology; at the same time, training in experimental research will be introduced to students with mathematical and computer science backgrounds.
2014 — 2017 | Lu, Hongjing
Understanding Biological Motion @ University of California-Los Angeles
A major issue in the psychological sciences is how people can infer the intentions of others. Humans are remarkably adept at predicting the actions of other people and making inferences about their intentions and goals. The present investigation examines how humans make such inferences from the physical movements of others. The work is guided by a computational theory of biological motion understanding that quantifies the action representations that allow people to make inferences in action recognition and prediction. The larger goal is to explain how perception and reasoning operate synergistically to infer hidden goals and intentions.
The proposed research has broad impact in several domains. The inference capacity of most people exceeds that of today's best machine vision systems. For example, in the investigation of the bombing at the Boston Marathon, extensive video from surveillance camera systems was available, but it was the trained human eye that led to arrests. Human investigators scrutinized hundreds of hours of video frame by frame and identified suspects who displayed suspicious behavioral patterns. Hence, understanding how humans make inferences and predictions about actions will play an important role in guiding the development of more advanced machine vision systems, useful in forensic sciences as well as many other real-world applications. In addition, individuals with autism or nonverbal learning disabilities often show difficulty in inferring the meaning of observed actions. Investigation of the key computational components underlying action understanding may potentially guide the development of behavioral interventions to facilitate compensatory strategies for understanding actions.
2017 — 2020 | Lu, Hongjing
Discovering Hierarchical Representations for Action Understanding @ University of California-Los Angeles
A major issue in the psychological sciences is understanding how people can infer the intentions of others. Humans are remarkably adept at predicting the actions of other people and making inferences about their intentions and goals. The present investigation examines how humans make such inferences from the physical movements of others. The work is guided by a computational theory of biological motion understanding that quantifies what aspects of actions allow observers to make inferences about the meaning of actions and what might come next. The larger goal is to explain how perception and reasoning operate synergistically to infer hidden goals and intentions. These findings will guide development of the next generation of intelligent machine-vision systems, useful in forensic sciences as well as many other real-world applications. Such systems will need to perform challenging tasks that currently are difficult and time-consuming for humans (for example, automated interpretation of human actions recorded in low-resolution surveillance video). The project will also help to identify individual differences in action understanding, potentially revealing the nature of the impairments in action understanding observed in people with autism spectrum disorder. In addition, the project will provide a unique training opportunity for students who are interested in interdisciplinary research at the interface between cognitive science and artificial intelligence, and will provide an in-depth international research experience for a graduate student and a postdoctoral fellow.
The research will integrate advanced psychophysical methods with sophisticated computational approaches. A key aim is to develop a unified theory based on a hierarchical non-parametric Bayesian framework, specifying the fundamental computational mechanisms involved in perception of human actions and reasoning about them. More generally, the project will use human body movements as an underutilized approach to understanding general problems in learning: how to construct, use, and transform hierarchical representations to support human perception and cognition. Three aims are particularly noteworthy. First, the project will integrate computational modeling approaches with behavioral experiments to investigate the critical connection between perceptual and cognitive systems. Second, the project will use action stimuli derived from real-world motion-capture data as the visual input (CCTV images collected in the UK and secured at the University of Glasgow). By avoiding the limitations of studies that use restricted examples and constrained environments, the investigators maximize the likelihood that the findings will generalize to real-world situations. Third, the project will develop significant extensions of Bayesian approaches in order to study complex visual processes by combining generative models with probabilistic constraints. This award is co-funded by the Perception, Action, and Cognition Program and the Office of International Science and Engineering.
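The abstract does not spell out the framework's mathematics, but a minimal sketch can convey the flavor of the Bayesian inference it describes: an observer combines a prior over action categories with the likelihood of noisy motion features to compute a posterior over actions. All category names, feature dimensions, and parameter values below are illustrative assumptions, not the project's actual model.

```python
# Hypothetical sketch of Bayesian action-category inference:
# P(action | features) is proportional to P(features | action) P(action).
import numpy as np

rng = np.random.default_rng(0)

# Assumed action categories with isotropic Gaussian feature likelihoods;
# the two features might summarize joint-trajectory statistics.
categories = {
    "walking": {"mean": np.array([1.0, 0.2]), "sd": 0.3},
    "boxing":  {"mean": np.array([0.1, 1.5]), "sd": 0.5},
    "dancing": {"mean": np.array([0.8, 1.0]), "sd": 0.6},
}
prior = {name: 1.0 / len(categories) for name in categories}  # uniform prior

def log_likelihood(x, params):
    """Log Gaussian likelihood of features x under one action category."""
    d = x - params["mean"]
    return -0.5 * (d @ d) / params["sd"] ** 2 - len(x) * np.log(params["sd"])

def posterior(x):
    """Posterior over categories by enumerating the small hypothesis space."""
    log_post = {c: np.log(prior[c]) + log_likelihood(x, p)
                for c, p in categories.items()}
    m = max(log_post.values())                 # subtract max for stability
    unnorm = {c: np.exp(v - m) for c, v in log_post.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# A noisy observation near the "walking" prototype: the posterior mass
# should concentrate on "walking".
x_obs = categories["walking"]["mean"] + rng.normal(0.0, 0.3, size=2)
print(posterior(x_obs))
```

The hierarchical, non-parametric machinery described in the abstract would go further, replacing this fixed category list with structured representations whose number and form are themselves inferred from data.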
2018 — 2021 | Lu, Hongjing; Holyoak, Keith
Collaborative Research: CompCog: Achieving Analogical Reasoning via Human and Machine Learning @ University of California-Los Angeles
Despite recent advances in artificial intelligence, humans remain unmatched in their ability to think creatively. Intelligent machines can use massive data to learn to identify patterns that are similar to learned examples, but people can use very small amounts of data to discover deep similarities between situations that are superficially very different (e.g., engineers have devised a cooling system for buildings using principles adapted from termite mounds). This type of creative thinking depends on analogy: the ability to find and exploit resemblances based on relations among entities, rather than solely on superficial appearances. The present investigation aims to show how relations can be learned from examples (in the form of either texts or pictures) and then used to reason by analogy. The work integrates recent advances in machine learning with more human-like learning mechanisms. Improved analogy models will increase the power of computer-based information retrieval, allowing both text and pictures to serve as retrieval cues to search large databases for items that are analogous in relational structure. The large analogy datasets generated for the project will be made publicly available. More flexible search engines will help to automate creative tasks such as engineering design. Identifying the computational basis for relation learning and analogical reasoning will guide development of artificial intelligence systems by providing more efficient learning mechanisms. The research team is integrating research and education activities by using this project as a training opportunity in interdisciplinary research, encompassing psychology, statistics, computer science and mathematics.
The research will integrate advanced computational approaches with behavioral experiments on human relation learning and analogical reasoning, using both texts and pictures as inputs. The work is guided by cognitive theory on learning and reasoning, and exploits recent advances in the field of machine vision. The project includes the creation and validation of multiple databases of analogy problems. Experiments will be performed to establish human performance levels in a variety of tasks. Computational models will be developed by synergizing big-data learning through deep networks with small-data learning through Bayesian modeling. Models will be evaluated by comparison with human benchmarks. By addressing issues that arise in reasoning from natural inputs such as texts and pictures, the models to be developed will generalize to situations that people encounter in their daily life.
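As a purely illustrative aside (not the investigators' model), the sketch below shows the core idea of relation-based analogical mapping over learned vector representations: an analogy A:B :: C:? is solved by finding the candidate whose relation to C best matches the relation between A and B. The toy words and vectors are invented stand-ins for embeddings that a deep network trained on big data would supply.

```python
# Illustrative sketch of relation-based analogy over embeddings
# (a common baseline technique, not the project's actual model).
import numpy as np

# Hypothetical word embeddings; in practice a deep network trained on
# large corpora would supply these vectors.
emb = {
    "puppy":  np.array([0.9, 0.1, 0.3]),
    "dog":    np.array([1.0, 0.9, 0.3]),
    "kitten": np.array([0.2, 0.1, 0.8]),
    "cat":    np.array([0.3, 0.9, 0.8]),
    "stone":  np.array([0.5, 0.5, 0.1]),
}

def relation(a, b):
    """Represent the relation a:b as a difference of embedding vectors."""
    return emb[b] - emb[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, candidates):
    """Pick the d whose relation to c is most similar to the a:b relation."""
    r_ab = relation(a, b)
    return max(candidates, key=lambda d: cosine(r_ab, relation(c, d)))

# puppy : dog :: kitten : ?  ->  selects "cat" over "stone", because the
# kitten-to-cat displacement parallels the puppy-to-dog displacement.
print(solve_analogy("puppy", "dog", "kitten", ["cat", "stone"]))
```

A small-data Bayesian component of the kind the abstract describes would, for example, place priors over candidate relations so that reliable relational representations can be learned from only a handful of examples rather than from raw embedding differences alone.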
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2023 | Holyoak, Keith; Lu, Hongjing
Collaborative Research: How Does the Brain Represent Abstract Concepts? @ University of California-Los Angeles
The ability to reason about the relations between sets of concepts (relational reasoning) gives rise to abstract thought, and has fueled some of humanity's greatest achievements in science and technology. Although prior research has identified where in the brain relational reasoning takes place, this project pushes the research field forward by addressing how the brain represents abstract relations. Specifically, the project aims to address three key questions: (1) Can the brain represent an abstract idea independently of the concrete entities that comprise the content of the idea? (2) Do people represent concepts in an abstract manner only when explicitly required to do so, or are abstract relations also retrieved spontaneously? (3) What neural markers reliably predict differences in reasoning capacity between individuals? That is, do individuals whose brains represent abstract relations more readily also tend to have stronger reasoning skills, and/or to perceive meaningful connections that others miss? This project will identify the computational basis for abstract thought and reasoning, thereby creating an opportunity to refine artificial intelligence systems by providing them with more efficient learning mechanisms. This work will inform future research examining how children, and adults as lifelong learners, form representations of abstract concepts.
This project integrates recent advances in multivariate fMRI, computational modeling, and behavioral methodology to discover the neurocognitive mechanisms underlying the representation of abstract relations. Research will systematically examine the neural bases of this representation, as well as the influence of task context and individual differences. First, behavioral priming and neural similarity measures, alongside metrics from a computational model of relational reasoning, will characterize the overlap in representation between pairs of concepts that are only abstractly related. Second, manipulation of task demands will determine whether the magnitude, location, and stability of neural representations vary with explicit cognitive instructions. Finally, development of a novel 'neural score' metric will determine neural markers of individual differences in relational reasoning.
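To make the neural-similarity logic concrete, here is a minimal simulated sketch (assumed toy data, not project methods or results): the Pearson correlation between multivoxel activity patterns evoked by two concepts serves as a simple index of representational overlap, which should be elevated for pairs that are abstractly related.

```python
# Simulated sketch of a neural pattern-similarity analysis; the arrays
# are random stand-ins for real fMRI voxel patterns.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200

# Two abstractly related concepts share a common relational component;
# an unrelated control concept does not.
relational_component = rng.normal(size=n_voxels)
pattern_a = relational_component + rng.normal(scale=1.0, size=n_voxels)
pattern_b = relational_component + rng.normal(scale=1.0, size=n_voxels)
pattern_c = rng.normal(size=n_voxels)  # unrelated control concept

def neural_similarity(p, q):
    """Pearson correlation between two multivoxel patterns."""
    return float(np.corrcoef(p, q)[0, 1])

print("related pair:  ", round(neural_similarity(pattern_a, pattern_b), 2))
print("unrelated pair:", round(neural_similarity(pattern_a, pattern_c), 2))
# The related pair should show reliably higher pattern similarity.
```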
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.