2020 — 2023
Firestone, Chaz
Perceiving High-Level Relations @ Johns Hopkins University
The world contains not only objects and features, but also relations holding between them. When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the visual appearance of the individual objects (e.g., a red apple, a wooden bowl), but also the relations "containment" (in) and "support" (on). A surprising number of everyday tasks depend on these and other relational representations, such as assembling furniture, packing a suitcase, reading a medical chart, or navigating a scene. How does the mind represent visual relations themselves, beyond the objects participating in them? The work proposed here explores an exciting hypothesis about how the mind extracts relations from images: namely, that relations are properly *perceived*, in a fast and automatic manner akin to the perception of more familiar visual properties such as size, color, or shape. Across multiple case studies, including the perception of fit, balance, containment, support, adhesion, enclosure, and more, this work will take a psychophysical approach to the perception of relations, asking whether relational perception proceeds rapidly, automatically, reflexively, and in ways that interact with other perceptual processes.
The work proposed here has three primary aims: (1) To characterize the kinds of relations we perceive; (2) To understand how such relations are extracted by the mind; and (3) To elucidate their function in the mind at large. First, what kinds of relations can we see? Previous work has focused mostly on basic geometric and spatial relations (such as being beside, above, or behind); but objects in the world are related to each other in far richer ways. Here, the investigator will catalog the kinds of relations that appear in perception, with a special focus on "force-dynamic" relations, including combining, containing, supporting, balancing, covering, tying, connecting, hanging, and other relations in which objects exert physical forces on one another. Second, do we only consider, judge, or infer the relations between objects? Or can we also see them directly? The next aim of this proposal is to investigate the nature of relational perception, by asking whether the mind extracts visual relations in ways that show signatures of genuinely perceptual processing, such as speed, automaticity, reflexiveness (or cognitive impenetrability), and interaction with other perceptual processes. Third, why do we perceive relations at all? Are they just curious quirks of the mind, or do they support other kinds of knowledge? For example, once we see that two objects are connected, do our minds automatically predict that tugging on one will bring the other along? Once we see that two object-parts can combine into a whole, is it easier to remember or count such objects? The final aim of this project is to explore how relational perception supports other sophisticated kinds of mental processing, including automatic prediction of physical contingencies between objects.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2023
Landau, Barbara (co-PI); Firestone, Chaz; Bonner, Michael; Hafri, Alon
Neural Mechanisms of Relational Perception
This award was provided as part of NSF's Social, Behavioral and Economic Sciences Postdoctoral Research Fellowships (SPRF) program. The goal of the SPRF program is to prepare promising, early career doctoral-level scientists for scientific careers in academia, the private sector, and government. SPRF awards involve two years of training under the sponsorship of established scientists and encourage Postdoctoral Fellows to perform independent research. NSF seeks to promote the participation of scientists from all segments of the scientific community, including those from underrepresented groups, in its research programs and activities; the postdoctoral period is considered to be an important level of professional development in attaining this goal. Each Postdoctoral Fellow must address important scientific questions that advance their respective disciplinary fields. Under the sponsorship of Drs. Michael F. Bonner, Chaz Firestone, and Barbara Landau at Johns Hopkins University, this postdoctoral fellowship award supports an early career scientist investigating how the human brain represents visual relations. The world is more than a bag of objects: We see not only individual objects and their features (e.g., a fluffy cat or a textured mat) but also how they relate (a cat sitting ON a mat). A relation is a property holding between objects, beyond any properties the objects have on their own. How do we represent such relations? Although relations themselves cast no light onto our eyes, a growing body of work suggests that relations between objects are extracted in rapid and automatic visual processing, much as we automatically perceive an object's shape or color. Despite this, we have surprisingly little understanding of how the human brain represents such relations. For example, does the visual system automatically extract the structure of relations (distinguishing [mat on cat] from [cat on mat])?
And might the brain represent relations and the participating objects (e.g., cat, mat, and ON) in an integrated, "compressed" manner (much like a computer might compress the contents of a file or image)? The proposed research aims to answer these and other questions, using a set of physical relations (e.g., containment, support, adhesion, and fit) as a case study. By integrating methods from the fields of vision science and cognitive computational neuroscience, this research will advance our understanding of how the human brain extracts relational information from visual scenes. This research also has broad implications for understanding perceptual processing of physical relations, an unexplored but important domain in STEM education that is crucial for scientific understanding, e.g., of physical mechanics (such as the movement of gas particles in a container).
The proposed research combines psychophysical, neuroimaging, and computational modeling approaches to pursue three objectives, aimed at characterizing: (1) what properties of relations are perceived, (2) where relational information is represented in the brain, and (3) how relational structure is computed and represented. We focus on physical relations between objects (e.g., containment ["in"] and support ["on"]), as such relations are central to many other processes in the mind, including physical understanding (e.g., if the mat moves, will the cat move too?). In the first objective, we will use rapid perceptual tasks to measure the influence of relational properties on similarity judgment behavior. In the second objective, we will identify which areas of the brain encode visual relational information, by identifying brain regions in which the participating objects (e.g., cat, mat) are encoded in an integrated, non-linear manner (i.e., where relational representations are not well-approximated by simple weighted sums of the representations of the participating objects). In the third objective, we will test the hypothesis that the brain implements a compositional representation of visual relations, by asking whether a model that explicitly encodes relational structure (e.g., mat as Supporter, cat as Supported) can predict neural patterns for novel relational scenes. The proposed research engages a new frontier in scene representation: how the human brain computes high-level information about the relational structure of the world. It also has direct implications for theories of spatial cognition, language, and intuitive physics.
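The weighted-sum test in the second objective can be illustrated with a minimal simulation: if a region's response to a relational scene is well approximated by a linear combination of its responses to the participating objects presented alone, that region is not a candidate for an integrated relational code. The sketch below uses entirely hypothetical response patterns and helper names (nothing here is from the proposal's actual analysis pipeline); it simply contrasts a region whose scene response is a weighted sum of object responses with one whose response mixes the objects non-linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical single-object response patterns (e.g., "cat" alone, "mat" alone).
cat = rng.normal(size=n_voxels)
mat = rng.normal(size=n_voxels)

# Region A: the scene response is just a weighted sum of the object responses.
linear_scene = 0.6 * cat + 0.4 * mat

# Region B: an integrated, non-linear combination of the two objects.
nonlinear_scene = np.tanh(cat * mat) + 0.2 * cat

def linear_fit_r2(scene, objects):
    """R^2 of the best weighted-sum approximation of `scene` from `objects`."""
    X = np.column_stack(objects)
    weights, *_ = np.linalg.lstsq(X, scene, rcond=None)
    residual = scene - X @ weights
    return 1 - residual.var() / scene.var()

print(linear_fit_r2(linear_scene, [cat, mat]))     # near 1: well captured by a weighted sum
print(linear_fit_r2(nonlinear_scene, [cat, mat]))  # markedly lower: candidate relational code
```

In the actual neuroimaging analyses the "responses" would be measured voxel patterns rather than simulated vectors, but the logic is the same: a low weighted-sum R² flags regions where object information is combined non-linearly.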
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.