Affiliations: | 2004-2009 | Brain and Cognitive Sciences | Massachusetts Institute of Technology, Cambridge, MA, United States |
| 2009-2011 | | Harvard Medical School - Brigham and Women's Hospital |
| 2011-2017 | Computer Science | Stanford University, Stanford, CA, United States |
| 2015-2016 | Computational Science | Minerva Schools at KGI |
| 2017-2023 | Neuroscience | Bates College, Lewiston, ME, United States |
| 2023- | Psychology | Barnard College, Columbia University, New York, NY, United States |
Area:
Vision Science
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Michelle R. Greene is the likely recipient of the following grants.
Years |
Recipients |
Code |
Title / Keywords |
Matching score |
2009 — 2011 |
Greene, Michelle R. |
F32 |
Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Effect of Scene Contextual Relations For Guiding Real-World Visual Search @ Brigham and Women's Hospital
DESCRIPTION (provided by applicant): Visual search is a daily task for all of us, from finding our car keys to looking for a colleague in a crowd. Given the importance of this task, much research has been devoted to it, and thus we know a great deal about visual search in artificial two-dimensional displays. However, visual search in the real world occurs in complex, yet highly structured, three-dimensional environments. What are the principles that guide search in real-world scenes? A separate line of research has highlighted the role of contextual regularities between objects and scenes. In other words, knowing that a keyboard is found in offices helps the recognition of both keyboards and offices. Do such regularities help guide attention in real-world visual search problems? While the importance of these statistical regularities has been widely acknowledged, they have not been measured or quantified. It is necessary to measure these regularities to understand the role that they play in search. Here, we have amassed a large database of 3,500 scenes and have completely measured all objects and regions in these scenes. This rich dataset includes information on which objects occur in different scene categories, and the spatial distributions of the objects' positions. We propose to analyze this dataset to extract statistical regularities existing between objects and their scene context, as well as regularities from the co-occurrence structure between objects. We will use the formal framework of information theory to quantify the degree of regularity in these relationships. This allows us to put an upper bound on the amount of guidance we can expect from these statistics. Then, we will perform behavioral experiments examining the use of these statistics in real-world visual search problems.
These data allow us to ask questions and make predictions that have previously been impossible, therefore allowing real-world search to be studied in natural scenes in a controlled and principled way. Health relevance: Understanding how attention is deployed in real-world visual search tasks has many public health implications. Understanding difficult visual search problems could lead to better accuracy at interpreting x-ray and MRI data, as well as search for abnormalities during endoscopic surgery. Furthermore, understanding search can help aid those whose search abilities are compromised for visual (e.g., macular degeneration) or attentional (e.g., ADHD or age-related cognitive decline) reasons.
|
0.915 |
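The abstract's information-theoretic approach can be illustrated with a minimal sketch: estimating the mutual information between scene category and object identity from co-occurrence counts, which bounds how much knowing the scene can reduce uncertainty about which objects appear (and hence how much guidance those regularities could provide). The data below are invented toy observations, not the 3,500-scene database described in the grant.

```python
import math
from collections import Counter

# Hypothetical toy (scene_category, object) observations standing in for
# the annotated scene database; each pair records an object occurrence.
observations = [
    ("office", "keyboard"), ("office", "keyboard"), ("office", "mug"),
    ("kitchen", "mug"), ("kitchen", "stove"), ("kitchen", "stove"),
    ("office", "monitor"), ("kitchen", "mug"),
]

def mutual_information(pairs):
    """I(Scene; Object) in bits, estimated from empirical co-occurrence counts."""
    n = len(pairs)
    joint = Counter(pairs)                      # p(scene, object)
    scenes = Counter(s for s, _ in pairs)       # marginal p(scene)
    objects = Counter(o for _, o in pairs)      # marginal p(object)
    mi = 0.0
    for (s, o), c in joint.items():
        p_so = c / n
        p_s = scenes[s] / n
        p_o = objects[o] / n
        mi += p_so * math.log2(p_so / (p_s * p_o))
    return mi

print(round(mutual_information(observations), 3))  # → 0.656
```

Here the estimate (about 0.66 bits) is capped by the scene entropy (1 bit for two equiprobable categories), which is the sense in which such statistics place an upper bound on expected guidance.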