Yali Amit - US grants
Affiliations: Computational Neuroscience, University of Chicago, Chicago, IL
Area: Neuroscience, Biology

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Yali Amit is the likely recipient of the following grants. Each entry lists: Years | Recipients | Code | Title / Keywords | Matching score.
2002 — 2005 | Amit, Yali; Geman, Donald; Miller, Michael (co-PI) | N/A | Itr: Invariant Detection and Interpretation of Specific Objects in Image Data @ Johns Hopkins University | 0.951
2004 — 2010 | Amit, Yali; Geman, Donald; Younes, Laurent (co-PI); Geman, Stuart (co-PI) | N/A | Itr - (Ase+Nhs) - (Dmc+Int): Triage and the Automated Annotation of Large Image Data Sets @ Johns Hopkins University (Proposal: 0427223) | 0.951
2007 — 2011 | Amit, Yali | N/A | @ University of Chicago | 1

Abstract: The introduction of statistical techniques in computer vision has yielded a number of interesting algorithms able to partially solve certain constrained recognition problems. However, limitations on computing power and available training data impose difficult tradeoffs that are rarely quantified, so choices of parameters and models are typically made in an ad hoc manner. These tradeoffs can only be quantified in a context where the statistical properties of the objects and their appearance in the images are well defined, yet this is far from the case in real images. The alternative, which is the goal of this project, is to perform an analysis of the same issues in a synthetic stochastic setting, using a generative model for images. Object classes are stochastically generated and instantiated in the images, together with clutter, occlusion and noise. The generative model should be rich enough to qualitatively pose the same problems as real images, yet sufficiently simple to enable quantitative analysis; hence this is not an attempt to synthesize real images. Questions regarding the limits of feasibility of tasks such as detection and classification, as a function of key parameters defining the generative model, are analyzed quantitatively, in particular the tradeoff between accuracy and computation time. The emphasis on integrating computation time into the analysis gives rise to new types of statistical questions and new forms of asymptotic regimes as functions of the image resolution, the number of distinct classes and their variability. The hope is that the proposed framework will offer a setting in which systematic algorithmic choices can be made and contribute to the development of concrete computer vision algorithms.
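The abstract above centers on a synthetic stochastic setting in which object classes are generated and placed in images together with clutter, occlusion and noise, so that detection and classification can be analyzed as a function of the model's parameters. The sketch below is only an illustration of what such a generative setup can look like; it is not the project's actual model, and every function, parameter and value in it (template size, jitter, clutter count, flip probability) is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class_template(size=8, n_on=12):
    """A hypothetical object class: a set of 'on' pixel offsets within a size x size patch."""
    idx = rng.choice(size * size, size=n_on, replace=False)
    return np.stack(np.unravel_index(idx, (size, size)), axis=1)  # shape (n_on, 2)

def instantiate(template, jitter=1):
    """Stochastic instance of a class: each template pixel is jittered independently."""
    return template + rng.integers(-jitter, jitter + 1, size=template.shape)

def render_scene(templates, img_size=32, clutter=40, flip_prob=0.02):
    """Place one instance of each class at a random location, then add clutter and pixel noise."""
    img = np.zeros((img_size, img_size), dtype=np.uint8)
    labels = []
    for c, tpl in enumerate(templates):
        pos = rng.integers(0, img_size - 8, size=2)            # top-left corner (8 = template size)
        pts = np.clip(instantiate(tpl) + pos, 0, img_size - 1)
        img[pts[:, 0], pts[:, 1]] = 1
        labels.append((c, tuple(pos)))                         # ground-truth class and location
    cl = rng.integers(0, img_size, size=(clutter, 2))          # clutter: 'on' pixels unrelated to objects
    img[cl[:, 0], cl[:, 1]] = 1
    flips = rng.random(img.shape) < flip_prob                  # noise: flip pixels with small probability
    return np.where(flips, 1 - img, img), labels

templates = [make_class_template() for _ in range(3)]          # three object classes
image, ground_truth = render_scene(templates)
print(image.shape, ground_truth)
```

Because the scene statistics are fully specified by these parameters, quantities such as detection accuracy or the running time of a candidate algorithm can be measured exactly as the parameters vary, which is the kind of accuracy-versus-computation tradeoff analysis the abstract describes.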
2017 — 2021 | Amit, Yali; Brunel, Nicolas; Freedman, David Jordan (co-PI) | R01 | Crcns: Multiscale Dynamics of Cortical Circuits For Visual Recognition & Memory @ University of Chicago | 1

Activity code R01: to support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.

Abstract: This proposal aims to integrate two streams of research on learning and memory in an attempt to strengthen the links between theory and experiment, build models that explain experimental observations and use model predictions to guide new experiments. The experimental stream will record neuronal population activity in inferior temporal, perirhinal and prefrontal cortices during performance of delayed matching tasks, which require maintenance of visual information in short-term memory, using visual stimuli with various degrees of familiarity (from entirely novel to highly familiar). The modeling stream will investigate learning and memory in network models that include learning rules inferred from data, using a combination of mean field analysis and simulation. Models will generate predictions on patterns of delay period activity that will be tested using experimental data. The goals of this combined experimental and theoretical project will be to answer the following questions:

· How do changes in synaptic connectivity induced by learning, due to repeated presentation of a particular stimulus, affect the distributions of visual responses of neurons? In other words, how do neuronal representations change in cortex as a novel stimulus becomes familiar? Can we infer the learning rule in cortical circuits from experimentally observed changes in distributions of neuronal responses as the stimuli become familiar?

· Do changes in synaptic connectivity induced by learning rules that are consistent with the statistics of visual responses lead to delay period activity in a task such as the OMS task? Is delay period activity already present upon the first presentation of a stimulus, or does it develop over time? If it is not present during the initial presentations, how is sample information maintained in memory during the delayed match to sample task?
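The modeling stream described in this abstract studies how learning rules shape synaptic connectivity and whether the resulting networks sustain delay period activity. As a minimal, hypothetical sketch in that spirit (a binary Hopfield-style network with a plain Hebbian rule, not the model the project proposes), the following shows repeated presentations of a stimulus imprinting it in the recurrent weights so that a degraded cue is maintained after the stimulus is removed; all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                       # number of model neurons
W = np.zeros((N, N))          # recurrent synaptic weights

def hebbian_update(W, pattern, lr=1.0 / N):
    """Simple Hebbian rule: each presentation of a pattern strengthens its imprint in W."""
    dW = lr * np.outer(pattern, pattern)
    np.fill_diagonal(dW, 0.0)              # no self-connections
    return W + dW

def run_delay_period(W, cue, steps=20):
    """Iterate the network after the cue is removed; persistent activity acts as memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

stimulus = rng.choice([-1.0, 1.0], size=N)   # a 'novel' visual stimulus

# Familiarization: present the same stimulus repeatedly, updating synapses each time.
for presentation in range(5):
    W = hebbian_update(W, stimulus)

# Delay period: start from a degraded cue (partial information) and let the dynamics run.
cue = stimulus.copy()
flip = rng.random(N) < 0.2                   # corrupt 20% of the cue
cue[flip] *= -1

state = run_delay_period(W, cue)
overlap = float(state @ stimulus) / N        # 1.0 means the familiar stimulus is fully retrieved
print(f"overlap with familiar stimulus after delay: {overlap:.2f}")
```

Varying the number of presentations before the delay period in such a toy model gives a crude handle on the abstract's question of whether delay activity is already present for novel stimuli or only develops as a stimulus becomes familiar.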