Thomas L. Griffiths - US grants
Affiliations: University of California, Berkeley, Berkeley, CA, United States
Area: computation
Website: http://cocosci.berkeley.edu/tom/
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Thomas L. Griffiths is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2006 — 2010 | Griffiths, Thomas | N/A (no activity code was retrieved) |
Collaborative Research: Bayesian Methods For Learning and Analyzing Natural Languages @ University of California-Berkeley Language is one of the most complex aspects of human behavior, and provides the foundation for many kinds of social interaction. The question of how people learn and use language is a subject of extensive research in several behavioral sciences, including cognitive science, psychology, and linguistics. There is a long tradition of using formal approaches to explore answers to this question, and recent work has begun to emphasize the importance of statistical models. With support from the National Science Foundation, Dr. Griffiths at UC Berkeley and Dr. Johnson at Brown University will develop and investigate new methods and models for learning and analyzing natural languages based on Bayesian statistics. In Bayesian statistics, the information about the structure of language provided by linguistic data is combined with a "prior" distribution that constrains the structures under consideration. This approach can make it easier to learn the properties of a language from limited amounts of data, and has a direct connection to theories of human language acquisition that emphasize the role of constraints in learning. This research project aims to integrate the statistical models used for learning and analyzing language with two methods from modern Bayesian statistics: Markov chain Monte Carlo algorithms and nonparametric Bayesian models. These methods make it possible to apply Bayesian inference in complex models of the kind that people typically work with in cognitive science and linguistics. The results of this project will provide new ways of working with traditional models of language, and lead to new models that are potentially of relevance to explaining how people acquire language. 
By exploring how contemporary statistical methods can be applied to the probabilistic models used in computational linguistics, this project will build closer connections between statistics, linguistics, and cognitive science, and provide opportunities for students to receive training in topics at the intersection of these disciplines. |
0.915 |
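The Bayesian approach described in this abstract combines a prior that constrains the hypothesis space with the likelihood of observed data. A minimal sketch of that combination (the "grammars", probabilities, and data below are invented for illustration and are not from the project):

```python
def posterior(hypotheses, prior, likelihood, data):
    """Return P(h | data) for each hypothesis h via Bayes' rule."""
    unnorm = {h: prior[h] * likelihood(h, data) for h in hypotheses}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two toy "grammars" that generate symbol strings with different probabilities.
def likelihood(h, data):
    p_a = {"grammar_A": 0.9, "grammar_B": 0.5}[h]  # P(symbol == "a") under h
    out = 1.0
    for symbol in data:
        out *= p_a if symbol == "a" else 1.0 - p_a
    return out

post = posterior(["grammar_A", "grammar_B"],
                 {"grammar_A": 0.5, "grammar_B": 0.5},
                 likelihood, data=list("aaab"))
```

With limited data, the prior keeps both hypotheses in play while the likelihood tilts the posterior toward the grammar that better predicts the observations.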
2008 — 2009 | Griffiths, Thomas | N/A |
Probabilistic Models of Learning and Cognitive Development, May 2009 Workshop, Banff, Canada @ University of California-Berkeley The proposed workshop, to be held at the Banff International Research Station for Mathematical Innovation and Discovery, aims to capitalize on a major new direction in research on formal models of human cognition, exploring probabilistic models of learning and cognitive development. The technical advances that have been made in the use of probabilistic models over the last twenty years in statistics, computer science, and machine learning have made this research enterprise possible, resulting in a set of mathematical and computational tools that can be used to build explicit models of psychological phenomena. By indicating the conclusions that a rational learner might draw from the data provided by experience, Bayesian models can be used to investigate how nature and nurture contribute to human knowledge. Although computational models have been used to aid empirical research on learning in the past, the lack of communication and collaboration between formal theorists and experimental laboratories has always been a stumbling block. This workshop will bring together two groups of researchers: experts in computational modeling and scientists studying cognitive development. The goal is both to report and discuss the progress made so far in existing collaborative research and to foster future collaborations between computational scientists and learning researchers, leading to new insights and new models of how people learn and develop. A special emphasis will be placed on developing strategies for applying these insights to educational research and practice. |
0.915 |
2009 — 2014 | Griffiths, Thomas | N/A |
Career: Connecting Human and Machine Learning Through Probabilistic Models of Cognition @ University of California-Berkeley People are able to learn new concepts much faster than computers, often requiring only a handful of examples where a computer might require hundreds. This remarkable ability is partly the consequence of extensive experience with the world, resulting in strong prior knowledge about the kinds of objects that are likely to form categories. This research project bridges the gap between human and machine learning by developing probabilistic models of human category learning, connecting psychological data with the latest theories from computer science and statistics. These mathematical and computational models are used to explore how people learn categories so quickly, to capture the effects of prior knowledge on categorization, and to build a catalogue of human concepts that can be used to test psychological theories and to train machine learning systems. In each case, the research combines the ideas, methods, and sources of data used in psychology and computer science, using hierarchical Bayesian models and Markov chain Monte Carlo algorithms to model human cognition, laboratory experiments to test these models, and large databases as a source of statistical information that guides model predictions. This research program is integrated with an educational plan that incorporates undergraduate and graduate teaching and mentoring, development of a textbook on probabilistic models of cognition, tutorials and workshops aimed at increasing contact between the computer science and psychology communities, and outreach through talks and a website. |
0.915 |
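The abstract above names Markov chain Monte Carlo algorithms as a core tool. A minimal Metropolis-Hastings sketch (the target distribution and all parameters are illustrative, not the project's models) shows the central idea: sampling from a distribution known only up to a normalizing constant.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Minimal Metropolis-Hastings sampler with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: an unnormalized standard normal; the sampler never needs
# the normalizing constant, which is the point of the method.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

The sample mean and variance should approximate those of the standard normal target as the chain grows.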
2010 — 2014 | Griffiths, Thomas; Gopnik, Alison | N/A |
@ University of California-Berkeley In the course of development, children change their beliefs, moving from a less to a more accurate picture of the world. How do they do this when there are apparently an infinite variety of beliefs from which to choose? And how can we reconcile children's cognitive progress with the apparent irrationality of many of their explanations and predictions? In computer science, probabilistic models have provided a powerful framework for characterizing beliefs, and can tell us when beliefs are justified by the evidence. But they face similar questions: how can one actually get from less warranted beliefs to more accurate ones given a vast space of possibilities? This project brings these threads together, suggesting a possible solution to both challenges. The solution is based on the idea that children may form their beliefs by randomly sampling from a probability distribution of possible hypotheses, testing those sampled hypotheses, and then moving on to sample new possibilities. This "Sampling Hypothesis" provides a natural bridge between understanding how children actually do learn and reason and how computers can be designed to learn and reason optimally. These experiments will provide an important first step in exploring the Sampling Hypothesis: how evidence and prior beliefs shape the samples of possible beliefs that children generate and evaluate, and how developmental changes lead to differences in those samples. |
0.915 |
2010 — 2014 | Griffiths, Thomas; Klein, Dan | N/A |
Ri: Small: Probabilistic Models For Reconstructing Ancient Languages @ University of California-Berkeley One of the oldest problems in linguistics is to reconstruct ancient protolanguages on the basis of their modern descendants. Identifying ancestral word forms makes it possible to evaluate proposals about the nature of language change and to draw inferences about human prehistory. Currently, linguists painstakingly reconstruct protolanguages by hand, using knowledge of the relationships between languages and the plausibility of sound changes. This research project develops statistical, computational methods that automate or augment the reconstruction process. Unlike past computational approaches, these new models use detailed phonological representations to infer hidden sound changes. Moreover, they automatically infer which words are co-descendent (cognates). |
0.915 |
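As a loose illustration of automatic cognate identification (a toy edit-distance heuristic, far simpler than the phonological models the project describes; the word pairs are ordinary dictionary forms used only as examples):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cognate_score(a, b):
    """Similarity in [0, 1]; higher suggests shared ancestry (toy heuristic)."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Spanish/Italian "night" are plausibly cognate; the "dog" words are not.
score_noche_notte = cognate_score("noche", "notte")
score_perro_cane = cognate_score("perro", "cane")
```

Real systems replace raw edit distance with learned, phoneme-aware sound-change models, but the scoring-and-thresholding structure is similar.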
2012 — 2017 | Theunissen, Frederic (co-PI); Griffiths, Thomas; Gallant, Jack | N/A |
@ University of California-Berkeley The overarching goal of this project is to discover how language-related information is represented and processed in the human brain. To address this issue we propose to use a novel computational modeling approach, voxel-wise modeling. Voxel-wise modeling draws from the principles of nonlinear system identification, and it provides an efficient method for using complex data sets collected under naturalistic conditions to test multiple hypotheses about language representation. The specific research plan is divided into three aims, each targeted at a different form of language-related information. Aim 1 will reveal how low-level features of speech, such as spectral power, spectral modulation and phonemic structure, are represented across human cortex. Subjects will passively listen to human speech while hemodynamic brain activity is recorded by functional MRI. Voxel-wise modeling will then be used to determine how each point in the brain (i.e., each voxel, or volumetric pixel) is tuned for these various features. Using analogous methods, Aim 2 will reveal how syntactic and semantic features are represented across cortex. Finally, Aim 3 will reveal how language-related information is represented when it is delivered by auditory versus visual modalities. In this case speech and video stimuli will be used. Separate models will be estimated for data recorded during auditory and visual stimulation, and voxel-wise tuning will be compared across modalities. The voxel-wise computational models developed under this proposal will reveal how these various types of language-related information are represented across the cortical surface. These models will also provide clear predictions about how the brain will respond to novel speech stimuli. 
The results of the proposed research will have broad impacts on clinical problems related to speech perception and production, and they could form the basis of a powerful brain-decoding device that would enable neurological patients to communicate by thought alone. |
0.915 |
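Voxel-wise encoding models of the kind described above are commonly fit with regularized linear regression, one weight vector per voxel. A minimal ridge-regression sketch on simulated data (all shapes, noise levels, and the feature semantics are invented for illustration):

```python
import numpy as np

def fit_voxel_models(X, Y, lam=1.0):
    """Ridge-regression encoding models, one weight vector per voxel.
    X: (time, features) stimulus features; Y: (time, voxels) responses."""
    n_feat = X.shape[1]
    # Closed-form ridge solution, solved for all voxels at once.
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)
    return W  # (features, voxels): each column is one voxel's tuning

# Simulated data: responses are a noisy linear function of features
# (standing in for spectral or semantic stimulus descriptors).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))
true_W = rng.standard_normal((8, 3))
Y = X @ true_W + 0.1 * rng.standard_normal((500, 3))
W_hat = fit_voxel_models(X, Y, lam=1.0)
```

The estimated columns recover each simulated voxel's tuning; in the real setting the fitted weights are interpreted as how each voxel is tuned to the stimulus features, and held-out predictions test the model.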
2013 — 2017 | Griffiths, Thomas; Gopnik, Alison | N/A |
Rational Randomness: Search, Sampling and Exploration in Children's Causal Learning. @ University of California-Berkeley How do young children learn so much about the world so quickly and accurately? And how can they learn so much when, at the same time, they often seem so irrational and unpredictable? The research in this proposal will help answer these questions by bringing together ideas from computer science with research on very young children. The basic idea is that young children learn in some of the same ways as the most powerful machine-learning programs. Both the children and the computers explore a wide range of more and less likely possibilities. Moreover, children may sometimes actually explore more widely than adults and so be smarter or at least more open-minded learners. Some of their apparently irrational play, like their wide-ranging pretend play, may really reflect powerful learning methods. |
0.915 |
2013 — 2018 | Gopnik, Alison (co-PI); Keltner, Dacher (co-PI); Griffiths, Thomas | N/A |
Data On the Mind: Center For Data-Intensive Psychological Science @ University of California-Berkeley Psychological research has traditionally been conducted using laboratory experiments, bringing a small number of people into a research laboratory and asking them to complete a task. But the existence--and increasing availability--of online datasets on human behavior and new technologies for data collection suggests a different approach might be possible: mining large databases for clues about how people reason, learn, and interact. Dr. Griffiths, Dr. Gopnik, and Dr. Keltner will establish a research center at the University of California, Berkeley to explore the potential of this data-intensive approach to psychological science. The research center will work with a network of researchers across the country and companies developing technologies for collection of behavioral data to establish pilot projects in cognitive psychology, developmental psychology, and social psychology. These pilot projects will include examining what online databases reveal about human reasoning, how mobile devices can be used to study how children learn, and whether interactions on social networking websites can answer questions about human emotion. |
0.915 |
2014 — 2016 | Griffiths, Thomas; Suchow, Jordan | N/A |
The Dynamics of Updating and Transmitting Individual and Collective Memories @ University of California-Berkeley Memory and culture form the foundation of human knowledge. The accumulation of knowledge across generations drives scientific progress, technological advancement, and the maintenance of societies through shared social norms. Understanding the fundamental cognitive processes that shape memory and cultural transmission has traditionally fallen within the purview of psychologists and anthropologists. However, advances in fields beyond the social sciences, including statistics, machine learning, evolutionary dynamics, and network science, offer a rich set of computational techniques for describing information transfer in individuals and groups that can be fruitfully applied to understand memory maintenance and cultural transmission. The proposed project, if successful, will create a software platform for running experiments on personal and cultural memory online, thereby enabling others to conduct their own experiments using these paradigms. Additionally, the public looks toward memory research with the hope that it will one day provide techniques to help them remember more. Better understanding the processes that affect maintenance of personal and collective memories will reveal the conditions under which memories and cultural innovations are best maintained and possible techniques for improving them. Any progress towards this goal would provide a major benefit to society. Planned dissemination of the results of the proposed project is aimed at increasing public understanding of science through visualization of cultural memory dynamics. The Fellow has a history of public engagement and activities supporting inclusion and broadening participation, and continues to be engaged in these activities. |
0.915 |
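One classic paradigm for studying how memories change in transmission is serial reproduction, in which content is passed from learner to learner. A toy simulation (the blend-with-prior recall model and every parameter below are invented for illustration) shows how transmitted content drifts toward learners' prior expectations:

```python
import random

def transmission_chain(value, generations, prior_mean=0.0,
                       reliance_on_memory=0.8, noise=0.1, seed=0):
    """Serial reproduction: each learner recalls a blend of what they
    observed and their prior expectation, plus noise, then transmits it."""
    rng = random.Random(seed)
    history = [value]
    for _ in range(generations):
        value = (reliance_on_memory * value
                 + (1 - reliance_on_memory) * prior_mean
                 + rng.gauss(0.0, noise))
        history.append(value)
    return history

# An initially extreme value regresses toward the shared prior over
# repeated retellings.
chain = transmission_chain(value=10.0, generations=40)
```

Under this kind of model, what survives many rounds of transmission reflects the learners' priors as much as the original content, which is one reason transmission chains are informative about memory.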
2014 — 2017 | Griffiths, Thomas; Rafferty, Anna (co-PI) | N/A |
Diagnosing Misconceptions About Algebra Using Bayesian Inverse Reinforcement Learning @ University of California-Berkeley This project, to be conducted by researchers at the University of California, Berkeley, and Carleton College, will develop an online math tutor to help high school and college students learn algebra. The tutor will diagnose students' misconceptions about algebra by asking them to solve a series of math problems. The website will be made available to students anywhere, making it possible to collect large amounts of data on algebra problem solving that will be used to refine the technological approach, develop computational models of student learning, optimize the design of tests, and identify effective strategies for online learning and teaching. This project will advance the work of the REAL (Research on Education and Learning) program in studying the cognitive basis of STEM (science, technology, engineering, and mathematics) learning, as well as the Cyberlearning program in discovering how to design and effectively use learning technologies of the future. |
0.915 |
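As a rough illustration of diagnosing misconceptions from answers: the sketch below uses a much simpler Bayesian rule-inversion model than the Bayesian inverse reinforcement learning named in the abstract. The candidate rules, slip probability, and problems are all invented for illustration.

```python
RULES = {
    "correct":    lambda a, b: b - a,   # solves x + a = b as x = b - a
    "sign_error": lambda a, b: b + a,   # misconception: x = b + a
}

def diagnose(observations, p_slip=0.1):
    """P(rule | observed answers) under a uniform prior over rules."""
    post = {rule: 1.0 / len(RULES) for rule in RULES}
    for (a, b), answer in observations:
        for rule, solve in RULES.items():
            # The answer matches the rule's prediction unless the
            # student "slips" with probability p_slip.
            post[rule] *= (1 - p_slip) if solve(a, b) == answer else p_slip
    z = sum(post.values())
    return {rule: p / z for rule, p in post.items()}

# A student answers x + 3 = 7 with 10 and x + 2 = 9 with 11:
# both answers are consistent with the sign-error rule.
post = diagnose([((3, 7), 10), ((2, 9), 11)])
```

The same inversion logic, applied to richer models of step-by-step problem solving, lets a tutor infer which misconception best explains a student's observed work.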
2015 — 2020 | Griffiths, Thomas | N/A |
Testing Evolutionary Hypotheses Through Large-Scale Behavioral Simulations @ University of California-Berkeley Along with the human brain, human cognition is a product of evolution. Its complexities, both individual and collective, include sharing information, innovating, and developing complex technologies, institutions, and norms. Why and how these aspects of human cognition and behavior came to be is a question that crosses many disciplines, including biology, psychology, linguistics, archaeology, and anthropology. It is also an important one, as understanding how various aspects of human cognition and behavior came to be what they are could have implications for engineering artificially intelligent systems, enhancing and augmenting human capabilities, and improving societal conditions. |
0.915 |
2016 — 2021 | Seshia, Sanjit; Griffiths, Thomas; Tomlin, Claire (co-PI); Sastry, S. Shankar (co-PI); Bajcsy, Ruzena (co-PI) | N/A |
@ University of California-Berkeley This NSF Cyber-Physical Systems (CPS) Frontier project "Verified Human Interfaces, Control, and Learning for Semi-Autonomous Systems (VeHICaL)" is developing the foundations of verified co-design of interfaces and control for human cyber-physical systems (h-CPS) --- cyber-physical systems that operate in concert with human operators. VeHICaL aims to bring a formal approach to designing both interfaces and control for h-CPS, with provable guarantees. |
0.915 |
2017 — 2020 | Griffiths, Thomas | N/A |
Ri: Small: Compcog: Leveraging Deep Neural Networks For Understanding Human Cognition @ University of California-Berkeley The last few years have seen significant breakthroughs in artificial intelligence and machine learning, resulting in systems that approach or even exceed human performance in interpreting pictures and words. This project explores the implications of these breakthroughs for understanding how the human mind works. Focusing on artificial neural networks, a key technology behind many recent breakthroughs that is capable of discovering novel representations for complex stimuli, the project has two goals. First, assessing the degree of correspondence between human and machine learning by examining whether the pictures or words that are similar in the representations discovered by neural network models are also judged to be similar by people. Second, developing methods for increasing this correspondence, with the goal of being able to use neural network representations to generate good predictions about how people learn and form categories using real images or text. |
0.915 |
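The correspondence test described above, checking whether items that are similar in a network's representation are also judged similar by people, can be sketched with cosine similarity and rank correlation. The "network representations" and "human" ratings below are invented toy numbers, not data from the project.

```python
import math

def cosine(u, v):
    """Similarity of two representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(xs, ys):
    """Rank correlation between model and human similarity judgments."""
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy representation vectors for four stimuli, plus invented human
# similarity ratings for the same pairs.
reps = {"cat": [1, 0.9, 0], "dog": [0.9, 1, 0],
        "car": [0, 0.1, 1], "bus": [0, 0, 1]}
pairs = [("cat", "dog"), ("cat", "car"), ("dog", "bus"), ("car", "bus")]
model_sim = [cosine(reps[a], reps[b]) for a, b in pairs]
human_sim = [0.95, 0.10, 0.05, 0.80]
rho = spearman(model_sim, human_sim)
```

A high rank correlation indicates that the ordering of pair similarities in the network's representation tracks the ordering in human judgments, which is the correspondence the project proposes to measure and then improve.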