2003 — 2006 | Balasubramanian, Vijay
Time, Space and Information @ University of Pennsylvania
The proposed research explores novel connections between cosmology and new physics beyond the Standard Model. The PI plans to use the Cosmic Microwave Background as a probe of new physics near the scale of inflation, which could leave a small but detectable imprint in the CMB anisotropy. The PI proposes a new explanation for the Type Ia supernova observations: the supernovae appear fainter because of dynamical conversion of photons into axions, without cosmic acceleration. Instead of assuming dark energy, this explanation predicts a new particle, an ultralight axion. The PI also proposes to investigate whether the dynamics of extra dimensions can generate the primordial seed for extra-galactic magnetic fields, and intends to apply techniques from statistical physics to understand the transmission of information in neurons.
2004 — 2009 | Balasubramanian, Vijay
Us-Netherlands Cooperative Research: String Theory and Cosmological Spacetimes @ University of Pennsylvania
Award 0443607 (Balasubramanian)
This three-year award supports US-Dutch cooperative research on string theory and cosmological spacetimes involving Vijay Balasubramanian and J. de Boer at the University of Amsterdam, The Netherlands. The project will address three inter-related issues: a) the physics of the cosmological constant and the origin of cosmology from string theory, b) the quantum mechanical origin and nature of gravitational thermodynamics and singularity resolution in cosmological and black hole settings, and c) how time-dependent universes are described in string theory.
The Institute for Theoretical Physics at the University of Amsterdam is currently one of the premier institutions in Europe doing string theory. The Dutch team has strengths in unstable systems with open string tachyons, cosmological singularities in string theory, the physics of de Sitter space, the AdS/CFT correspondence, the construction of a holographic dual description of Minkowski space, brane destabilization in highly curved backgrounds, conformal field theory, non-commutative superspace, and supersymmetric gauge theories. These strengths complement those of the US team.
2004 — 2009 | Balasubramanian, Vijay
What the Retina Might Know About Natural Scenes @ University of Pennsylvania
Biological organisms collect information from the natural environment that is necessary for their survival. A striking fact about the design of the neural systems responsible for collecting this data is their massive parallelism. For example, rather than using a few general-purpose cables with a suitably complicated code, the retina expresses at least 15 kinds of parallel information channels, each transmitting a different kind of behaviorally relevant information to the brain. A prototypical example is the segregation of ON and OFF pathways in the visual system, which separately process bright and dark features of a scene. The parallel channels are strikingly heterogeneous, spanning a 50-fold range in nerve fiber diameter and a 10-fold range in voltage spike firing rate. What determines this choice of massively parallel design, and the particular channels that are expressed? This project will explore a basic hypothesis: parallel channels exist to minimize the metabolic and spatial costs of extracting behaviorally relevant information from natural stimuli and transmitting it to the brain. To accomplish this goal, the PI intends to 1) examine the timing precision, reproducibility, information rate, and information per spike encoded by different retinal neurons, 2) study how neuronal circuit structure relates to the structure of natural images, and 3) measure the spatial and metabolic costs of different channels and examine how parallelism affects information transmission. The project intends to develop new theory and analysis techniques to study spatio-temporal constraints in neural coding, adaptation to changing stimulus statistics, and the biophysics of why large, expensive cells appear to be needed to transmit information at high average rates.
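Since the abstract centers on information rates and information per spike, a minimal sketch may help make the quantity concrete. This is our illustration, not the project's actual analysis; it uses the standard single-spike information formula I = mean of (r/rbar) log2(r/rbar), where r(t) is a time-varying firing rate and rbar its mean, and the function name is our own.

```python
import numpy as np

def bits_per_spike(rate):
    """Single-spike information (bits) from a time-varying firing rate."""
    rbar = rate.mean()
    ratio = rate / rbar
    # 0 * log(0) is taken as 0 for silent bins
    safe = np.where(ratio > 0, ratio, 1.0)
    return float(np.mean(ratio * np.log2(safe)))

# A neuron active in exactly half of the time bins carries 1 bit per spike.
rate = np.array([0.0, 20.0] * 500)  # firing rate (Hz) in alternating bins
info = bits_per_spike(rate)
print(info)  # → 1.0
```

A perfectly periodic rate is unrealistic, of course; the same function applies unchanged to a measured peri-stimulus time histogram.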
This project, joining theoretical physics and experimental biology, will have a broad inter-disciplinary impact by: (a) training physics graduate students in the problems and techniques of neuroscience, and (b) transferring analytical and theoretical tools of physics to neuroscientists. The PI is also involved in encouraging such synergy between physics and biology by organizing joint physics-neuroscience workshops at Penn and at the Kavli Institute for Theoretical Physics.
2009 — 2014 | Nelson, Philip; Balasubramanian, Vijay
Adaptation, Learning and Decision Making in Biological Networks @ University of Pennsylvania
All living organisms respond adaptively to the opportunities and constraints presented by their fluctuating physical environment. For example, (1) a bacterium that survives attack by antibiotic drugs can flourish where others fail; (2) an eye that adapts to the overall level of illumination can provide vision either by night or day, enhancing an organism's prospects for survival; (3) a decision-making system, like the brain, that integrates prior experience can make better decisions about what to do with new incoming sensory information. These three examples cover a range of levels of biological organization, and have traditionally been studied by specialists who have little contact with the other levels. However, living organisms implement adaptation, learning, and decision strategies at all levels of organization, subject to some overarching rules from probability theory. Thus the study of behavior at a single level can improve the understanding at other levels. The proposed research will analyze experiments done at all three of these levels in a common framework of information theory. New techniques will be developed to extract meaning from experimental data. These tools will be used to assess the extent to which single-cell organisms, the retina, and the brain all implement common (or related) decision strategies. These new tools will be useful to researchers in many other fields, and they will be disseminated broadly. The project includes training for several graduate, undergraduate, and postdoctoral students. The project's investigators will perform a number of outreach activities, most notably the creation of an interdisciplinary undergraduate textbook addressing the newly emerging field of systems biology for students in science, technology, engineering, and math fields.
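The "overarching rules from probability theory" mentioned above boil down to Bayes' rule. A toy sketch (our illustration; the numbers and names are hypothetical) shows how integrating more prior evidence sharpens a decision between two noisy sources:

```python
import numpy as np

def posterior(observations, p1=0.4, p2=0.6, prior=(0.5, 0.5)):
    """Posterior over two hypotheses (coins with heads-probability p1 or p2)
    after a sequence of binary observations, via Bayes' rule in log space."""
    logp = np.log(np.array(prior, dtype=float))
    for x in observations:
        logp += np.log(np.array([p1 if x else 1.0 - p1,
                                 p2 if x else 1.0 - p2]))
    p = np.exp(logp - logp.max())  # subtract max for numerical stability
    return p / p.sum()

few = posterior([1] * 3)    # little evidence: posterior still uncertain
many = posterior([1] * 30)  # more evidence: posterior nearly certain
print(few[1] < many[1])  # → True
```

The same update, iterated, is the skeleton of the sequential decision models the proposal applies to bacteria, the retina, and the brain alike.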
2011 — 2016 | Balasubramanian, Vijay
Neural Population Coding in the Brain @ University of Pennsylvania
This project explores how information processing by neural circuits is organized to use the resources of the brain efficiently. The proposed theoretical studies apply fundamentally new approaches to analyzing the organization of cortical maps, by investigating how emerging principles of efficient design pertain to the computational mechanisms employed by the central brain. One aim studies how the "place map" in the hippocampus (where individual cells are tuned to fire in particular locations of an environment) should be organized to efficiently support general goal-directed navigation. A second aim studies how the "shape map" in Inferotemporal Cortex (where individual cells are tuned to fire in response to particular visual shapes) should be organized to efficiently support shape perception, given the distribution of shapes in natural visual scenes. These theoretical studies will lead to directly testable predictions of the distribution of tuning curves in cortical area IT and hippocampus. In this way, the theory will provide a lever for further experimental exploration of the architecture of form vision and spatial navigation in the brain. Population codes also involve interactions between the different neurons, but techniques are not yet available to comprehensively study these interactions in cortex. Thus, a third aim uses the retina (a piece of the central brain that has projected out into the eye) as a model system to theoretically and experimentally ask two basic, as yet unanswered questions: (a) Do neural networks adapt their interactions to stimulus statistics and noise as predicted by optimization theory? (b) Is noise, as measured from single neurons, simply a mis-reading of correlated activity? By asking and answering these questions, this project will also explain key aspects of how the retina prepares visual input for central processing.
Knowledge of how retinal circuits respond to natural and synthetic stimuli will be useful in designing effective prosthetic devices.
This project strengthens the research connections between disciplines by bringing together analytical and theoretical methods from physics and machine learning with experimental techniques from neuroscience. Students and postdocs who thus develop proficiency with both biological and quantitative physical techniques will be better able to cope with scientific and industrial challenges of coming decades. The educational component of this proposal also addresses this national need directly by developing pedagogical materials for a course on "Theoretical and Computational Neuroscience". The PI will give presentations to K-8 and high school students and to the general public with a view to broadening public knowledge of the field. Outreach to historically disadvantaged communities will be carried out through established programs at Penn. Finally, the PI is active in organizing lecture series and conferences that engage physicists to work within quantitative systems neuroscience.
2016 — 2017 | Balasubramanian, Vijay
Molecular Co-Evolution: Lessons From Pathogen-Immune System Interactions @ University of Pennsylvania
The workshop "Molecular coevolution: lessons from pathogen-immune system interactions" will be held on 11-12 April 2016 at the Princeton Center for Theoretical Sciences (PCTS). It will bring together theoreticians and experimentalists who study adaptive immunity in vertebrates or in bacteria. Adaptive mechanisms for immunity have evolved to mount a flexible and diverse response that counteracts the rapid evolution of pathogens, enhancing organismal survival. The participants of this workshop work at the cutting edge of research into such immune dynamics. The goal is to foster new collaborations that build and test predictive approaches to the coevolution of pathogens and the immune system. Predictive modeling is an essential component of understanding the fundamental science of the immune system, and such basic science advances will eventually enable advances in medicine. Close to half of the participants in this workshop are women, helping to increase the participation of women in STEM fields.
The immune systems of vertebrates and of bacteria have evolved remarkable mechanisms for adapting to a diverse and constantly changing pathogenic environment. In vertebrates, new receptors are constantly generated by genomic recombination, and those that successfully detect threats are selectively amplified to protect the host. Some of the stimulated receptors are retained to form a memory of past infections, and thus to mount a rapid response against future infections of a similar type. This process of affinity maturation in an adaptive immune system constitutes a Darwinian evolution of the immune cells that occurs during the lifetime of an organism. By contrast, in some bacteria and archaea, immunity is directly acquired from interactions with the environment, and is heritable by future generations. These bacteria use the CRISPR-Cas mechanism to acquire specific genomic sequences from attacking phages (viruses that attack bacteria), and use these sequences to target future invaders. In both cases, the rapid turnover of pathogens to escape the adaptive immune response creates an evolutionary arms race at the molecular level. Understanding the complex dynamics of these adaptive immune systems in response to coevolving pathogens is only possible through a joint theoretical and experimental effort. This workshop will focus on a number of questions at this interface, including: (i) what does it mean for an immune repertoire to be "well-adapted"? (ii) how can the actual dynamics of the adaptive system reach such a well-adapted state? (iii) how should, and how does, the adaptive system react to an evolving pathogenic environment? and (iv) what are the molecular signatures of coevolution between viruses and the immune system, and how can we detect them in biological data? This workshop will bring together many of the leading experts on these topics.
2018 — 2021 | Balasubramanian, Vijay; Cohen, Yale E
T32 Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas.
Cross-Disciplinary Training in Computational Approaches to the Neuroscience of Audition and Communication @ University of Pennsylvania
Project Summary

The overall objective of this training program is to identify, motivate, and train the next generation of neuroscientists in Computational Approaches to the Neuroscience of Audition and Communication (CANAC). This objective maps elegantly onto the 2017-2021 NIDCD Strategic Plan and its Priority Areas, which aim to understand the neural basis of hearing and communication at different scales of analysis and in real-world listening environments. These Priority Areas require not only rigorous experimental manipulations and data collection but also coherent computational theory to understand the data and to make testable predictions for future science. As such, we propose a T32 training program to develop the next generation of scientists grounded in the experimental neuroscience of auditory and communication systems, while also thoroughly trained and versed in theory and computation. A unique aspect of this training proposal is its integrative philosophy, which leverages a highly collaborative and cross-disciplinary approach to science fostered by faculty on the Penn campus: students will master techniques from diverse traditional fields to become independent investigators vested with skills in both computational and experimental neuroscience. Our program curriculum includes core and elective courses designed to achieve this breadth of knowledge and is consolidated by suggested research laboratory rotations to be taken by interested first- and second-year predoctoral students from associated graduate groups. Upon successful completion of a preliminary exam at the end of the second year, interested students will apply formally to our program, based on a written statement of interests and plans, a thesis proposal, grades, and letters of recommendation. Accepted trainees will receive two years of funding for PhD-thesis work and individual advising on current training options, funding opportunities, and future career plans.
Students will receive cross-disciplinary training: they will be co-mentored by two faculty members, one whose expertise is computational and another whose expertise is in the experimental neuroscience of auditory and communication systems. Additionally, because of the direct translational and clinical importance of audition and communication, clinical faculty will also serve as members of the trainees' thesis committees. We will instruct all of our trainees in the responsible conduct of research, and will continue efforts to enhance the diversity of our applicants via targeted recruitment, broad advertising, and dissemination of program outputs. We have devised a sophisticated evaluation team to track progress and outcomes, and plan a comprehensive training program for predoctoral trainees, including journal clubs, seminar series, and an annual retreat. Together, these activities comprise an integrative, directed training program that will develop a talented and diverse pool of students to become long-term leaders in the field of auditory and communication neuroscience.
2019 — 2021 | Balasubramanian, Vijay; Cohen, Yale E
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Coincidence and Continuity: Uncovering the Neural Basis of Auditory Object Perception @ University of Pennsylvania
SUMMARY

Auditory objects are the foundational building blocks of our auditory-perceptual world. They are formed, in part, by the brain's ability to extract and organize spectral and temporal regularities in the acoustic environment; this ability is what allows a person to hear a friend's voice amongst the noise of a crowded restaurant. In many cases, temporal regularities span multiple frequency channels, suggesting that the brain tracks temporally correlated neuronal activity across frequency channels and uses this activity to form auditory objects and organize the auditory environment. Despite its clear importance to auditory perception, there is little direct evidence for the hypothesis that temporal regularities are encoded as temporally correlated activity and that this activity can guide behavior. To fill this gap, we combine rigorous psychophysics with high-density neuronal recordings and computational theory to identify the interaction of temporal regularities with dynamic network structures and perception. Thus, the overall goal of this proposal is to identify the mesoscopic circuits of the auditory cortical hierarchy that learn the temporal regularities (i.e., coincidence and continuity) of the environment, and how neuronal representations of these regularities contribute to two key components of auditory perception: figure-ground segregation and perceptual invariance, respectively. In Aim 1, we posit that figure-ground segregation is facilitated by the dynamic imprinting into cortical circuits of instantaneous correlations (i.e., temporal coincidence) across the frequency bands of an acoustic target. We therefore test whether tone bursts with synchronous onsets increase the intrinsic noise correlations of cortical neurons, which, in turn, facilitate a listener's ability to hear a figure stimulus amongst a noisy ground stimulus.
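The "noise correlations" of Aim 1 have a standard operational definition: the trial-to-trial correlation of two neurons' spike counts across repeats of an identical stimulus. A small sketch with hypothetical numbers (our illustration, not the project's data or analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)

def noise_correlation(counts_a, counts_b):
    """Pearson correlation of two neurons' spike counts across repeated trials."""
    return float(np.corrcoef(counts_a, counts_b)[0, 1])

trials = 2000
shared = rng.normal(0.0, 1.0, trials)             # common input fluctuation
a = 10.0 + shared + rng.normal(0.0, 1.0, trials)  # neuron A's counts per trial
b = 12.0 + shared + rng.normal(0.0, 1.0, trials)  # neuron B's counts per trial
r = noise_correlation(a, b)
print(round(r, 1))  # the shared fluctuation induces a correlation near 0.5
```

Aim 1 predicts, in this language, that r measured between cortical neurons increases when tone bursts across frequency bands have synchronous onsets.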
In Aim 2, we hypothesize that stimulus invariances are learned from smooth (i.e., temporally continuous) changes in the spectrotemporal structure of auditory stimuli: the brain interprets temporally continuous variations in an auditory stimulus as natural transformations of underlying auditory objects, driving hierarchical learning of invariant perceptual representations. Individually and collectively, the Aims provide valuable, quantitative insights into auditory perception and its underlying neuronal mechanisms. The PIs are uniquely qualified to conduct this research, with complementary expertise in psychophysics, population neuronal recordings, and computational/theoretical neuroscience.
2019 — 2020 | Balasubramanian, Vijay; Gold, Joshua I
R01
Mental, Measurement, and Model Complexity in Neuroscience @ University of Pennsylvania
PROJECT SUMMARY

Neuroscience is producing increasingly complex data sets, including measures and manipulations of sub-cellular, cellular, and multi-cellular mechanisms operating over multiple timescales and in the context of different behaviors and task conditions. These data sets pose several fundamental challenges. First, for a given data set, what are the relevant spatial, temporal, and computational scales at which the underlying information-processing dynamics are best understood? Second, what are the best ways to design and select models to account for these dynamics, given the inevitably limited, noisy, and uneven spatial and temporal sampling used to collect the data? Third, what can increasingly complex data sets, collected under increasingly complex conditions, tell us about how the brain itself processes complex information? The goal of this project is to develop and disseminate new, theoretically grounded methods to help researchers overcome these challenges. Our primary hypothesis is that resolving, modeling, and interpreting the relevant information-processing dynamics in complex data sets depends critically on approaches built upon an understanding of the notion of complexity itself. A key insight driving this proposal is that definitions of complexity that come from different fields, often with different interpretations, in fact have a common mathematical foundation. This common foundation implies that different approaches, from direct analyses of empirical data to model fitting, can extract statistical features related to computational complexity that can be compared directly to each other and interpreted in the context of ideal-observer benchmarks.
Starting with this idea, we will pursue three specific aims: 1) establish a common theoretical foundation for analyzing both data and model complexity; 2) develop practical, complexity-based tools for data analysis and model selection; and 3) establish the usefulness of complexity-based metrics for understanding how the brain processes complex information. Together, these Aims provide new theoretical and practical tools for understanding how the brain integrates information across large temporal and spatial scales, using formal, universal definitions of complexity to facilitate the analysis and interpretation of complex neural and behavioral data sets.
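The idea that model selection must weigh fit against complexity can be illustrated with a standard complexity-penalized criterion. The sketch below uses BIC, a widely used special case, not the project's own complexity metrics, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a quadratic trend plus Gaussian noise.
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 0.5 * x + 2.0 * x**2 + rng.normal(0.0, 0.3, x.size)

def bic(degree):
    """BIC = n*log(RSS/n) + k*log(n): goodness of fit plus a complexity penalty."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    return x.size * np.log(rss / x.size) + (degree + 1) * np.log(x.size)

# The penalty lets the true (quadratic) model beat both a too-simple
# linear fit and a too-flexible degree-7 fit.
print(bic(2) < bic(1), bic(2) < bic(7))
```

The project's contribution, as described above, is to place such penalties, ideal-observer benchmarks, and direct data-complexity measures on a single mathematical footing rather than to apply any one off-the-shelf criterion.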
2020 — 2021 | Balasubramanian, Vijay; Derdikman, Dori
R01
CRCNS: US-Israel - The Egocentric-Allocentric Transformation of the Cognitive Map @ University of Pennsylvania
Animals have the striking ability to know where they are, and to plan where to go and how to get there. These abilities are likely based on a cognitive map, the brain's internal representation of space. For 50 years we have known that hippocampal place cells are a component of the cognitive map, responding when an animal is in specific locations. We also know about other components of the map, e.g., grid cells, head-direction cells, and border cells. But we do not understand how the responses of such cells are generated from sensory experience. One puzzle is that sensory inputs are "egocentric" (centered and oriented in relation to the individual), whereas the cognitive map is "allocentric" (centered and oriented in relation to an absolute reference frame in the world). This raises a key question: how does the brain transform egocentric reference frames into allocentric ones to guide behavior? We focus on the part of the cognitive map representing boundaries. Boundaries are experienced egocentrically by animals, but in the medial entorhinal cortex (MEC) and the subicular complex, borders are represented by allocentric boundary cells (ABCs). If ABCs can be generated from egocentric responses in upstream areas, their allocentricity could be propagated to the rest of the cognitive map via synaptic interactions. Recent work shows that the postrhinal cortex (POR), a principal area projecting to the MEC, contains cells with egocentric responses that may encode boundaries. In Aim 1, we propose that these are Egocentric Boundary Cells (EBCs) that efficiently encode the orientations of, and distances to, boundary segments as subjectively experienced during navigation. We will test this idea by recording egocentric POR responses in environments of varying complexity, measuring the tuning of responses to spatial boundaries, and comparing to the predictions of efficient coding theory.
In Aim 2, we further propose a mechanism whereby EBC responses in POR are conjunctively and hierarchically combined with head-direction responses through Hebbian plasticity in the MEC to produce ABC responses. We will test this mechanism through environmental manipulations and confusion experiments combined with neural recordings, for which theoretical models will provide predictions. We will also perform anatomical studies and inactivation experiments to test how components of the network connect, and how functionality is modified when parts of the network are inactivated. Our approach will achieve a significant milestone: uncovering the circuits, brain areas, and mechanisms connecting sensory experience to the generation of the brain's cognitive map, thus informing clinical approaches to deficits in navigation and episodic memory.

RELEVANCE: This work will develop a systems-level understanding of circuits across brain areas that underpin spatial cognition, our ability to know where we are and to plan where we go. We must understand how the brain solves such spatial problems to treat impairment of the ability to navigate, a common deficit in patients with early-stage dementia or temporal lobe trauma. As spatial cognition is closely tied to episodic memory and abstract navigation, this work will also help guide clinical approaches to impairment of these capacities.
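At its core, the egocentric-to-allocentric transformation in Aim 2 amounts to combining an egocentric bearing with head direction. A toy sketch (our illustration, not the proposal's circuit model; all tuning parameters are hypothetical):

```python
import numpy as np

def abc_response(wall_allocentric, head_direction, preferred=90.0, width=20.0):
    """Response of a model allocentric boundary cell built from an
    egocentric boundary signal conjoined with head direction (degrees)."""
    ego = (wall_allocentric - head_direction) % 360.0  # what the animal senses
    allo = (ego + head_direction) % 360.0              # conjunction with HD
    d = abs(allo - preferred)
    d = min(d, 360.0 - d)                              # circular distance
    return float(np.exp(-(d / width) ** 2))            # Gaussian tuning

# A wall due north (bearing 90 deg) drives the unit identically at any heading,
# because the egocentric bearing and head direction shift in opposite ways.
responses = [abc_response(90.0, hd) for hd in (0.0, 45.0, 270.0)]
print(responses)  # → [1.0, 1.0, 1.0]
```

The proposal's Hebbian mechanism can be read as learning exactly this sum: conjunctive EBC-by-head-direction units whose egocentric and heading tunings add to the same allocentric bearing wire onto a common downstream cell.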
2022 — 2026 | Balasubramanian, Vijay; Chaudhari, Pratik
Collaborative Research: RI: Medium: MoDL: Occam's Razor in Deep and Physical Learning @ University of Pennsylvania
Deep neural networks (DNNs) are machine learning models inspired by how neurons perform computations in the animal brain. Over the past decade these models have led to revolutions in many fields of science and engineering, from predicting the next word on the keyboard of a mobile phone to selecting between cosmological models that best explain the structure of the universe. Although computer scientists have gained expertise in building these systems, they do not currently understand why they work and when they can fail. The research agenda focuses on developing theoretical tools that will build such a needed understanding for DNNs, with the hope that these same tools will also shed light on how learning occurs in biological systems, e.g., networks of neurons in the brain. The intellectual goal of the project is to identify common themes in the ways artificial and biological systems learn. The educational and outreach goals include (a) developing curricula at the intersection of computer science, neuroscience, and mathematics, (b) organizing tutorials on artificial intelligence for high-school students in Philadelphia, and (c) mentoring young researchers in the LatinX mathematical research community.

Training a deep network reduces to a high-dimensional, large-scale, and non-convex optimization problem; curiously enough, simple algorithms like stochastic gradient descent are not just sufficient but also seemingly necessary for training DNNs. Accepted statistical wisdom suggests that the larger the model class, the more likely the learned model will overfit the training data. Yet, DNNs generalize extremely well to new data.
This project seeks to unravel this apparent paradox. The central hypothesis is that DNNs succeed when the learning tasks exhibit a characteristic structure called “sloppiness”: for sloppy learning tasks, the Fisher Information Matrix of the learned network has eigenvalues that are distributed uniformly across a range that is exponentially large in the rank of the matrix. This project will investigate how this sloppy structure causes the training process to explore only a tiny subset of the function space, thereby yielding both rapid training and good generalization. It will characterize the shape of this tiny subset to understand why networks learn simple, low-dimensional functions for typical learning tasks. Connections will be made to biological and physical systems that learn through local learning rules and also exhibit such a sloppy structure (e.g., networks of neurons in the brain and elastic polymer networks such as proteins). The technical objective is to reveal universal principles of learning, namely a drive towards simplicity and the low-dimensional internal representations exhibited by both DNNs and physical learning networks.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
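The "sloppiness" signature, Fisher Information eigenvalues spread across many decades, can be seen in a classic toy model. The sketch below uses a sum of decaying exponentials (a standard sloppy-model example, not the project's deep networks), with Gaussian observation noise so the FIM is proportional to J^T J for the model Jacobian J:

```python
import numpy as np

# Toy "sloppy" model: y(t) = sum_i exp(-k_i * t), parameters k_i.
t = np.linspace(0.0, 5.0, 100)
rates = np.array([0.3, 1.0, 3.0, 10.0])

# Jacobian of the model outputs with respect to each rate parameter k_i:
# d/dk_i exp(-k_i t) = -t * exp(-k_i t)
J = np.stack([-t * np.exp(-k * t) for k in rates], axis=1)
fim = J.T @ J                        # Fisher Information (Gaussian noise, up to scale)
eigs = np.linalg.eigvalsh(fim)

spread = eigs.max() / eigs.min()
print(spread > 1e3)                  # eigenvalues span several orders of magnitude
```

The fast-decaying parameters barely move the outputs, giving "sloppy" directions with tiny eigenvalues, while slow rates give "stiff" directions; the project's hypothesis is that typical DNN learning tasks induce an analogous, exponentially broad spectrum.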