2014 — 2017
Adams, Ryan
Activity Code: N/A (no activity code was retrieved)
RI: Small: Parallel Methods for Large-Scale Probabilistic Inference
We are undergoing a revolution in data. We have grown accustomed to constant upheaval in computing -- quicker processors, bigger storage and faster networks -- but this century presents the new challenge of almost unlimited access to raw data. Whether from sensor networks, social computing, or high-throughput cell biology, we face a deluge of data about our world. Scientists, engineers, policymakers, and industrialists need to use these enormous floods of data to make better decisions. This research project is about providing foundations for tools to achieve these goals.

Simple models give only coarse understanding. The world is sophisticated and dynamic, providing rich information. Furthermore, representation of uncertainty is critical to discovering patterns in complex data. Not only are many natural processes intrinsically random, but our knowledge is always limited. The calculus of probability allows us to represent this uncertainty and design algorithms to act effectively in an unpredictable world.

The gold standard for probabilistic analysis is Markov chain Monte Carlo (MCMC), a way to identify hypotheses about the unobserved structure of the world that are consistent with observed data. It is a powerful and principled way to perform data analysis, but traditional MCMC methods do not map well onto modern computing environments. MCMC is a sequential procedure that cannot generally take advantage of the parallelism offered by multi-core desktops and laptops, cloud computing, and graphics processing units. This research will develop new methods for MCMC that are provably correct, but that take advantage of large-scale parallel computing.

There are a variety of broader impacts of this work. In addition to the core technical contributions, the project engages in deep scientific collaborations. New photovoltaic materials will lead to better solar cells and more sustainable energy production. New techniques for uncovering genetic regulatory mechanisms will lead to better understanding of disease. Quantitative models of mouse activity will give insight into the neural basis of behavior and provide a deeper understanding of brain disorders.
From a technical point of view, this work pursues two complementary approaches to large-scale Bayesian data analysis with MCMC: 1) a novel general-purpose framework for sharing information between parallel Markov chains for faster mixing, and 2) a new computational concept for speculative parallelization of individual Markov chains. These theoretical and practical explorations, combined with the release of associated open-source software, will yield more robust and scalable probabilistic modeling. The project will develop provably-correct foundations and efficient new algorithms for parallelizing Markov transition operators for posterior simulation. These operators will be used in three collaborations that are representative of the methodological demands of large-scale statistical inference: 1) predicting the efficiencies of novel organic photovoltaic materials, 2) discovering new genetic regulatory mechanisms, and 3) building quantitative neuroscientific models of mouse behavior. While the proposal focuses on the generalizable technical challenges of these problems, these collaborations provide compelling examples of how machine learning can be broadly transformative.
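The first idea -- parallel chains that occasionally share information -- can be sketched in a few lines. In this toy version, several random-walk Metropolis chains sample a standard normal target and periodically copy one chain's state into another; the target density, step size, and the (deliberately crude) sharing rule are invented for illustration and are not the project's actual algorithms, which must guarantee that the combined kernel remains valid.

```python
import math
import random

def log_target(x):
    # Unnormalized log-density of a standard normal "posterior".
    return -0.5 * x * x

def metropolis_step(x, rng, step=1.0):
    # One random-walk Metropolis update; leaves the target invariant.
    proposal = x + rng.gauss(0.0, step)
    if math.log(rng.random()) < log_target(proposal) - log_target(x):
        return proposal
    return x

def parallel_chains(n_chains=8, n_steps=3000, share_every=50, seed=0):
    rng = random.Random(seed)
    states = [rng.uniform(-5.0, 5.0) for _ in range(n_chains)]
    samples = []
    for t in range(n_steps):
        states = [metropolis_step(x, rng) for x in states]
        if t > 0 and t % share_every == 0:
            # Crude information sharing: copy one chain's state into
            # another. Real schemes need care to preserve correctness.
            i, j = rng.randrange(n_chains), rng.randrange(n_chains)
            states[i] = states[j]
        if t >= 500:                      # discard burn-in
            samples.extend(states)
    return samples

samples = parallel_chains()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Pooling draws across chains is what makes the extra cores pay off: each chain mixes at the usual serial rate, but the pooled estimate tightens with the number of chains.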
Finally, the project includes a significant outreach component, engaging local middle schoolers and involving underrepresented minorities in summer research.
2014 — 2018
Adams, Ryan (co-PI); Seltzer, Margo; Brooks, David (co-PI)
XPS: Full: CCA: Collaborative Research: Automatically Scalable Computation
For over thirty years, each generation of computers has been faster than the one that preceded it. This exponential scaling transformed the way we communicate, navigate, purchase, and conduct science. More recently, this dramatic growth in single-processor performance has stopped and has been replaced by new generations of computers with more processors; even the cell phones we carry contain several. Writing software that effectively leverages multiple processing elements is difficult, and rewriting the decades of accumulated software is both difficult and costly. This research takes a different approach -- rather than converting sequential software into parallel software, this project develops ways to store and reuse computation. Imagine computing only when computer time and energy are cheap and plentiful, storing that computation, and then using it later, when computation might be limited or expensive. The approach involves making informed predictions about computation likely to happen in the future, proactively executing likely computations in parallel with the actual computation, and then "jumping forward in time" if the actual execution arrives at any of the predicted computations that have already been completed. This research touches many areas within Computer Science: architecture, compilers, machine learning, systems, and theory. Additionally, exploiting massively parallel computation will produce immediate returns in multiple scientific fields that rely on computation.
The approach views execution as the movement of a system through the enormously high-dimensional state space defined by the registers and memory of a conventional single-threaded processor. It uses machine learning algorithms to observe execution patterns and make predictions about likely future states of the computation. Based on these predictions, the system launches potentially large numbers of speculative threads to execute forward from these likely states, while the actual computation proceeds serially. At strategically chosen points, the main computation queries the speculative executions to determine whether any of the completed computation is useful; if it is, the main thread uses the speculative computation to immediately begin execution where the speculative computation left off, achieving a speed-up over serial execution. This approach has the potential to be extremely scalable: the more cores, memory, and communication bandwidth available, the greater the potential for performance improvement. The approach also scales across programs -- if the program running today happens upon a state encountered by a program running yesterday, the program can reuse yesterday's computation. This project has the potential to break new ground for research in each of the many areas of Computer Science it touches.
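The "jump forward in time" idea can be caricatured in a few lines: treat the machine state as a hashable value, cache the state reached after a fixed burst of instructions, and let any later run skip a burst whose start state it has seen before. The toy transition function and ten-instruction burst size below are invented for illustration; the real system predicts states in an enormous register-and-memory space, not small integers.

```python
def step(state):
    # A toy deterministic "instruction": one transition on an integer state.
    return (3 * state + 1) % 1000

BURST = 10  # instructions per cached trajectory segment

def run(state, n_steps, cache):
    # Execute n_steps instructions, consulting a cache that maps a start
    # state to the state reached BURST instructions later. A cache hit
    # lets execution jump forward without re-executing the burst.
    done = 0
    while done < n_steps:
        remaining = n_steps - done
        if remaining >= BURST and state in cache:
            state = cache[state]          # reuse stored computation
            done += BURST
        else:
            start = state
            burst = min(BURST, remaining)
            for _ in range(burst):
                state = step(state)
            if burst == BURST:
                cache[start] = state      # store for future reuse
            done += burst
    return state

cache = {}
first = run(7, 100, cache)    # populates the cache
second = run(7, 100, cache)   # served largely from the cache
```

Because `step` is deterministic, the cached run and the instruction-by-instruction run reach exactly the same final state; the cache only changes how much work is re-executed, which is the property the project's speculative threads exploit at scale.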
2015 — 2016
Adams, Ryan Prescott; Datta, Sandeep R; Sabatini, Bernardo L
U01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Lagging or Leading? Linking Substantia Nigra Activity to Spontaneous Motor Sequences
DESCRIPTION (provided by applicant): Behaviors are sequences of actions that are executed in the proper order and correct setting to achieve a goal. Action sequences and their association with the specific environmental contexts in which they are beneficial can be hardwired, as in the case of innate behaviors, or learned and flexible, as in the case of adaptive responses to changing surroundings. The basal ganglia, a complex set of phylogenetically ancient subcortical nuclei, collect sensorimotor information from across the cortical mantle and project via output nuclei to thalamic structures that regulate action; this circuit organization suggests that the basal ganglia may play key roles in modulating ongoing patterns of action. Consistent with this possibility, neurological and psychiatric diseases that disrupt basal ganglia function also disrupt action selection, sequencing and execution. Furthermore, neural correlates have been identified within the basal ganglia that predict, accompany and lag different features of behavior. However, three key questions remain open about the relationship between basal ganglia activity and behavior. First, it is unclear whether the basal ganglia primarily encode behavioral sequences, the action components of behavioral sequences, or both. Second, because of the temporal diversity of task-related activity observed in the basal ganglia, it is not clear whether activity in specific populations of neurons is causal for behavior. Finally, because most research into basal ganglia function involves overtraining in operant tasks, it is not clear what core principles of action encoding govern basal ganglia function during spontaneously generated patterns of behavior like exploration.

Here we propose to take advantage of a novel 3D machine vision technology that uses Bayesian inference to classify spontaneous behavior on fast (e.g. neural) timescales to probe the causal relationships between neural activity in the basal ganglia and action. We will focus our analysis on the main output nucleus of the basal ganglia, the substantia nigra pars reticulata (SNpr). We will first seek to identify predictive neural correlates within the SNpr for action components and behavioral sequences by combining our behavioral analysis methods with dense electrical recordings, both during normal exploration and during the execution of innate approach and avoidance behaviors triggered by odor cues from foods, conspecifics and predators. We will then test the causal relationship between activity in these SNpr neurons and specific features of behavior by using closed-loop optogenetics to subtly alter global patterns of activity within SNpr neurons themselves. This work will shed light on the mechanisms used by the brain to create self-generated patterns of action, and yield important clues about how the links between neural activity and action are altered during disease.
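As a cartoon of the behavioral-classification step, the sketch below decodes a stream of discretized movement features into two behavioral "syllables" with a two-state hidden Markov model and Viterbi decoding. The states, probabilities, and features are entirely invented; the actual technology fits far richer autoregressive models to 3D pose data.

```python
import math

log = math.log

# Hypothetical two-state HMM: hidden states are behavioral "syllables"
# (0 = pause, 1 = dart); observations are discretized movement magnitudes
# (0 = still, 1 = moving). All probabilities below are made up.
prior = [log(0.5), log(0.5)]
trans = [[log(0.9), log(0.1)],   # sticky transitions between syllables
         [log(0.2), log(0.8)]]
emit  = [[log(0.8), log(0.2)],   # noisy emissions given the syllable
         [log(0.1), log(0.9)]]

def viterbi(obs):
    # Most probable syllable sequence given the observation stream.
    V = [[prior[s] + emit[s][obs[0]] for s in (0, 1)]]
    back = []
    for o in obs[1:]:
        row, ptr = [], []
        for s in (0, 1):
            prev = max((0, 1), key=lambda p: V[-1][p] + trans[p][s])
            row.append(V[-1][prev] + trans[prev][s] + emit[s][o])
            ptr.append(prev)
        V.append(row)
        back.append(ptr)
    state = max((0, 1), key=lambda s: V[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

The sticky transition probabilities are what segment behavior into runs of a single syllable rather than flickering with each noisy frame, which is the property needed to align behavioral labels with neural activity on fast timescales.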
2017 — 2020
Wei, Gu-Yeon (co-PI); Brooks, David; Adams, Ryan (co-PI)
CSR: Small: Virtualized Accelerators for Scalable, Composable Architectures
This project seeks to develop fundamental technologies to enable the next generation of computing systems that will power future ubiquitous devices such as smartphones, self-driving cars, and autonomous robots. The project develops novel tools and techniques at both the hardware and software layers of computer systems. It will also train new graduate engineers in architecting complex computing systems, modern software and hardware design methodologies, and cutting-edge machine learning techniques. All of these skillsets are in broad demand in US industry but have been underrepresented in STEM education.
Heterogeneous architectures comprising general-purpose processors, graphics processors, and hardware accelerators designed for specific computing tasks have been widely adopted in today's computing systems for both edge and cloud devices. Specialized computing blocks provide tremendous benefits in energy efficiency. However, a major challenge in the design of such systems is the loss of generality and flexibility, which has limited their adoption to a small set of application domains that do not often change. Increased flexibility could be unlocked if accelerators were built from smaller, dynamically composable blocks, but existing approaches are difficult to program and scale poorly. This project proposes a design flow to generate a templated System-on-Chip (SoC) with a composable accelerator system that can be physically instantiated for a range of computing platforms. Through a virtualization layer, collections of physical hardware blocks are exposed to software as virtual accelerators. To efficiently search the large design space of the SoC, new design space exploration techniques are under investigation.
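Design-space exploration can be illustrated with a deliberately tiny example: enumerate candidate SoC configurations, score each with a cost model, and keep the fastest design that fits an area budget. The parameters and the cost model below are invented; real flows search vastly larger spaces, typically with sampling or learned surrogates rather than exhaustive enumeration.

```python
# Toy SoC design space: number of accelerator tiles and scratchpad KB per
# tile. The area and latency models are hypothetical, for illustration only.
DESIGNS = [(tiles, spm) for tiles in (1, 2, 4, 8, 16)
                        for spm in (32, 64, 128, 256)]

def area_mm2(tiles, spm):
    return tiles * (1.0 + spm / 128.0)

def latency_us(tiles, spm):
    compute = 1000.0 / tiles          # parallel speedup across tiles
    spills = 400.0 / spm              # bigger scratchpads spill less
    return compute + spills * tiles   # synchronization cost grows w/ tiles

def explore(area_budget):
    # Exhaustive search of the (tiny) space for the fastest feasible design.
    feasible = [(latency_us(t, s), (t, s))
                for t, s in DESIGNS if area_mm2(t, s) <= area_budget]
    return min(feasible)[1]

best = explore(20.0)
```

Even this toy model shows the characteristic tension: the biggest tile count that fits the budget is not the fastest design, because synchronization and memory costs pull the optimum toward a balanced configuration.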
2017 — 2019
Wood, Robert (co-PI); Adams, Ryan (co-PI); Wei, Gu-Yeon; Brooks, David (co-PI); Kuindersma, Scott (co-PI)
S&AS: INT: RoboBees 2.0: Towards Autonomous Micro Air Vehicles
In 2009, a group of researchers from Harvard led an NSF Expeditions in Computing project to build a colony of flapping-wing robots, called RoboBees, motivated by the multidisciplinary challenges associated with building and controlling effective robotic insects. The research has been exciting, and it has captured the imagination of many, young and old, through numerous museum exhibits and outreach activities. The severe inherent constraints associated with building at-scale flying robotic insects required many innovations and new technologies at each step. For example, a new manufacturing process called pop-up MEMS was developed to enable mass production of small-scale, foldable devices. New electronics were developed to flap artificial insect-scale wings. A new small-scale computer chip (called the BrainSoC), connected to various sensors, was created to control the robot. The culmination of this work has been exciting demonstrations of RoboBees hovering and maneuvering within carefully controlled environments. The next phase of this work is to imbue these robots with machine intelligence and autonomy: RoboBee 2.0. The main objective of this project is to teach RoboBees to fly autonomously.
Over the past 10 years, while roboticists have been busily building small-scale robots, there has been a surge of activity in machine learning that has led to rapid advances in machine perception and control. For example, the recent success of deep learning can be attributed to the virtuous cycle of (i) more and higher-quality data; (ii) faster parallel computation; and (iii) more efficient learning algorithms. The time is ripe to combine these threads of research to develop machine-learning-enabled flight control and perception for RoboBees. This project brings together a multidisciplinary team of experts from different engineering backgrounds to build the next generation of RoboBees. The project seeks to push the envelope by targeting the RoboBee platform, which introduces flight dynamics and sensitivity requirements beyond the bleeding edge of what is possible using off-the-shelf components. This effort builds on the existing experimental RoboBee platform at Harvard, built with special onboard electronics, which will be used to record large volumes of flight data. These data can then feed exploration of machine-learning flight-control algorithms, beginning with simple hovering before tackling more challenging maneuvers such as obstacle avoidance and object tracking. Because hand-tuning conventional control algorithms is overly cumbersome, the focus will be on modern computing paradigms that can be taught rather than programmed. Development and demonstration of autonomous flight control based on deep learning for insect-scale flapping-wing robots will broadly impact microrobotics, machine learning, energy-efficient computing, and autonomous systems in general, extending the reach of autonomy to robotic platforms ranging from conventional vehicles to tiny robots of diverse configurations and applications.
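One concrete flavor of "taught rather than programmed" is behavioral cloning from logged flight data: fit a controller to reproduce an expert's commands. The sketch below regresses a thrust command onto altitude error with plain SGD; the expert law, gains, and noise level are all invented for illustration, and a real RoboBee controller would be a much richer model over the full vehicle state.

```python
import random

def make_flight_log(n=200, seed=1):
    # Synthetic stand-in for logged (state, expert command) pairs.
    rng = random.Random(seed)
    log = []
    for _ in range(n):
        err = rng.uniform(-1.0, 1.0)                     # altitude error
        thrust = 2.0 * err + 0.5 + rng.gauss(0.0, 0.01)  # expert's command
        log.append((err, thrust))
    return log

def clone_controller(log, lr=0.1, epochs=500):
    # Fit thrust = w * err + b by SGD on squared error, i.e. learn the
    # expert's control law directly from the recorded data.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for err, thrust in log:
            residual = (w * err + b) - thrust
            w -= lr * residual * err
            b -= lr * residual
    return w, b

w, b = clone_controller(make_flight_log())
```

The learned gains recover the expert's law up to the logging noise, which is why the proposed pipeline emphasizes recording large volumes of flight data before training.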