2009 |
Rokem, Ariel Shalom |
F31 Activity Code Description: To provide predoctoral individuals with supervised research training in specified health and health-related areas leading toward the research degree (e.g., Ph.D.). |
Neural Mechanisms of Perceptual Learning @ University of California Berkeley
DESCRIPTION (provided by applicant): The proposed research aims to elucidate the neural mechanisms of perceptual learning in the visual modality. Specifically, we aim to discover the loci of learning in the visual system and to describe the computational mechanisms whereby learning occurs. A significant body of research exists on the behavioral consequences of perceptual learning: typically, learning fails to generalize across various physical parameters of the stimulus. For example, if a perceptual discrimination is learned in one part of the visual field, the improvement does not transfer to other parts of the visual field. Thus, learning is assumed to stem from changes occurring in primary sensory mechanisms. However, there is a relative paucity of physiological evidence to explain these phenomena and the underlying neural mechanisms. We will use psychophysical methods to track the behavioral consequences of learning in two different visual discriminations, tapping different perceptual mechanisms. Then, we will use fMRI to measure learning-induced changes in the spatial extent of responses in visual cortical areas and in the tuning of large populations of cells in these areas. Because fMRI records simultaneously from multiple brain areas, it is well suited to tracking down the locus of learning in the visual system. Finally, we propose to explore the role of top-down modulation of perceptual learning by the cholinergic system. Animal studies show that activity in this system facilitates modality-specific and stimulus-specific perceptual learning. We will test the role of this system in human perceptual learning by administering a cholinesterase inhibitor commonly prescribed as treatment for Alzheimer's Disease (donepezil, trade name: Aricept) during training on a perceptual task. 
Perceptual training procedures have been suggested to provide health benefits in conditions as varied as dyslexia, amblyopia, congenital prosopagnosia and mild cognitive impairment in aging, but in most cases, the neural mechanisms underlying the training-induced improvements are unknown. Understanding the underpinnings of specific learning will allow the development of more effective clinical interventions in these and other conditions. Understanding the effects of cholinergic modulation on neural substrates of perceptual learning would shed light, not only on the functions of the cholinergic system in healthy individuals, but also on the role this system may have in cognitive disorders, such as Alzheimer's Disease.
|
0.91 |
2012 — 2014 |
Rokem, Ariel Shalom |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Neural Mechanisms of Texture Processing in Central and Peripheral Visual Field
The goal of this project is to characterize the neural substrate underlying differences between perception in the central and peripheral parts of the visual field. Important visual functions, such as reading, require central vision, and clinical conditions, such as macular degeneration, selectively affect this part of the visual field, conferring significant disability. We will focus our investigation on the perception of boundaries between different textures. These boundaries are used to parse images into distinct regions and objects, and their perception differs between central and peripheral vision. These differences are likely to result from the manner in which information is processed in different parts of the visual field and the way in which information is transmitted through the visual system, via the anatomical connections between different regions of the visual system. To study these differences, we will measure properties of the visual system in healthy human participants, using a combination of methods. We will use behavioral measurements to characterize the perception of texture boundaries in different parts of the visual field. We will use functional magnetic resonance imaging (fMRI) to measure activity in different regions of the visual system and detect activity related to the presence of texture boundaries. We will use diffusion-weighted MRI to characterize the anatomical connections between different parts of the visual system. Finally, we will combine the information from the anatomical and functional measurements and analyze the way in which information about different parts of the visual field is segregated and shared between parts of the visual system. 
Understanding the neural representation of different parts of the visual field in texture perception will benefit the development of novel treatments for patients with visual impairments affecting parts of the visual field, such as macular degeneration and disorders in which visual acuity in central vision is affected, such as amblyopia.
|
0.911 |
2015 — 2018 |
Rokem, Ariel; Howe, Bill; Lazowska, Edward |
N/A Activity Code Description: No activity code was retrieved. |
Bd Hubs: Collaborative Proposal: West: a Big Data Innovation Hub For the Western United States @ University of Washington
The Big Data Innovation Hub for the Western United States will join stakeholders from academia, industry, non-profit institutions and the community who share common challenges and innovative approaches related to the acquisition, storage, analysis and integration of large or "messy" data, commonly referred to as Big Data. The West's Innovation Hub (Hub) will serve 13 states with Montana, Colorado and New Mexico marking the eastern boundary. This project will develop the organizational and governance structures for the Hub, and initiate efforts toward defining spoke activities for subsequent phases of the data innovation hubs program.
The initial themes include Big Data technology, data-enabled scientific discovery and learning, managing natural resources and hazards, metro data science, and precision medicine. Partnerships fostered through the Hub will enable the use of Big Data to assess risks related to regional and long-term decisions. The Hub's structure will enable impact in later phases of the Hubs program that may range from data-driven models for managing natural resources to tools for integrating self-collected patient data for more precise care options. Through coordination activities that inspire the action of its members, the Hub has the potential to facilitate the improved flow of commercial technologies in ways that maximize competitiveness for member organizations, such as universities, and vice versa: the Hub has the potential to expand the impact of its members' technologies through greater adoption or via start-ups. The Hub will have impact by facilitating cross-discipline approaches to Big Data innovation and problem solving, influencing the next generation of thought leaders and data scientists. The partnerships enabled by the Hub will lead to professional certificate programs and student internships, creating a pipeline of graduates from partner institutions to impact corporations, public and governmental agencies, national labs, resource-planning agencies, and regulatory commissions.
Project URL: BDHub.SDSC.edu
|
1 |
2017 — 2020 |
Connolly, Andrew; Balazinska, Magdalena (co-PI); Juric, Mario (co-PI); Cheung, Alvin; Rokem, Ariel |
N/A Activity Code Description: No activity code was retrieved. |
Si2-Sse: An Ecosystem of Reusable Image Analytics Pipelines @ University of Washington
Astronomy has entered an era of massive data streams generated by telescopes and surveys that can scan tens of thousands of square degrees of the sky across many decades of the electromagnetic spectrum. The promise of these new experiments - characterizing the nature of dark energy and the composition of dark matter, discovering the most energetic events in the universe, tracking asteroids whose orbits may intersect with that of the Earth - will only be realized if we can address the challenge of how to process and analyze the tens of petabytes of images that these astronomical surveys will generate per year. With the increasing capacity for scientists to collect ever larger sets of data, often in the form of images, our potential for scientific discovery will soon be limited not by how we collect or store data, but rather by how we extract the knowledge that these data contain (e.g. how we account for noise inherent within the data, and understand when we have detected fundamentally new classes of interesting events or physical phenomena). This project will develop an open source, scalable framework for the analysis of large imaging data sets. It is designed to operate as a cloud service, incorporate new or legacy image processing algorithms seamlessly, support and optimize complex analysis workflows, and scale analyses to thousands of processors without the need for an individual user to develop custom solutions for specific computer platforms or architectures. This framework will be integrated with state-of-the-art image analysis algorithms developed for astronomical surveys to provide an image analytics platform that can be used by future telescopes and cameras and by the astronomical community as a whole. Beyond astronomy, the framework will be extended to enable scientists from the physical and life sciences who make use of imaging data (e.g. neuroscience, oceanography, biology, seismology) to focus their work on developing scientific algorithms and analyses rather than the infrastructure required to process massive data sets.
Over the last decade, there have been many advances in astronomical image analysis algorithms and techniques, driven by new surveys and experiments. The complexity of these techniques and of the systems that run them has, however, meant that the number of users who make use of these advances is small, typically restricted to the experiments themselves or to a small group of expert users. Because of this, the community as a whole does not benefit from the significant investment in image analytics for astronomy. In this project, the PIs address these issues by developing and deploying a scalable framework for the analysis of small and large imaging datasets. This cloud-based system will be able to incorporate new and legacy image processing algorithms, support and optimize complex analysis workflows, scale applications to thousands of processors without users needing to develop custom code for specific platforms, and support efficient sharing of algorithms and analysis results among users. It will enable state-of-the-art image analysis algorithms (e.g. those developed for surveys such as the Large Synoptic Survey Telescope; LSST) to be used by the broad astronomical community, and in so doing will leverage the tens of thousands of hours that have been invested in the development of these techniques. To accomplish this, the team will extract key data analysis functions from the LSST data analysis pipeline into a standalone library, independent of the LSST software stack and data access mechanisms. They will integrate this library with the Myria big data management system. Myria is an elastically scalable big data management system, developed at the University of Washington, that operates as a service in the Amazon cloud. 
Compared with other big data systems, Myria is especially attractive because it integrates PostgreSQL database instances within its storage layer and thus provides access to PostgreSQL's rich libraries of spatial functions, which are frequently used in astronomical data analysis pipelines. At the same time, it has rich support for new and legacy Python code and for complex analytics. By integrating the library of LSST image analytics functions with Myria, new image analytics pipelines will become significantly easier to write. The skeleton of the analysis pipeline will be expressed in the MyriaL declarative query language (i.e. SQL extended with constructs such as iteration). The core data processing functions will map directly to Python functions, enabling the reuse of legacy code and the easy addition of new functions. The resulting code will be amenable to optimization and efficient execution using the Myria service. In this way, the PIs intend to reduce barriers to adoption: users will be able to express their analyses in Python without worrying about how data and computation will be distributed in a cluster. The image analysis framework developed as part of this proposal will be made publicly available as open-source software. The PIs will use neuroscience as a use case to demonstrate how their system, developed for astronomy, can be deployed across multiple domains.
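The division of labor described above (a declarative pipeline skeleton whose stages call into plain Python functions) can be sketched in miniature. This is a purely illustrative toy in plain Python, not MyriaL syntax or the project's actual API; all function names here are hypothetical:

```python
# Toy sketch: a declarative pipeline "skeleton" whose stages map to
# ordinary Python functions, in the spirit of the MyriaL + Python-UDF
# split described above. All names are hypothetical.
from statistics import median, pstdev

def subtract_background(pixels):
    """Remove a flat background estimate (the median pixel value)."""
    bg = median(pixels)
    return [p - bg for p in pixels]

def detect_sources(pixels, threshold=3.0):
    """Return indices of pixels brighter than `threshold` standard deviations."""
    sigma = pstdev(pixels)
    return [i for i, p in enumerate(pixels) if p > threshold * sigma]

# The "skeleton": an ordered, declarative description of the analysis.
# Each stage names a plain Python function and its parameters, so legacy
# science code can be slotted in without rewriting the pipeline itself.
PIPELINE = [
    (subtract_background, {}),
    (detect_sources, {"threshold": 3.0}),
]

def run_pipeline(pixels, pipeline=PIPELINE):
    result = pixels
    for func, params in pipeline:
        result = func(result, **params)
    return result

# A flat image with one bright "source" at index 99:
print(run_pipeline([0.0] * 99 + [100.0]))  # [99]
```

In a real system the skeleton would be a declarative query that the engine optimizes and parallelizes, rather than a Python loop; the point is only that the pipeline structure and the per-stage science code are kept separate.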
This project is supported by the Office of Advanced Cyberinfrastructure in the Directorate for Computer & Information Science and Engineering, the Astronomical Sciences Division and Office of Multidisciplinary Activities in the Directorate of Mathematical and Physical Sciences.
|
1 |
2017 — 2021 |
Rokem, Ariel Shalom |
R25 Activity Code Description: For support to develop and/or implement a program as it relates to a category in one or more of the areas of education, information, training, technical assistance, coordination, or evaluation. |
Summer Institute in Neuroimaging and Data Science @ University of Washington
Project Summary/Abstract The study of the human brain with neuroimaging technologies is at the cusp of an exciting era of Big Data. Many data collection projects, such as the NIH-funded Human Connectome Project, have made large, high-quality datasets of human neuroimaging data freely available to researchers. These large data sets promise to provide important new insights about human brain structure and function, and to provide us the clues needed to address a variety of neurological and psychiatric disorders. However, neuroscience researchers still face substantial challenges in capitalizing on these data, because these Big Data require a different set of technical and theoretical tools than those required for analyzing traditional experimental data. These skills and ideas, collectively referred to as Data Science, include knowledge in computer science and software engineering, databases, machine learning and statistics, and data visualization. The Summer Institute in Data Science for Neuroimaging will combine instruction by experts in data science methodology with instruction by leading neuroimaging researchers who are applying data science to answer scientific questions about the human brain. In addition to lectures on the theoretical background of data science methodology and its application to neuroimaging, the course will emphasize experiential hands-on training in problem-solving tutorials, as well as project-based learning, in which the students will create small projects based on openly available datasets.
|
0.958 |
2018 — 2020 |
Harchaoui, Zaid; Fazel, Maryam; Arendt, Anthony; Aravkin, Aleksandr; Rokem, Ariel |
N/A Activity Code Description: No activity code was retrieved. |
Tripods+X:Edu: Foundational Training in Neuroscience and Geoscience Via Hackweeks @ University of Washington
Data-driven science and engineering requires close collaboration and coordination among researchers from different communities, including core sciences, statistics, and optimization. This project will build on and broaden the successful existing "hackweek" model to bring together participants from neuroscience and geoscience with experts in machine learning and optimization. The hackweeks will incorporate tutorials on core methods, hands-on sessions, and group activities designed to promote deeper understanding of both the data-driven scientific problems in neuroscience and geoscience and the fundamental methodologies that apply to these sciences, as well as closer collaboration between these communities.
In particular, the investigators plan to redesign geo-hackweek and neuro-hackweek, two events that have been held annually at the University of Washington by two of the PIs in recent years. Geo-hackweek will be redesigned to include discussion of geophysical data interpolation and denoising, geophysical inverse problems, and Gaussian process models, and to connect these to techniques in optimization, including sparse and low-rank models, stochastic optimization, and PDE-constrained optimization. Neuro-hackweek will be augmented to include tutorials on the use of optimal transport models and Wasserstein distances in the analysis of neuroimaging data. This project aims to
(1) Expose participants from domain sciences to foundational topics, so that they better understand data science tools, and in particular gain insight into how and when these algorithms work well (or do not); (2) Train participants to consider methods in the context of domain-specific problems, to identify domain-specific challenges, and to think critically about how to effectively leverage optimization and machine learning tools for specific problem classes; (3) Expose students with a foundations background to application domains, so that they understand practical challenges in the application of machine learning tools; (4) Generate pedagogical material that can be used in similar events; (5) Encourage collaborations between domain experts and experts on theory and methods.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2019 |
Rokem, Ariel Shalom |
RF1 Activity Code Description: To support a discrete, specific, circumscribed project to be performed by the named investigator(s) in an area representing specific interest and competencies based on the mission of the agency, using standard peer review criteria. This is the multi-year funded equivalent of the R01 but can be used also for multi-year funding of other research project grants such as R03, R21 as appropriate. |
A Data Science Toolbox For Analysis of Human Connectome Project Diffusion Mri @ University of Washington
Project Summary/Abstract The connections between different brain regions play an important role in normal brain function. This project proposes to create an end-to-end pipeline for analysis of human white matter connections using "tractometry" methods. In tractometry, tissue properties are estimated along the long-range connections between remote brain regions. The project will focus on the analysis of the Human Connectome Project diffusion MRI dataset, which provides one of the largest publicly available datasets of diffusion MRI from a sample of normal healthy individuals. Based on this dataset, we propose to create a normative distribution of tissue properties in the major white matter connections, to develop novel statistical methods that connect the properties of white matter connections to cognitive abilities, and to create visualization tools for further communication and exploration of the data. The tools created will initially be applied to the Human Connectome Project dataset, but will also be useful in smaller studies of specific populations and in other large-scale datasets, such as the ABCD study.
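As a toy illustration of the normative-distribution idea (not the project's actual statistical method; the function name is hypothetical), a tract profile from a new individual can be scored against a reference sample node by node:

```python
# Toy sketch: score a new subject's "tract profile" (a tissue property
# sampled at several nodes along a white-matter tract) against a
# normative reference sample, as a per-node z-score. Illustrative only.
from statistics import mean, pstdev

def normative_z_profile(reference_profiles, subject_profile):
    """Per-node z-scores of the subject against the reference sample.

    reference_profiles: equal-length lists, one per reference subject
    subject_profile:    list of the same length, from the new subject
    """
    z_scores = []
    for node, value in enumerate(subject_profile):
        ref = [profile[node] for profile in reference_profiles]
        mu, sigma = mean(ref), pstdev(ref)
        z_scores.append((value - mu) / sigma)
    return z_scores

# Three reference subjects, two nodes each; the new subject is typical
# at node 0 and well above the reference mean at node 1.
reference = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(normative_z_profile(reference, [3.0, 6.0]))
```

A real normative model would need a far larger reference sample and would typically account for covariates such as age; the sketch only shows the point-by-point comparison that makes tract profiles comparable across individuals.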
|
0.958 |
2019 — 2021 |
Balazinska, Magdalena; Pfaendtner, W. James; Beck, David (co-PI); Rokem, Ariel |
N/A Activity Code Description: No activity code was retrieved. |
Hdr: I-Dirse-Fw: Accelerating the Engineering Design and Manufacturing Life-Cycle With Data Science @ University of Washington
The manufacturing life cycle begins with the discovery of new molecules and materials. This first step is often initiated through computer simulations that explore the space of possible molecules and materials, and identify promising candidates that can later be tested in laboratories. As simulations have grown in scale and complexity, this step has become a critical bottleneck. New data-driven approaches present the opportunity to increase the speed and accuracy of such predictions, with broad potential impact on the US Manufacturing sector. This Harnessing the Data Revolution Institutes for Data-Intensive Research in Science and Engineering (HDR-I-DIRSE) Frameworks award brings together Engineers and Data Scientists to conceptualize a new Engineering Data Science Institute where these tools can be applied for new discovery. The effort will develop new data science approaches to accelerate the engineering life cycle: design, characterization, manufacturing, and operation. This life cycle starts with the discovery of new molecules and materials, followed by advanced characterization with high throughput methods augmented by machine learning. Then, efficient manufacturing and operation of systems that use these materials can be designed and developed. By focusing on this holistic lifecycle, the researchers will build a broadly applicable foundation in Engineering Data Science methods. The new Institute will seek to create an Engineering Data Science environment that supports engineers and scientists (students, postdoctoral researchers, and faculty) through a synergistic set of collaboration and education activities.
This collaborative effort follows three thrusts. The first focuses on the reduction of the experimental design space with data science tools targeting the discovery of new molecules and polymers. The research develops a new, formal framework for pairing accurate predictive simulations with data-driven models to create a scalable and transferable workflow that can be deployed across multiple examples of molecular engineering applications. The second thrust addresses a manifold of cross-cutting needs at the intersection of image data analytics and characterization of materials and systems. It also builds community cyberinfrastructure through open-source software resources with support for execution in public clouds. The final thrust focuses on improving manufacturing, optimization, and control. It further enhances cyberinfrastructure resources through a suite of open-source software solutions to systematically develop digital twin models for complex engineering and manufacturing systems, and apply them for optimization and control. This project is part of the National Science Foundation's Harnessing the Data Revolution (HDR) Big Idea activity and is co-funded by the Office of Advanced Cyberinfrastructure.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2021 |
Milham, Michael Peter (co-PI); Poldrack, Russell A; Rokem, Ariel Shalom; Satterthwaite, Theodore |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Nipreps: Integrating Neuroimaging Preprocessing Workflows Across Modalities, Populations, and Species
Project Summary Despite the rapid advances in the neuroimaging research workflow over the last decade, the enormous variability between and within data types and specimens impedes integrated analyses. Moreover, the availability of a comprehensive portfolio of software libraries and tools has also resulted in a concerning degree of analytical variability. Generalizing the preprocessing (that is, the intermediate step between data generation by the measurement device and the subsequent statistical modeling and analysis) beyond fMRIPrep, we propose a framework called NiPreps (NeuroImaging Preprocessing toolS) that we envision as a workbench for the development of such pipelines. By exclusively addressing the preprocessing of the data, fMRIPrep has successfully allowed researchers to focus their effort and expertise on the portion most relevant to scientific inference (i.e., statistical and computational analyses) and reduce methodological variability. NiPreps expands fMRIPrep to operate on new imaging modalities (diffusion MRI, arterial spin labeling, positron emission tomography, and multi-echo functional MRI) and disciplines (e.g., preclinical imaging). Despite some remarkable analysis workflows that display end-to-end consolidation, integrations across applications (e.g., analyses of human and nonhuman data) remain exceptionally challenging. Hence, we will evolve fMRIPrep into NiPreps, a software framework integrating BIDS and following the BIDS-Apps specifications. First, the project will consolidate the NiPreps foundations, with the generalization of fMRIPrep's driving principles and methods across modalities and domains of application. Second, we will expand the portfolio of end-user NiPreps with dMRIPrep, ASLPrep, PETPrep, and better coverage of multi-echo fMRI by fMRIPrep. Finally, we will address the NiPreps community's consolidation to ensure the sustainability of the framework, converging the communities around each -Prep with hackathons and docusprints. 
In short, NiPreps will pave the way towards next-generation imaging, ultimately allowing neuroscientists to seek a unified statistical framework capable of rigorously integrating cross-application and cross-species data analysis.
|
0.911 |