2019 — 2022 |
Papanikolopoulos, Nikolaos (co-PI); Park, Hyun Soo; Wang, Youbing |
MRI: Development of Real-Time 3D Social Signal Imaging System (SSIS) @ University of Minnesota-Twin Cities
This project represents a step toward a computational model capable of detecting early social behavioral markers in children at risk for autism spectrum disorder, schizophrenia, and obsessive-compulsive disorder. The real-time 3D Social Signal Imaging System (SSIS) will be designed to precisely measure social signals utilizing cameras producing billions of pixels dozens of times per second. The infrastructure will be designed to enable reconstruction of the 3D geometry of gaze, face, finger, body, and physical appearance. The system is expected to be capable of generating a vast amount of multiple-perspective visual data to reconstruct high-fidelity 3D signals, which are needed to enable social intelligence that can decode every nuance of human expression.
The ability to discern subtle social signals (e.g., gaze following) can be computationally modeled by leveraging a massive camera system. The Social Signal Imaging System (SSIS) facilitates quantitative measurement of social signals in 3D at unprecedented temporal and spatial resolution. This development involves the following steps: (i) design a distributed visual computing architecture to efficiently process the multiview visual data streams; (ii) build a new high-fidelity 3D representation of view-invariant social signals (gaze, face, finger, body, appearance); (iii) create a novel 3D dataset of social signals for use in discovering behavioral markers; and (iv) develop new computer vision algorithms (recognition, matching, tracking, reconstruction) tailored to social signal imaging that minimize computational latency while maintaining accuracy. The system provides a unique characterization of microscopic social signals that enables overcoming fundamental limitations of existing approaches to behavioral assessment of at-risk children. This work impacts diverse disciplines such as robotics, neuroscience, psychology, psychiatry, and medicine. The outcomes will be disseminated to K-12 students from under-represented groups via workshops, machine learning and technology summer camps, and other activities.
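The multiview 3D reconstruction at the heart of SSIS (steps i and ii) can be caricatured by classical two-view triangulation. The sketch below is illustrative only, not the SSIS implementation; the camera matrices and test point are invented:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D pixel observations of the same point in each view.
    Returns the 3D point in Euclidean coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic example: two cameras observing a known 3D point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 unit
X_true = np.array([0.2, -0.1, 5.0])
xh1 = P1 @ np.append(X_true, 1.0); x1 = xh1[:2] / xh1[2]
xh2 = P2 @ np.append(X_true, 1.0); x2 = xh2[:2] / xh2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

A many-camera system generalizes this by stacking two rows into `A` per additional view, which is how redundancy across dozens of cameras improves robustness.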
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2020 |
Ding, Long (co-PI); Luo, Wenqin; Park, Hyun Soo |
R34 Activity Code Description: To provide support for the initial development of a clinical trial or research project, including the establishment of the research team; the development of tools for data management and oversight of the research; the development of a trial design or experimental research designs and other essential elements of the study or project, such as the protocol, recruitment strategies, procedure manuals and collection of feasibility data. |
Developing a Mouse Chronic Pain Scale by 3D Imaging and Measurement of Mouse Spontaneous Behaviors @ University of Pennsylvania
PROJECT SUMMARY
Rodent models are highly valuable for elucidating the molecular and cellular mechanisms of chronic pain. Because rodents cannot articulate their sensations, "pain-like" behaviors have been used as a proxy. However, the sensitivity and specificity of many existing methods for measuring rodent "pain" sensation, especially "chronic pain", are uncertain. Here we propose to explore the feasibility of a largely automated, data-driven behavioral assay for identifying spontaneous pain in freely behaving mice. Specifically, we will take advantage of recent advances in 3D motion analysis, which enable precise and robust measurement of movements without human intervention, to extract movement features from freely moving mice in various pain states (baseline, induced acute pain, chronic pain, and with painkiller treatment). We will generate a database of movement features of control mice and mice with induced acute cheek/leg pain or chronic neuropathic cheek/leg pain, using both sexes of two mouse strains. We will then use machine-learning algorithms to identify the best combination of movement features for predicting the pain state (a "mouse chronic pain scale"). These efforts are expected to produce a novel and objective method to assess spontaneous pain, a characteristic feature of chronic pain, in mice. This method can complement our recent method for measuring evoked responses (a "mouse acute pain scale") to provide efficient, robust, and comprehensive assessment of pain-related rodent behaviors and facilitate mechanistic investigations of the brain circuits mediating and modulating pain. Our interdisciplinary team is well suited to complete these Aims, utilizing combined expertise in the mouse somatosensory/pain system (PI Luo); behavioral, systems, and computational neuroscience (PI Ding); and 3D imaging and computer vision (PI Park).
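The proposed pipeline, extracting movement features and then learning a classifier that predicts the pain state, can be sketched with synthetic data. Everything below (the feature names, the Gaussian cluster centers, and the nearest-centroid classifier) is a made-up stand-in; the summary does not specify the actual machine-learning method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial movement features (e.g., mean speed, rearing rate,
# paw-lift frequency). Each pain state is simulated as a Gaussian cluster.
states = ["baseline", "acute", "chronic"]
centers = np.array([[1.0, 0.2, 0.1],
                    [0.4, 0.8, 0.9],
                    [0.6, 0.3, 1.4]])

def simulate(n_per_state):
    """Draw n_per_state noisy feature vectors per pain state."""
    X, y = [], []
    for label, c in enumerate(centers):
        X.append(c + 0.1 * rng.standard_normal((n_per_state, 3)))
        y.append(np.full(n_per_state, label))
    return np.vstack(X), np.concatenate(y)

X_train, y_train = simulate(50)
X_test, y_test = simulate(20)

# Nearest-centroid "pain scale": classify each trial by the closest
# per-state mean feature vector learned from the training set.
fit_centroids = np.stack([X_train[y_train == k].mean(axis=0)
                          for k in range(len(states))])
pred = np.argmin(((X_test[:, None, :] - fit_centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y_test).mean()
```

The real assay would replace the simulated clusters with features measured by 3D motion analysis and would likely use a richer classifier with cross-validated feature selection.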
|
2020 — 2023 |
Guala, Michele (co-PI); Hong, Jiarong; Iungo, Giacomo Valerio; Park, Hyun Soo |
MRI: Development of Grand-Scale Atmospheric Imaging Apparatus (GAIA) for Field Characterization of Atmospheric Flows and Particle Transport @ University of Minnesota-Twin Cities
Understanding the flow and transport of particles (e.g., snow, sand, pollen) in atmospheric environments is critical for applications related to wind energy, meteorology (e.g., snow settling), geomorphology (e.g., desert migration), oceanography (e.g., spray generation), agriculture (e.g., pollen dispersal), and public health (e.g., airborne disease transmission). These processes involve flows over a broad range of spatial and temporal scales and complex atmospheric phenomena that are impossible to fully reproduce in the laboratory. Conventional field measurements (e.g., meteorological towers, LiDAR, SoDAR, and radar) of these processes do not have sufficient resolution to probe their detailed underlying physics. To bridge this gap, with a team of flow physicists, computer scientists, and engineers, the project aims to develop a Grand-scale Atmospheric Imaging Apparatus (GAIA), a stand-alone, imaging-based field measurement system able to quantify atmospheric flows and particle transport over large sample regions with unprecedented spatiotemporal resolution. Through collaboration with 11 universities, national labs, and industry partners across the globe, GAIA will enable fundamental and applied research across engineering, geoscience, and computer science, and will support a number of existing educational programs involving underrepresented groups and minorities.
The goal of the project is to develop a Grand-scale Atmospheric Imaging Apparatus (GAIA), envisioned as a field instrument conducting particle image/tracking velocimetry (PIV/PTV) by exploiting particles (e.g., snow, sand, pollen, droplets) naturally present in the atmosphere to investigate both the flow (using them as tracers) and the transport of the particles themselves, depending on their inertial properties with respect to the flow. The development of GAIA innovates every component of conventional PIV/PTV, including both the hardware and the processing software, to address key challenges in conducting high-resolution flow imaging under harsh field conditions. Specifically, GAIA involves a multi-mode, multi-configuration Lego-like design and mechanical automation for the hardware, and an integration of the PIV/PTV concept with state-of-the-art machine-learning-based multiview 3D scene reconstruction for data processing. Such innovation enables GAIA to conduct high-resolution imaging of flow and particle transport across a broad range of scales, with sample volumes up to orders of magnitude larger than those of conventional PIV/PTV systems. In addition, GAIA incorporates several unique sensors (e.g., digital inline holography) for in situ characterization of meteorological conditions and particle properties (e.g., shape, concentration) with unprecedented detail. GAIA will be tested under different field conditions in conjunction with cutting-edge 3D Doppler scanning LiDARs. Such integration enables the first-ever measurements of atmospheric flow and particle transport from sub-meter to kilometer scales, providing benchmark datasets not only for the fundamental study of atmospheric flow and particle transport, but also for learning-based motion reconstruction in computer science.
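The PTV half of the concept, matching individual particles between consecutive frames to recover displacement vectors, can be sketched in a few lines. The uniform-flow field and nearest-neighbor matcher below are toy assumptions, far simpler than what a field instrument like GAIA would require:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frame 1: random particle positions in a 100x100 window. Frame 2: the same
# particles advected by a uniform flow plus small positional noise -- a toy
# stand-in for naturally occurring tracers (snowflakes, pollen, droplets).
true_shift = np.array([2.0, -1.0])
p1 = rng.uniform(0, 100, size=(40, 2))
p2 = p1 + true_shift + 0.05 * rng.standard_normal(p1.shape)

def nearest_neighbor_tracks(a, b):
    """Match each particle in frame a to its nearest neighbor in frame b
    and return the per-particle displacement vectors."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    match = d2.argmin(axis=1)
    return b[match] - a

disp = nearest_neighbor_tracks(p1, p2)
# The median is robust to the occasional mismatched pair.
flow_estimate = np.median(disp, axis=0)
```

Real PIV/PTV processing must additionally handle particles entering and leaving the volume, non-uniform flow, and ambiguous matches, which is where the machine-learning-based 3D reconstruction mentioned above comes in.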
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2020 — 2023 |
Hayden, Ben; Park, Hyun Soo; Zimmermann, Jan (co-PI) |
NCS-FO: Neural Correlates of Social States in Macaques @ University of Minnesota-Twin Cities
Many biological organisms interact with one another by transmitting and perceiving social signals. These include gaze, facial expression, and body pose, which convey information about the individual's internal state, including their focus of attention, intended actions, and emotional state. Despite the ubiquity and potential value of these social interactions, understanding how neural activity gives rise to social interactions remains a largely uncharted area. Past experiments were conducted either through subjective behavioral observations in natural social settings or through quantifiable methods, such as neuroimaging, in restricted social settings. This project will address these limitations by leveraging the project team's recently developed high-resolution motion capture system, which can measure, detect, and quantify natural social behaviors and their corresponding neural activity. This research will open new opportunities to study early behavioral markers, such as those for at-risk children with autism spectrum disorder, schizophrenia, and obsessive-compulsive disorder.
The project team's main innovation is a new statistical model called social states, designed to encode the social context of joint behaviors. These social states will be associated with neurophysiological activities in two brain regions, the dorsolateral prefrontal cortex (dLPFC) and dorsal anterior cingulate cortex (dACC), both located in the prefrontal cortex. Using the neural correlates of social states, the project will develop a novel method to model the dynamics of social state transitions, facilitating an understanding of how these two brain regions are responsible for processing social signals. While the project will focus on specific brain regions, the planned research will provide a general computational foundation for understanding the complex social behaviors of macaques. The planned research will advance the computational understanding of cognitive and neural processes by learning from millions of neurobehavioral data points in real, unrestricted environments. Developing such a computational model is a complex, real world problem, requiring a new holistic, transformative, and integrative approach. The planned solution is built upon domain knowledge from multiple disciplines, including machine learning, primate physiology, neuroscience, and computer vision.
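The abstract does not specify the statistical model of social state transitions; as a generic stand-in, a discrete Markov chain over hypothetical social states illustrates the kind of transition dynamics such a model would capture. The state labels and probabilities below are invented for illustration:

```python
import numpy as np

# Toy Markov model over hypothetical social states (labels are illustrative,
# not the project's actual state definitions).
states = ["mutual_gaze", "follow", "avoid"]
T = np.array([[0.7, 0.2, 0.1],   # P(next state | mutual_gaze)
              [0.3, 0.5, 0.2],   # P(next state | follow)
              [0.2, 0.3, 0.5]])  # P(next state | avoid)

# Stationary distribution: the left eigenvector of T with eigenvalue 1,
# i.e., the long-run fraction of time spent in each social state.
vals, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
```

A model fit to real data would estimate such transition probabilities from observed behavior and then relate state occupancy and transitions to dLPFC/dACC activity.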
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2022 — 2025 |
Park, Hyun Soo |
RI: Small: Learning 3D Equivariant Visual Representation for Animals @ University of Minnesota-Twin Cities
Recent advances in computer vision make it possible to track humans in the wild with remarkable accuracy. Generalizing these new approaches to diverse animal species, however, remains premature, despite the significant scientific and societal impact on multiple disciplines such as biology, neuroscience, and medicine. These approaches are built upon a supervised learning paradigm that requires sizable annotated data, but attaining comparable annotated visual data for animal species is fundamentally infeasible: annotation requires expert knowledge, and species-specific images are in limited supply, leading to a large bias in the tracking models. In this research project, the investigator will develop new computer vision theories and algorithms that can effectively exploit a large, potentially infinite, number of unlabeled images of animals. The developed fundamentals are generic and therefore readily applicable, with some modifications, to similar computer vision tasks such as 2D/3D human pose estimation, deformable object registration, and dense correspondence estimation. The project integrates research with education and outreach to K-12 students from under-represented groups through a series of programs.

While the primary focus of this research program is on learning a visual representation of animals, the project addresses a core computer vision problem: landmark localization/keypoint detection/pose estimation given a limited amount of labeled data. The project will make use of 3D equivariance, an intrinsic property of visual data of articulated deformable objects, to uncover shared and repeatable visual relationships across views, time, and species. The investigator will integrate the proposed equivariance, through 3D reconstruction of animals, into representation learning, which will facilitate the transfer of visual information from one image to another.
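As a toy illustration of 3D equivariance (all quantities below are synthetic; this is not the project's model): landmark predictions from two synchronized views should agree once one view's prediction is mapped through the known relative rotation, and the discrepancy can serve as a cross-view self-supervision signal requiring no annotations:

```python
import numpy as np

rng = np.random.default_rng(2)

def rotation_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical 3D landmark predictions from two synchronized views.
pose_view_a = rng.standard_normal((17, 3))   # e.g., 17 body keypoints
R_ab = rotation_z(np.pi / 6)                 # known relative rotation
pose_view_b = pose_view_a @ R_ab.T           # perfectly equivariant case

def equivariance_loss(pa, pb, R):
    """Mean squared discrepancy between view-b landmarks and
    view-a landmarks mapped through the relative rotation."""
    return float(((pa @ R.T - pb) ** 2).mean())

loss = equivariance_loss(pose_view_a, pose_view_b, R_ab)
inconsistent = equivariance_loss(pose_view_a, pose_view_b + 0.1, R_ab)
```

In a learning setting, `pose_view_a` and `pose_view_b` would come from a network applied to two camera views, and minimizing this loss would provide the cross-view self-supervision the project describes.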
More specifically, the project will work on: (1) a new multiview geometry to learn the visual transformation across views, which allows cross-view self-supervision; (2) a re-formulation of non-rigid structure from motion, parametrized by 3D pose, to enable learning from monocular videos; and (3) disentanglement of appearance and 3D pose to learn the visual transformation across species.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|