Ying Wu - US grants
Affiliations: Northwestern University, Evanston, IL (2006)
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Ying Wu is the likely recipient of the following grants. Each entry lists the award years, recipients, activity code, matching score, and the grant title with its abstract.
2003 — 2008 | Wu, Ying | N/A | Matching score: 1
Transductive Learning For Retrieving and Mining Visual Contents @ Northwestern University
Contemporary visual learning methods for visual content mining tasks are plagued by several critical and fundamental challenges: (1) the unavailability of large annotated datasets prevents effective supervised learning; (2) the variability in different working environments challenges the generalization of inductive learning approaches; and (3) the high dimensionality of these tasks strains the efficiency of many existing learning techniques. The goal of this research project is to overcome these challenges by exploring a novel transductive learning approach.
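The abstract names transductive learning but does not spell out an algorithm. As a loose illustration of the transductive idea only (unlabeled test points take part in learning), here is a minimal scikit-learn sketch using label spreading; the dataset and parameters are invented, and this is not the project's method.

```python
# Minimal transductive sketch (illustrative, not the project's algorithm):
# labels diffuse over a similarity graph built from labeled AND unlabeled
# points together, so the unlabeled test set shapes the decision boundary.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=200, noise=0.1, random_state=0)

rng = np.random.default_rng(0)
labeled = np.concatenate(
    [rng.choice(np.where(y_true == c)[0], 5, replace=False) for c in (0, 1)]
)
y = np.full_like(y_true, -1)   # -1 marks unlabeled points
y[labeled] = y_true[labeled]   # keep only 10 of 200 labels

model = LabelSpreading(kernel="rbf", gamma=20)
model.fit(X, y)                # the unlabeled points are part of the fit
print("accuracy on all points:", (model.transduction_ == y_true).mean())
```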
2004 — 2012 | Wu, Ying | N/A | Matching score: 1
Career: Visual Analysis of High-Dimensional Motion: a Distributed/Collaborative Approach @ Northwestern University
This project analyzes high-dimensional motion (HDM) from video. HDM refers to complex motions with high degrees of freedom, including the articulation of the human body, the deformation of elastic shapes, and the combined motion of multiple occluding targets. The goal of the project is to overcome the curse of dimensionality embedded in this challenging visual inference problem by systematically pursuing a new distributed/collaborative approach that unifies various HDMs.
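To see why a distributed/collaborative decomposition matters, consider the cost of covering a joint state space at a fixed resolution. The arithmetic below uses invented numbers but shows the exponential-versus-linear gap behind the abstract's "curse of dimensionality".

```python
# Illustrative arithmetic only: covering a d-dimensional joint state space
# at fixed resolution costs bins**d cells, while covering the same degrees
# of freedom as separate low-dimensional parts costs only linearly many.
bins = 10                            # discretization per dimension (assumed)
for d in (2, 6, 12, 24):             # e.g., ~24 DOF for articulated body pose
    joint = bins ** d                # cells for the full joint space
    parts = (d // 2) * bins ** 2     # the same DOF split into 2-DOF parts
    print(f"d={d:2d}  joint={joint:.3g}  distributed={parts}")
```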
2005 — 2009 | Katsaggelos, Aggelos (co-PI); Choudhary, Alok; Wu, Ying; Memik, Seda; Memik, Gokhan (co-PI) | N/A | Matching score: 1
@ Northwestern University
2009 — 2015 | Wu, Ying | N/A | Matching score: 1
@ Northwestern University
Although persistent, long-duration tracking of general targets is a basic function of the human vision system, it remains quite challenging for computer vision algorithms, because the visual appearance of real-world targets varies greatly and environments are heavily cluttered and distracting. This large gap has been a bottleneck in many video analysis applications. The project aims to bridge the gap and overcome the challenges that confront the design of long-duration tracking systems by developing new computational models that integrate and represent important aspects of the human visual perception of dynamics, including selective attention and context-awareness, which have been largely ignored in existing computer vision algorithms.
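As a toy illustration of the selective-attention idea mentioned above, the sketch below scores candidate windows by appearance similarity plus a spatial prior around a motion-model prediction. The data, patch size, and weighting are all invented; this is a stand-in, not the project's model.

```python
# Toy attention-guided re-detection on synthetic data: each candidate
# window is scored by appearance similarity to the target template plus
# a spatial "attention" prior centered on the predicted location.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
target = frame[30:38, 40:48].copy()       # 8x8 patch to re-locate
predicted = np.array([29.0, 41.0])        # motion-model prediction (assumed)

best, best_score = None, -np.inf
for i in range(64 - 8):
    for j in range(64 - 8):
        patch = frame[i:i + 8, j:j + 8]
        appearance = -np.sum((patch - target) ** 2)              # similarity
        attention = -0.05 * np.sum((np.array([i, j]) - predicted) ** 2)
        score = appearance + attention                           # fused score
        if score > best_score:
            best, best_score = (i, j), score
print("re-detected at", best)             # expect (30, 40)
```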
2010 — 2012 | Wu, Ying | N/A | Matching score: 1
Collaborative Research: Sino-Usa Summer School in Vision, Learning, Pattern Recognition Vlpr 2010 @ Northwestern University
The recent decade has witnessed rapid advances in computer vision research, not only in its fundamental studies but also in its emerging applications. This Sino-USA summer school in Vision, Learning and Pattern Recognition (VLPR 2010) is held in Xi'an, China. It brings together a high-quality team of leading American and Chinese researchers in computer vision to offer a one-week educational program to students and junior scholars from both the US and China. The program provides an important opportunity to discuss recent advances in Perception, Motion and Events, and allows technical and cultural exchange between researchers from the two countries. Such interactions are important for fostering new understanding and new collaborations in science, education, and culture.
2012 — 2016 | Argall, Brenna; Lynch, Kevin; Murphey, Todd (co-PI); Colgate, J. Edward; Wu, Ying | N/A | Matching score: 1
Mri: Equipment Development: Bimanual Robotic Manipulation and Sensory Workspace @ Northwestern University
Proposal #: 12-29566
2012 — 2017 | Wu, Ying | N/A | Matching score: 1
Ri: Small: Mining and Learning Visual Contexts For Video Scene Understanding @ Northwestern University
This project investigates a fundamental, critical, but largely unexplored issue: automatically identifying visual contexts and discovering visual patterns. Many contemporary approaches that attempt to divide and conquer video scenes by analyzing visual objects in isolation run into serious difficulty, while exploring visual context has shown promise for video scene understanding. Discovering visual contexts is nevertheless challenging, due to content uncertainty in visual data, structure uncertainty in visual contexts, and semantic uncertainty in visual patterns. The goal of this project is to lay the foundation of contextual mining and learning for video scene understanding by pursuing innovative approaches to discovering visual collocation patterns, empowering contextual matching of visual patterns, and facilitating contextual modeling for visual recognition. The research team is developing a unified approach to mining visual collocation patterns and learning visual contexts, and providing methods and tools that facilitate contextual matching and modeling.
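One minimal reading of "discovering visual collocation patterns" is finding pairs of quantized visual words that co-occur across frames more often than chance. The sketch below uses symbolic stand-in words and scores pairs by pointwise mutual information; both choices are illustrative assumptions, not the project's algorithm.

```python
# Toy collocation mining: rank pairs of "visual words" (stand-ins for
# quantized features) by how much their co-occurrence across frames
# exceeds chance, measured by pointwise mutual information (PMI).
import math
from collections import Counter
from itertools import combinations

frames = [                                # visual words observed per frame
    {"wheel", "road", "car"}, {"wheel", "car", "sky"},
    {"road", "car", "wheel"}, {"tree", "sky", "road"},
    {"tree", "sky"}, {"wheel", "car"},
]
n = len(frames)
word_count = Counter(w for f in frames for w in f)
pair_count = Counter(p for f in frames for p in combinations(sorted(f), 2))

for (a, b), c in pair_count.most_common():
    pmi = math.log((c / n) / ((word_count[a] / n) * (word_count[b] / n)))
    if pmi > 0 and c >= 2:                # keep above-chance, repeated pairs
        print(f"{a}+{b}: count={c}, PMI={pmi:.2f}")
```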
2016 — 2019 | Wu, Ying | N/A | Matching score: 1
Ri: Small: Modeling and Learning Visual Similarities Under Adverse Visual Conditions @ Northwestern University
In many emerging applications, such as autonomous/assisted driving, intelligent video surveillance, and rescue robots, the performance of visual sensing and analytics is largely jeopardized by adverse visual conditions in complex unconstrained environments, e.g., bad weather and poor illumination. This project studies how, and to what extent, such adverse visual conditions can be coped with. It will advance and enrich fundamental computer vision research and bring significant impact to the development of "all-weather" computer vision systems that benefit security/safety, autonomous driving, and robotics. The project contributes to education through curriculum development, student training, and knowledge dissemination, and includes interactions with K-12 students for participation and research opportunities.
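For one concrete example of a similarity measure that survives an adverse condition, zero-mean normalized cross-correlation (ZNCC) is invariant to global gain and bias changes in illumination. The sketch below is a textbook baseline for intuition, not the learned models the project pursues.

```python
# ZNCC is unaffected by affine intensity changes (gain and bias), so it
# matches the same content under global dimming, unlike raw differences.
import numpy as np

def zncc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(1)
patch = rng.random((16, 16))
dark = 0.4 * patch + 0.1            # same content, dimmed (gain + bias)
other = rng.random((16, 16))        # different content

print("same content, dimmed:", zncc(patch, dark))    # ~1.0
print("different content:  ", zncc(patch, other))    # ~0.0
```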
2018 — 2021 | Wu, Ying | N/A | Matching score: 1
Ri: Small: a Unified Compositional Model For Explainable Video-Based Human Activity Parsing @ Northwestern University
An ultimate goal of computer vision is to understand scenes and activities from images and video. This task involves many perceptual and cognitive processes at various semantic levels. A next step beyond visual classification is visual interpretation, that is, explaining the relations among visual entities through visual inference and reasoning. Due to the enormous variability across instances of this problem, semantic parsing that explains a visual scene and its activities is highly challenging. This project studies how the structural composition of visual entities can be used to overcome that diversity. It advances and enriches basic research in computer vision and brings significant impact to many emerging applications, including autonomous or assisted driving, intelligent robots, and intelligent video surveillance. The research also contributes to education through curriculum development, student training, and knowledge dissemination, and includes interactions with K-12 students for participation and research opportunities.
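As a toy sketch of the compositional idea, atomic actions detected over time can be matched against composition rules, so an activity label arrives together with an explanation of its parts. The action labels and rules below are invented for illustration.

```python
# Toy compositional activity parsing: activities are defined as ordered
# compositions of atomic actions, and a match reports which detected
# steps explain the activity label. Rules and labels are invented.
RULES = {
    "make_tea": ["boil_water", "pour_water", "steep"],
    "wash_cup": ["rinse", "scrub", "rinse"],
}

def parse(actions):
    """Return (activity, span) pairs whose rule matches a contiguous span."""
    found = []
    for name, parts in RULES.items():
        k = len(parts)
        for i in range(len(actions) - k + 1):
            if actions[i:i + k] == parts:
                found.append((name, (i, i + k - 1)))
    return found

detected = ["rinse", "scrub", "rinse", "boil_water", "pour_water", "steep"]
for activity, (s, e) in parse(detected):
    print(f"{activity}: composed of steps {s}..{e} -> {detected[s:e + 1]}")
```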
2020 — 2023 | Wu, Ying | N/A | Matching score: 1
Ri: Small: Visual Reasoning and Self-Questioning For Explainable Visual Question Answering @ Northwestern University
Visual question answering (VQA), which aims to answer a natural-language question about a given image, is still in its infancy. Current approaches lack the flexibility and generalizability to handle diverse questions without training. It is therefore desirable to explore explainable VQA (X-VQA), which provides natural-language explanations of its reasoning in addition to answers. This requires integrating computer vision, natural language processing, and knowledge representation, and it is an enormously challenging task. By exploring X-VQA, the project advances and enriches fundamental research in computer vision, image understanding, visual semantic analysis, machine learning, and knowledge representation. It also facilitates a wide range of applications, including visual chatbots, visual retrieval and recommendation, and human-computer interaction. The research also contributes to education through curriculum development, student training, and knowledge dissemination, and includes interactions with K-12 students for participation and research opportunities.
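A toy sketch of what "answers plus explanations" can look like: reasoning over a symbolic scene graph while recording each step. The scene, question form, and predicates below are invented; real X-VQA systems would learn these components rather than hand-code them.

```python
# Toy explainable VQA over a symbolic scene graph: the answerer records
# each reasoning step, so the answer comes with a justification. All
# objects, relations, and the question template are invented examples.
scene = {
    "objects": {"o1": {"class": "cat", "color": "black"},
                "o2": {"class": "mat", "color": "red"}},
    "relations": [("o1", "on", "o2")],
}

def answer_what_is_on(scene, target_class):
    """Answer 'What is on the <target_class>?' with a reasoning trace."""
    trace = []
    for subj, rel, obj in scene["relations"]:
        if rel == "on" and scene["objects"][obj]["class"] == target_class:
            found = scene["objects"][subj]
            trace.append(f"found relation {subj} on {obj}")
            trace.append(f"{subj} is a {found['color']} {found['class']}")
            return f"a {found['color']} {found['class']}", trace
    return "nothing", trace

ans, why = answer_what_is_on(scene, "mat")
print("Q: What is on the mat?")
print("A:", ans)
print("Because:", "; ".join(why))
```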