2002 — 2005
Triesch, Jochen
Hierarchical Architecture For View-Based Object Recognition in Cluttered Scenes @ University of California-San Diego
For computers and robots to live up to their full potential as human assistants, they must be able to reliably perceive humans and objects in their environment. This project addresses the recognition of objects in complex everyday scenes (e.g., office, household, or traffic scenes). Biological vision systems have successfully solved the vision problem, and the philosophy of this project is to extract principles from information processing in the primate visual system and apply them to the design of object recognition algorithms. The goal is to understand how object recognition in complex environments can be achieved in a hierarchical architecture that mimics the layout of the object recognition pathway in the primate brain, and to build a demonstration system capable of recognizing a large number of objects in complex everyday scenes. In particular, this project will focus on three processing principles: hierarchical representations, massive feedback, and active scene analysis. If successful, the project will further our understanding of how objects can be recognized using hierarchical view-based object representations, how these representations can be learned from unsegmented training images, and how feedback can aid recognition in this kind of hierarchical recognition architecture. This will potentially open a range of new application areas for computer vision systems and may also lead to a better understanding of object recognition in the brain.
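The abstract describes the architecture only at the level of principles. As a rough, hypothetical illustration of what "hierarchical representations with massive feedback" can mean, the Python sketch below pools local feature responses into a view-level match and then lets the winning view re-weight the lower-level features; every name, shape, and the two-layer structure are assumptions made for illustration, not the project's actual system.

```python
import numpy as np

# Hypothetical two-layer sketch of a hierarchical view-based recognizer
# with top-down feedback. Layer 1 pools local feature responses; layer 2
# matches the pooled vector against stored object views; the feedback pass
# re-weights layer-1 features by how strongly they support the best match.
# All names and shapes are illustrative assumptions, not the project's code.

rng = np.random.default_rng(0)

def layer1(image_patches, filters):
    """Bottom-up: rectified filter responses, max-pooled over patches."""
    responses = np.maximum(image_patches @ filters.T, 0.0)  # (n_patches, n_filters)
    return responses.max(axis=0)                            # pooled feature vector

def layer2(pooled, stored_views):
    """Match pooled features against stored object views (cosine similarity)."""
    sims = stored_views @ pooled / (
        np.linalg.norm(stored_views, axis=1) * np.linalg.norm(pooled) + 1e-9)
    return int(np.argmax(sims)), sims

def feedback(pooled, stored_views, best):
    """Top-down: boost features consistent with the winning view."""
    gain = 1.0 + stored_views[best] / (stored_views[best].max() + 1e-9)
    return pooled * gain

# Toy data: 20 patches of dimension 8, 4 filters, 3 stored object views.
patches = rng.normal(size=(20, 8))
filters = rng.normal(size=(4, 8))
views = np.abs(rng.normal(size=(3, 4)))

pooled = layer1(patches, filters)
best, sims = layer2(pooled, views)
pooled_fb = feedback(pooled, views, best)   # feedback-modulated second pass
best_fb, _ = layer2(pooled_fb, views)
print(f"initial match: view {best}, after feedback: view {best_fb}")
```

In this toy version the feedback loop simply sharpens the initial hypothesis; in a recognition system for cluttered scenes the same top-down signal could instead suppress background features, which is one reading of how feedback "aids recognition" in such an architecture.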
2003 — 2006
Movellan, Javier; De Sa, Virginia (co-PI); Triesch, Jochen
Developmental Methods For Automatic Discovery of Object Categories @ University of California-San Diego
The goal of this project is to develop systems that autonomously learn to detect and track objects in video images. Current approaches require large training datasets for which the object of interest has been manually segmented by human operators. This process is laborious and time-consuming, greatly limiting progress in the field, and it is arguably the main reason why computer vision technology has not yet found a niche in everyday applications. In this project new machine learning and machine perception systems are being explored that avoid the manual segmentation step. The training input to these systems is a video dataset of unlabeled, naturally moving faces in various background conditions. The target output is a state-of-the-art face detection system. The approach being explored is based on the idea that one can develop sophisticated object detectors in an unsupervised manner by biasing the training process using an ensemble of low-level interest operators (motion, color, contrast).
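As a hedged illustration of the biasing idea described above, the Python sketch below combines three toy interest operators (motion, color, contrast) into a weak pseudo-label mask that could stand in for manual segmentation. The specific operators, the normalization, the averaging vote, and the threshold are all assumptions made for illustration, not the project's method.

```python
import numpy as np

# Hypothetical sketch: fuse low-level interest operators into a weak
# "object here" mask usable as a pseudo-label when training a detector
# without manual segmentation. Operators and thresholds are illustrative.

def motion_map(frame, prev_frame):
    """Frame differencing as a crude motion cue."""
    return np.abs(frame - prev_frame).mean(axis=-1)

def color_map(frame):
    """Crude skin-like color cue: red-channel dominance."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return np.clip(r - 0.5 * (g + b), 0.0, 1.0)

def contrast_map(frame):
    """Local contrast as deviation from the global mean luminance."""
    lum = frame.mean(axis=-1)
    return np.abs(lum - lum.mean())

def weak_labels(frame, prev_frame, threshold=0.2):
    """Normalize each cue map, average them, and threshold into a mask."""
    cues = [motion_map(frame, prev_frame), color_map(frame), contrast_map(frame)]
    combined = np.mean([c / (c.max() + 1e-9) for c in cues], axis=0)
    return combined > threshold

# Toy usage on random "video" frames (H, W, RGB in [0, 1]).
rng = np.random.default_rng(1)
prev, cur = rng.random((48, 64, 3)), rng.random((48, 64, 3))
mask = weak_labels(cur, prev)
print(f"pseudo-labeled {mask.mean():.1%} of pixels as object candidates")
```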
This project is expected to provide a new class of machine learning and machine perception algorithms that train themselves by observation of vast image datasets. The use of large datasets is expected to make a critical difference to the robustness of the systems and to allow them to handle realistic everyday environments. Such systems would have significant applications in education, security, and personal robots. Besides its practical applications, this project has the potential for broad scientific implications in machine learning, machine perception, and developmental psychology.
2003 — 2006
Triesch, Jochen
US-Germany Cooperative Research: International: Probabilistic Cue Integration of Multimodal Sensor Data in Biologically Inspired Machine Vision Systems @ University of California-San Diego
0233810 Heineman This dissertation enhancement award supports a graduate student from William Heineman's lab at the University of Cincinnati during her two-month tenure in the lab of Gunther Wittstock in the Department of Chemistry at the University of Oldenburg, Germany. The goal of the research is to develop an ultrasensitive and miniaturized immunoassay applicable to clinical diagnostics and detection of biowarfare agents. The goal will be met using scanning electrochemical microscopy to perform and detect the immunoassay. Using microbeads as a mobile solid phase for the immunoassay increases the surface-area-to-volume ratio, and the beads can be dispersed within the solution, which minimizes diffusion distances. This research could lead to new technologies for high-density chip-based testing systems and could aid in the detection of biowarfare agents such as toxins, bacteria, spores, and viruses.
The project also has a clear educational objective: it will allow a graduate student to benefit from performing research in another country. She will develop a heightened appreciation of the world while also learning important new technical skills.
2005 — 2009
Deak, Gedeon; Triesch, Jochen; Lee, Kang
DHB: The Emergence of Social Attention-Sharing in Infancy: Behavioral and Computational Tests of a New Theory @ University of California-San Diego
Proposal No. 0527756. P.I.: Gedeon Deak; Co-investigators: Kang Lee and Jochen Triesch
How do infants learn to interact with other people? An infant and parent playing together might seem like a simple interaction, but they actually form a complex social system. Infants must respond quickly and appropriately to caregivers, but the caregivers' behavior is quite variable. Nonetheless, most infants learn to use caregivers' behaviors to make sense of their surroundings. For example, by 9 months of age infants use caregivers' gaze shifts (that is, movements of the head and/or eyes to look around) to find nearby interesting sights. Attention-sharing skills like this are crucial for learning language and non-verbal skills throughout childhood. How do early attention-sharing skills like gaze-following develop? Answering this will shed light on developmental disorders characterized by attention-sharing deficits, most notably autism. It will also help us understand what is distinctive about human social skills and interactions.
Through a long-term study of a group of infants from 3 to 12 months, and innovative computer simulations of infants' social learning, we will test a theory of how attention-sharing skills develop. The theory proposes that attention-sharing skills emerge from several factors: brain-based learning processes, basic emerging perceptual routines and emotional responses, and the presence of caregivers who produce semi-predictable behaviors. By following infants for 9 months we will track how their attention-sharing skills develop. We will also test their learning and perceptual skills month by month to see how these relate to the onset of new attention-sharing skills. In addition, monthly in-home observation of infants and parents at play will tell us how caregivers' natural behaviors help infants learn new attention-sharing skills. Finally, we will use standardized tests of infants' developmental status and caregivers' emotional well-being.
Simultaneously, computer and robotic simulations will further test the theory. In a 3D 'virtual living room,' a simulated infant gets social input by watching a virtual parent handle and look at interesting objects. The virtual parent's actions are based on detailed recordings of the real parents' behaviors during the in-home play observations. The virtual infant uses specific learning routines to search for patterns in the virtual parent's behaviors, and alters its responses as specified by the theory. If the theory is correct, we should see the virtual infant learn attention-sharing skills like real infants. Further, by manipulating virtual infants' learning routines, or caregivers' behaviors, we can model how deficits in attention-sharing skills develop.
This project will answer questions about cognitive and social development during the first year. Merging detailed behavioral observations with state-of-the-art computer modeling techniques will open new avenues for studying social development. Ultimately, the results may support new diagnostic tools for early identification of social learning disabilities in infancy. The project will also test a comprehensive theory of how human infants develop robust attention-sharing skills, supporting species-specific social learning abilities later in childhood.
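As a hedged illustration of the simulation idea, the Python sketch below implements a toy reward-driven learner that discovers gaze-following because the caregiver's head pose predicts where interesting (rewarding) objects appear. The state and action spaces, the reward scheme, and the learning rule are illustrative assumptions, not the project's actual model.

```python
import numpy as np

# Hypothetical sketch of the "virtual infant": a tabular learner that comes
# to follow the caregiver's head pose because that pose statistically
# predicts where rewarding objects are. All specifics are assumptions.

rng = np.random.default_rng(2)
N_LOCATIONS = 5          # places the caregiver and object can occupy
P_CAREGIVER_LOOKS = 0.8  # how often the caregiver actually looks at the object
ALPHA, EPSILON = 0.1, 0.1

# Q[s, a]: value of the infant looking at location a, given caregiver pose s.
Q = np.zeros((N_LOCATIONS, N_LOCATIONS))

for episode in range(5000):
    obj = rng.integers(N_LOCATIONS)            # where the interesting object is
    if rng.random() < P_CAREGIVER_LOOKS:
        pose = obj                             # caregiver looks at the object
    else:
        pose = rng.integers(N_LOCATIONS)       # caregiver looks elsewhere
    if rng.random() < EPSILON:
        look = rng.integers(N_LOCATIONS)       # infant explores
    else:
        look = int(np.argmax(Q[pose]))         # infant exploits learned policy
    reward = 1.0 if look == obj else 0.0       # seeing the object is rewarding
    Q[pose, look] += ALPHA * (reward - Q[pose, look])

# After learning, the infant should look where the caregiver's pose points.
follows = all(int(np.argmax(Q[p])) == p for p in range(N_LOCATIONS))
print("virtual infant follows caregiver gaze:", follows)
```

The point of such a toy is that gaze-following need not be built in: a generic reward-driven learner acquires it whenever caregiver behavior is semi-predictable, and degrading the learning rate, the reward signal, or the caregiver's predictability gives one simple way to model attention-sharing deficits.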