Area: Learning Theory, Computer Vision
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page covers only grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help: if you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches. Signing in also lets you see low-probability grants and correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Alessandro Verri is the likely recipient of the following grants.
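The page does not document how the matching score is computed. As a purely illustrative sketch (not the site's actual algorithm), a grant-to-researcher match could be scored with a simple name-similarity measure like the following Python snippet; the function names and threshold-free scoring here are hypothetical.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two researcher names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(researcher: str, grant_investigators: list[str]) -> float:
    """Best name-similarity between a researcher and a grant's listed
    investigators; a stand-in for the 'matching score' shown on this page."""
    return max(name_similarity(researcher, pi) for pi in grant_investigators)

# Hypothetical usage with the investigators of the grant listed below.
score = match_score("Verri, Alessandro",
                    ["Cauwenberghs, Gert", "Poggio, Tomaso",
                     "Verri, Alessandro", "Dagnelie, Gislin"])
print(round(score, 3))  # 1.0 for an exact name match
```

A real linkage system would likely combine name similarity with affiliation, co-investigator, and topic signals rather than names alone.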
Years: 2002 — 2005
Recipients: Cauwenberghs, Gert; Poggio, Tomaso; Verri, Alessandro; Dagnelie, Gislin
Activity code: N/A (no activity code was retrieved for this grant)
Title: Trainable Visual Aids For Object Detection and Identification @ Johns Hopkins University
This project leverages advances in statistical learning theory, machine vision, and massively parallel very-large-scale-integration technology to develop a custom-trainable, versatile, self-contained, and mobile system for visually impaired users. The system will aid the user in interacting freely with other people and the environment, by rapidly detecting and localizing key visual environmental cues and rapidly recognizing and identifying familiar people and objects. At the core of the system is the "Kerneltron", a massively parallel Support Vector "Machine" (SVM) in silicon. The SVM hardware will be trained on-line by the end user to accommodate a variety of visual detection and recognition tasks in everyday situations through presentation of examples. The recognition core will be embedded in a portable prototype visual aid, interfacing with a CCD camera front-end, and an audio synthesizer back-end. Menu-driven keypad control will allow direct input and feedback from the user in training and directing the system. The user interface will be based on "OpenEyes", a wearable computer vision system for the blind. Proof of concept demonstration of the hardware system and evaluation of the training and test performance will be conducted with feedback from volunteer impaired users.
Matching score: 0.927
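The project description above centers on a Support Vector Machine trained from user-presented examples. A minimal software sketch of that idea, using scikit-learn's SVC on synthetic feature vectors, is shown below; it is illustrative only, since the grant's "Kerneltron" is custom parallel analog hardware and its actual features and interfaces are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in feature vectors for user-supplied training examples
# (e.g. features extracted from CCD camera frames); purely synthetic.
rng = np.random.default_rng(0)
targets = rng.normal(loc=1.0, size=(50, 64))      # "familiar object" examples
background = rng.normal(loc=-1.0, size=(50, 64))  # non-target examples

X = np.vstack([targets, background])
y = np.array([1] * 50 + [0] * 50)

# Kernel SVM trained from presented examples, as the project proposes
# (done here in software; the Kerneltron would evaluate it in parallel silicon).
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

new_frame = rng.normal(loc=1.0, size=(1, 64))
print("detected familiar object" if clf.predict(new_frame)[0] == 1
      else "no match")
```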