2002 — 2009
De Sa, Virginia
CAREER: Optimal Information Extraction in Intelligent Systems @ University of California-San Diego
This is a Faculty Early Career Development (CAREER) award. The research will explore how an organism extracts information from its environment for learning and perception, both to understand human learning and to create better machine learning algorithms. The first objective is to develop and apply new algorithms to better understand the mapping in the sensory pathways. An important goal is to understand how the visual pathway computes the invariant responses observed in inferotemporal cortex. The second objective is to study the extraction of information from cross-sensory interaction and its role in the development of perceptual invariance. This work will involve integrated computer simulations, mathematical modeling, and psychological experiments. As part of this goal, the researcher will study input feature selection, output feature selection, and the general problem of how dimensions should best interact in machine learning algorithms. The final research goal is to bring together the new knowledge in constructing a better autonomous learning machine that can learn to recognize objects. The algorithm will be more modular than current algorithms and will collect its own training data autonomously through a camera, microphone, and other sensors.
The educational goal is to train students in the lab as well as in the classes to think about problems from a variety of approaches. They will be educated in the advantages and limitations of computational modeling, computational analysis, psychophysics and electrophysiology.
This CAREER award recognizes and supports the early career-development activities of a teacher-scholar who is likely to become an academic leader of the twenty-first century. The research will improve our understanding of optimal integration between sensory modalities. This will lead to improvement in computer sensing algorithms, including computer vision, speech recognition, and any other application where other sources of information may be available. The work is also expected to give insight to the general problem of how to optimally combine different sources of information for machine learning. The educational aspects of this project are designed to give students a multidisciplinary perspective along with specific skills allowing them to use and appreciate a variety of approaches and techniques.
2003 — 2006
Movellan, Javier; De Sa, Virginia; Triesch, Jochen (co-PI)
Developmental Methods For Automatic Discovery of Object Categories @ University of California-San Diego
The goal of this project is to develop systems that autonomously learn to detect and track objects in video images. Current approaches require large training datasets in which the object of interest has been manually segmented by human operators. This process is laborious and time-consuming, greatly limiting progress in the field, and is arguably the main reason why computer vision technology has not yet found a niche in everyday applications. This project explores new machine learning and machine perception systems that avoid the manual segmentation step. The training input to these systems is a video dataset of unlabeled, naturally moving faces against varied backgrounds. The target output is a state-of-the-art face detection system. The approach being explored is based on the idea that sophisticated object detectors can be developed in an unsupervised manner by biasing the training process with an ensemble of low-level interest operators (motion, color, contrast).
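A minimal sketch of this idea, not the project's actual system: simple motion, color, and contrast operators are combined into a saliency map, and high-saliency regions are used as weak positive examples for self-supervised training of a detector. The operators, weights, and data below are illustrative placeholders.

```python
import numpy as np

def interest_map(prev_frame, frame):
    """Combine low-level interest operators into one saliency map."""
    motion = np.abs(frame - prev_frame).mean(axis=-1)                 # frame difference
    # crude "skin-like" color operator: red dominates green and blue
    color = np.clip(frame[..., 0] - 0.5 * (frame[..., 1] + frame[..., 2]), 0, None)
    gray = frame.mean(axis=-1)
    contrast = np.abs(gray - gray.mean())                             # deviation from mean luminance
    combined = 0.5 * motion + 0.3 * color + 0.2 * contrast            # weighted ensemble (weights assumed)
    return combined / (combined.max() + 1e-8)

rng = np.random.default_rng(0)
prev_frame, frame = rng.random((2, 240, 320, 3))                      # placeholder video frames
saliency = interest_map(prev_frame, frame)
weak_positive_mask = saliency > 0.8        # candidate pixels to self-label as "object"
print(weak_positive_mask.mean())           # fraction of pixels selected as weak positives
```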
This project is expected to provide a new class of machine learning and machine perception algorithms that train themselves by observing vast image datasets. The use of large datasets is expected to make a critical difference in the robustness of the systems and to allow them to handle realistic everyday environments. Such systems would have significant applications in education, security, and personal robots. Beyond its practical applications, this project has potentially broad scientific implications for machine learning, machine perception, and developmental psychology.
2003 — 2010
Dobkins, Karen (co-PI); De Sa, Virginia; Kriegman, David (co-PI); Cottrell, Garrison (co-PI); Boynton, Geoffrey (co-PI)
IGERT: Vision and Learning in Humans and Machines @ University of California-San Diego
Consider creating (a) a computer system to help physicians make a diagnosis using all of a patient's medical data and images along with millions of case histories; (b) intelligent buildings and cars that are aware of their occupants' activities; (c) personal digital assistants that watch and learn your habits -- not only gathering information from the web but recalling where you left your keys; or (d) a computer tutor that watches a child as she performs a science experiment. Each of these scenarios requires machines that can see and learn, and while there have been tremendous advances in computer vision and computational learning, current computer vision and learning systems for many applications (such as face recognition) are still inferior to the visual and learning capabilities of a toddler. Meanwhile, great strides in understanding visual recognition and learning in humans have been made through psychophysical and neurophysiological experiments. The intellectual merit of this proposal is its focus on creating novel interactions among four areas: computer and human vision, and human and machine learning. We believe these areas are intimately intertwined, and that the synergy of their simultaneous study will lead to breakthroughs in all four domains.
Our goal in this IGERT is therefore to train a new generation of scientists and engineers who are as versed in the mathematical and physical foundations of computer vision and computational learning as they are in the biological and psychological basis of natural vision and learning. On the one hand, students will be trained to propose a computational model for some aspect of biological vision and then design experiments (fMRI, single cell recordings, psychophysics) to validate this model. On the other hand, they will be ready to expand the frontiers of learning theory and embed the resulting techniques in real-world machine vision applications. The broader impact of this program will be the development of a generation of scholars who will bring new tools to bear upon fundamental problems in human and computer vision, and human and machine learning.
We will develop a new curriculum that introduces new cross-disciplinary courses to complement the current offerings. In addition, students accepted to the program will go through a two-week boot camp before classes start, where they will receive intensive training in machine learning and vision using MATLAB, perceptual psychophysics, and brain imaging. We will balance on-campus training with summer internships in industry.
IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. The program is intended to catalyze a cultural change in graduate education by establishing innovative new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries. In this sixth year of the program, awards are being made to institutions for programs that collectively span the areas of science and engineering supported by NSF.
2008 — 2014
Movellan, Javier; Bartlett, Marian; De Sa, Virginia; Todorov, Emanuel (co-PI)
INT2-Large: Collaborative Research: Developing Social Robots @ University of California-San Diego
The goal of this project is to make progress on computational problems that elude the most sophisticated computers and Artificial Intelligence approaches but that infants solve seamlessly during their first year of life. To this end we will develop a robot whose sensors and actuators approximate the levels of complexity of human infants. The goal is for this robot to learn and develop autonomously a key set of sensory-motor and communicative skills typical of 1-year-old infants. The project will be grounded in developmental research with human infants, using motion capture and computer vision technology to characterize the statistics of early physical and social interaction. An important goal of this project is to foster the conceptual shifts needed to rigorously think, explore, and formalize intelligent architectures that learn and develop autonomously by interaction with the physical and social worlds. The project may also open new avenues to the computational study of infant development and potentially offer new clues for the understanding of developmental disorders such as autism and Williams syndrome.
2008 — 2011
De Sa, Virginia; Makeig, Scott (co-PI); Poizner, Howard (co-PI); Todorov, Emanuel (co-PI)
Lifelike Visual Feedback For Brain-Computer Interface @ University of California-San Diego
NSF Award 0756828 (de Sa)
Brain computer interfaces (BCIs) translate basic mental commands into computer-mediated actions. BCIs allow the user to bypass the peripheral motor system and interact with the world directly through brain activity. These systems are being developed to aid users with motor deficits, which can stem from neurodegenerative disease (such as Lou Gehrig's disease, or ALS), injury (such as spinal cord injury), or even environmental restrictions that make movement difficult or impossible (such as astronauts in space suits). BCI systems typically require extensive user training to generate reproducible and distinct brain waves. Furthermore, until very recently, most BCI systems have interacted with the user in unintuitive or unnatural ways, such as moving a cursor or bar left and right by engaging in two unrelated forms of mental imagery, such as moving the right hand vs. the left foot.

Realistic visual feedback of interpreted motor action should substantially improve the usability and performance of BCI systems. This hypothesis is based on four observations: 1) humans have evolved to adapt their motor control in response to visual and proprioceptive feedback; 2) rapid motor adaptation is demonstrated in virtual reality experiments; 3) animals improve their neural signal when given visual feedback of their decoded neural activity; and 4) visual feedback of interpreted movement should activate the mirror neuron system, producing a stronger movement signal.

The proposed work aims to improve upon current BCI systems based on motor imagery by providing more natural and lifelike feedback. This task can be broken down into three main objectives: 1) analyze motor imagery with visual feedback in an offline setting; 2) develop algorithms for real-time EEG analysis; and 3) construct a real-time BCI system utilizing lifelike motion animations as visual feedback. While the results of objectives 1 and 2 should each contribute to the current state of the art in BCI systems, the largest gains in BCI performance and usability should come from introducing lifelike feedback into an online paradigm in the third objective. The proposed system can also be used to study learning and sensory-motor processing in normal subjects by studying their adaptation to the system. It may also inform more costly invasive recording experiments by helping to determine optimal placement of implants. All software written for EEG signal processing and analysis will be made available as add-ons to EEGLAB, which is distributed in accordance with University of California policy for research, education, and non-profit purposes. The EEGLAB project is also developing an EEG database in conjunction with the San Diego Supercomputer Center. Representative data sets will be released via this database in accordance with University of California policy.
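A minimal sketch, not the project's actual pipeline, of the kind of offline motor-imagery analysis described in the first objective: band-pass filter EEG epochs to the mu/beta band, extract log band-power features per channel, and classify left- vs. right-hand imagery with a linear discriminant. The epochs, labels, and sampling rate below are simulated placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256                                        # sampling rate in Hz (assumed)
n_epochs, n_channels, n_samples = 120, 32, 2 * fs

rng = np.random.default_rng(0)
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))  # placeholder EEG epochs
labels = rng.integers(0, 2, n_epochs)                            # 0 = left, 1 = right imagery

# Band-pass to 8-30 Hz, where sensorimotor rhythms are concentrated
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, epochs, axis=-1)

# Log band-power per channel as a simple per-epoch feature vector
features = np.log(np.var(filtered, axis=-1) + 1e-12)

# Linear classifier evaluated with cross-validation
clf = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```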
2010 — 2015
De Sa, Virginia
Divvy: Robust and Interactive Cluster Analysis @ University of California-San Diego
This project will develop software for the application of rapid, robust, and interactive dimensionality reduction and clustering algorithms to real-world datasets. The software, called Divvy, will provide parallel visualization of multiple dimensionality reduction and clustering techniques, flexible domain knowledge integration, customizable exemplar and outlier visualization, and dynamic indicators of cluster quality using theoretically sound cluster quality measures. Divvy will leverage recent advances in parallel and graphics processing unit computing in order to deliver near real-time calculation of partitions on many datasets. Divvy also will be used as a platform for psychophysical studies that investigate the role and behavior of human researchers in the data-analysis process.
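A minimal sketch, not Divvy itself, of the style of side-by-side analysis the software supports: run several dimensionality-reduction and clustering combinations on one dataset and compare a cluster-quality measure for each. The dataset and algorithm choices below are illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = load_digits(return_X_y=True)

reducers = {
    "PCA": PCA(n_components=2),
    "t-SNE": TSNE(n_components=2, init="pca"),
}
for name, reducer in reducers.items():
    embedding = reducer.fit_transform(X)                         # 2-D view of the data
    labels = KMeans(n_clusters=10, n_init=10).fit_predict(embedding)
    # Silhouette is one theoretically motivated cluster-quality measure
    print(f"{name}: silhouette = {silhouette_score(embedding, labels):.3f}")
```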
Machine learning techniques are increasingly essential for scientific analysis in many different fields. As datasets increase in size and dimensionality, scientists need access to tools that can help them quickly and easily perform exploratory data analysis and visualization. Divvy will allow a user to rapidly interact with and visualize the results of many different dimensionality-reduction and clustering algorithms through an intuitive interface. By collecting a broad set of cutting-edge machine-learning tools in one user-friendly interface, Divvy will enable substantial improvements in data analysis methodology for researchers outside of machine learning and related fields. This project will support workshops and tutorials at conferences outside the machine learning field in order to evangelize recent machine learning techniques and encourage adoption of Divvy.
2012 — 2016
De Sa, Virginia; Makeig, Scott (co-PI)
HCC: Small: Towards More Natural and Interactive Brain-Computer Interfaces @ University of California-San Diego
Brain computer interfaces (BCIs) translate basic mental commands into computer-mediated actions. BCIs allow the user to bypass the peripheral motor system and to interact with the world directly via brain activity. These systems are being developed to aid users with motor deficits stemming from neurodegenerative disease, injury, or even environmental restrictions that make movement difficult or impossible. One popular class of EEG-driven BCI systems is based on imagined movement. In these systems the user interacts with a computer through motor imagery, such as imagining hand vs. tongue movement. However, users' ability to control such a BCI is highly variable, and the factors involved are not fully understood. For example, EEG signals can change drastically from offline training to online use. Unfortunately, drift in EEG can lead to loss of control of the BCI, which leads to user frustration and further drift of the EEG signals from their training baselines.
The PI's goal in this project is to create a more robust BCI system by specifically addressing loss of control and system drift. Her hypothesis is that explicitly training on a signal that incorporates a user's satisfaction and, more importantly, dissatisfaction with the current performance may result in a more natural interface, and thereby lead to a reduction in loss of control and improved system usability and performance. The research will be carried out in three stages. First, active and passive EEG signals of dissatisfaction and satisfaction will be analyzed in a simulated online setting. Next, a real-time online system that recognizes dissatisfaction vs. satisfaction to control 1-D cursor movement will be constructed and system performance compared to that of a standard left/right motor imagery system. Finally, the best working parts of the dissatisfaction/satisfaction system will be integrated with the more standard left/right system, to create a better hybrid system. The (dis)satisfaction signals will be based on actively controlled motor imagery signals, interpreted emotion, and detection of error-like signals.
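A minimal sketch (assumed, not the project's method) of the first stage: classify EEG epochs time-locked to system feedback as reflecting satisfaction or dissatisfaction, in the spirit of error-related potential detection. The epochs are average-referenced, the post-feedback window is downsampled into coarse ERP features, and a regularized linear classifier is evaluated offline. All data below are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 256
n_epochs, n_channels, n_samples = 200, 32, fs   # 1 s of EEG after each feedback event

rng = np.random.default_rng(1)
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))  # placeholder EEG
labels = rng.integers(0, 2, n_epochs)                            # 1 = dissatisfaction

# Common average reference, then decimate to ~32 Hz for coarse ERP features
epochs = epochs - epochs.mean(axis=1, keepdims=True)
features = epochs[:, :, ::8].reshape(n_epochs, -1)

clf = LogisticRegression(C=0.1, max_iter=1000)                   # regularized linear classifier
print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```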
Broader Impacts: This project has the potential to vastly improve the robustness of EEG-based BCI systems by responding to natural signals of satisfaction and dissatisfaction, by being resistant to drift, and by naturally taking advantage of frustration, which is a common cause of loss of control. By training the BCI to recognize frustration, the PI expects to turn this typically negative trait into a positive. The project will support and train an under-represented minority graduate student and a post-doc in this important interdisciplinary area, and it will create projects for under-represented REU participants as well as for high school students through the PI's partnerships with the NSF Temporal Dynamics of Learning Center (TDLC, where she is a member of the faculty governing and admissions committee for the REU program) and the Preuss School (a charter school for low-income students with no college-educated parent). All software written for EEG signal processing and analysis, as well as data from the experiments, will be made available as add-ons to EEGLAB, which is distributed by co-PI Makeig.
2015 — 2018
De Sa, Virginia
CHS: Small: A Novel P300 Brain-Computer Interface @ University of California-San Diego
Brain computer interfaces (BCIs) translate basic mental commands into computer-mediated actions, thereby allowing the user to bypass the peripheral motor system and interact with the world directly via brain activity. These systems are being developed to aid users with motor deficits stemming from neurodegenerative disease, injury, or even environmental restrictions that make movement difficult or impossible. One of the most successful classes of EEG-driven BCI systems is the P300, which works by detecting user responses to flashed stimuli. In most P300 systems, a grid of letters and/or other symbols is presented and rows or columns of the symbols are flashed in random order; the user attends to the desired symbol (usually by silently counting when it flashes). A major problem with these grid-based P300 systems is that the user must ideally look at the flashed target, or at a minimum attend to the tiny letters, but late-stage ALS and other locked-in patients, for whom these systems are most needed, have trouble foveating targets and making controlled eye movements. The PI's hypothesis is that a BCI that flashes segments of one large letter can retain the combinatorial efficiency that comes with querying several letters at once, while having the advantage of one central focus (no gaze shifts required). This research aims to design and test this new segment speller idea. Project outcomes have the potential to vastly improve the usability of P300 EEG-based BCI systems for those with visual, sensory, and motor impairments. All software written for EEG signal processing and analysis will be made available as add-ons to EEGLAB, which is distributed by the Swartz Center for Computational Neuroscience (SCCN) at UCSD and is part of the Temporal Dynamics of Learning Center. Data will also be made available through the HeadIT data archive, which is also run by the SCCN.
This research task can be broken down into three main objectives: develop and test the response to flashed segments; improve single-trial classification of the responses to flashed segments; and design a logic for selecting segments and interpreting their responses. The developed system will provide another method for BCI speller control that does not depend on the ability to shift gaze. The PI argues that this method will have a higher information transfer rate than other space-invariant BCI spellers because it can probe multiple letters at once. Besides being advantageous for those with impaired eye movements and/or impaired vision, the method should have other advantages over standard P300 systems. When errors are made, they will tend to be to visually similar symbols. Incorporating language priors and active segment selection is easily accommodated, and this may result in higher information transfer rates with slower flash rates. In addition, the work on improving recognition of single-trial temporal EEG signals and on incorporating Bayesian language models into spellers could be useful for other types of brain-computer interfaces.
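A minimal, illustrative sketch (not the proposed system's actual logic) of how responses to flashed letter segments could be combined with a language prior to update a posterior over candidate letters. The toy alphabet, segment memberships, and detector probabilities below are made up; segment columns are ordered [top, middle, bottom, left vertical].

```python
import numpy as np

letters = ["E", "F", "L"]
prior = np.array([0.5, 0.2, 0.3])               # e.g., from a language model (assumed values)
# Which segments each letter contains (rows: letters, cols: segments)
segments = np.array([[1, 1, 1, 1],              # E
                     [1, 1, 0, 1],              # F
                     [0, 0, 1, 1]], dtype=float)  # L

# P300-detector output: probability that each flashed segment was attended
p_attended = np.array([0.9, 0.8, 0.3, 0.7])

# Likelihood of the detector outputs under each candidate letter,
# assuming conditionally independent segment responses
likelihood = np.prod(np.where(segments == 1, p_attended, 1 - p_attended), axis=1)

posterior = prior * likelihood
posterior /= posterior.sum()
for letter, p in zip(letters, posterior):
    print(f"P({letter} | flashes) = {p:.3f}")
```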
2016
Craig, Kenneth Denton (co-PI); De Sa, Virginia; Goodwin, Matthew Scott; Huang, Jeannie S
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Development of a New Technology For Assessing Pediatric Pain (Ntap) @ University of California San Diego
DESCRIPTION (provided by applicant): Advanced sensing and pattern recognition technologies open new possibilities for automated clinical assessment. Integration of this technology into the clinical arena is thus timely. In particular, there is promise in the use of such technologies to provide automated assessment of poorly quantifiable clinical variables such as pain. Suboptimal pain assessment is particularly prevalent in children, who often rely on pain assessment by proxy, which has been shown repeatedly to correlate poorly with patients' self-reports of pain. A number of observational scales have been developed for assessing pain by proxy. However, even some of the most widely used clinical scales were not developed from a rigorous psychometric perspective. Characterizations of the facial display in pain differ dramatically from each other, and differ substantially from empirical descriptions, leading to dramatically different estimates of pain. Suboptimal pain assessment in children results in delays in adequate pain management and unrelieved pain, which may contribute to significant morbidity and mortality in children. Recognition of this issue has led the World Health Organization to mandate that health entities recognize the rights of children to have their pain alleviated. In order to accomplish this goal, a more reliable and accurate method for pain assessment in this at-risk population is needed.

We propose the Development of a Novel Tool for the Assessment of Pediatric Pain (NTAP). The primary aim is to develop and evaluate an automated NTAP tool that utilizes novel computer vision and wearable physiology sensor technologies to estimate pain severity in children. The research team comprises expertise in computer vision (Bartlett & Littlewort), pediatric clinical research and child health outcomes (Huang), physiological measurement (el Kaliouby & Picard), and pain assessment in children (Craig). The project will collect a dataset of clinical pain in children following a known pain insult (pancreatitis and postoperative pain following appendectomy). The dataset will contain video, electrodermal signals, self-report of pain intensity, elapsed time since pain insult, and clinical severity ratings. Initial analysis of collected video data will be performed using our NSF-funded automated facial expression recognition system (CERT: Bartlett & Littlewort), and electrodermal activity (EDA) monitoring and recording will be performed by the wearable, wireless Q Sensor from Affectiva (el Kaliouby & Picard). Machine learning (the development of algorithms for making predictions based on a large set of examples/data) will be employed to develop a system for estimating pain from facial expression and electrodermal activity signals. Evaluation protocols will address validity, reliability, and reproducibility. The proposed NTAP tool will provide an automated pain estimation system for pediatric pain in the clinical setting that may improve pain assessment in children and provide a foundation for pain assessment in populations with communication limitations.
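A minimal, assumed sketch of the kind of multimodal pain-estimation model described above: facial-expression features (e.g., action-unit intensities from an automated coder such as CERT) concatenated with electrodermal activity (EDA) features and fed to a regressor predicting self-reported pain. All feature names, dimensions, and data below are simulated placeholders, not the project's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_visits = 150
face_features = rng.standard_normal((n_visits, 20))   # e.g., 20 facial action-unit outputs (assumed)
eda_features = rng.standard_normal((n_visits, 5))     # e.g., tonic level, response rate, ... (assumed)
pain_scores = rng.uniform(0, 10, n_visits)            # self-reported pain on a 0-10 scale

X = np.hstack([face_features, eda_features])          # simple feature-level fusion of the two modalities
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, pain_scores, cv=5, scoring="r2").mean())
```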
2018 — 2021
De Sa, Virginia
CHS: Small: Improving Usability and Reliability For Motor Imagery Brain Computer Interfaces @ University of California-San Diego
Brain-computer interfaces (BCIs) allow a user to interact with the world directly through brain activity. These systems are being developed to provide a communication method for users with severe motor impairments who are not able to control the movements of their arms, tongue, and even eyes well enough to communicate in the usual ways. While the cognitive abilities of these individuals are thought to be largely preserved, they are often described as being "locked in" to their bodies, unable to interact with the outside world through the usual means of typing, talking, etc. Electroencephalogram (EEG) based motor imagery BCIs attempt to distinguish brain activity by measuring electrical activity on the scalp caused by the user imagining moving different body parts. Commonly, such systems try to distinguish when the user is imagining moving their right or their left hand. Imagining different body parts can then be mapped to different tasks to allow a user to interact with the world (e.g., to turn a light on or off, or to move a robot arm to one object or another). The goal of this research is to make these types of systems easier for users to learn and more reliable, by improving the feedback that is given to the user and improving the classification of the brain signals. The work has the potential to open up this method of communication for more people, and project outcomes may have even broader impact by enabling us to learn more about brain signals that can be used for communication in BCIs. In addition, diverse graduate students will be trained in interdisciplinary research, and undergraduate students in the BCI class will work on small related projects, some of which will be presented to high school students to encourage and stimulate their interest in science.
The ability of users to generate discriminable control signals is highly variable. Moreover, environmental effects such as other brain processes, emotion, and fatigue affect current BCI systems. The goal of this project is to improve the usability of EEG-based motor-imagery brain-computer interfaces. To this end, a multi-pronged approach will be used. First, richer feedback will give users a better visualization of the effects of their imagery and a better chance to learn how to discriminate the motor imagery of different body parts. Second, the machine classification of the EEG signal during motor imagery will be improved. This will include looking for other signals that may provide additional insight into the top-level state and goals of the user, as well as developing new deep learning algorithms that can benefit from multi-task learning and transfer learning between individuals. Third, different closed-loop control methods will be explored to improve the total information transfer rate of the BCI and to reduce the number of training trials needed. The team's prior work has shown that interactive signals, which respond to the feedback provided by the system, are more robust to system estimation errors and non-stationarities. These signals can arise passively but can also be used actively by exploiting interactive commands that vary with the received feedback. The project will test whether active control of interactive commands or active control of standard commands with passive interactive recognition performs better.
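A minimal sketch (assumed, not the project's architecture) of a compact convolutional network for motor-imagery EEG classification, in the spirit of EEGNet-style models: a temporal convolution, a spatial convolution across channels, pooling, and a linear readout. Input shapes and layer sizes are illustrative, and the forward pass runs on a simulated batch.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=512, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32))  # filter over time
        self.spatial = nn.Conv2d(8, 16, kernel_size=(n_channels, 1))           # mix across channels
        self.pool = nn.AvgPool2d((1, 8))
        self.act = nn.ELU()
        with torch.no_grad():  # infer the flattened feature size from a dummy input
            n_feat = self._features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.readout = nn.Linear(n_feat, n_classes)

    def _features(self, x):
        x = self.act(self.temporal(x))
        x = self.act(self.spatial(x))
        x = self.pool(x)
        return x.flatten(1)

    def forward(self, x):                      # x: (batch, 1, channels, samples)
        return self.readout(self._features(x))

# One forward pass on a simulated batch of 2-second epochs at 256 Hz
model = TinyEEGNet()
logits = model(torch.randn(4, 1, 32, 512))
print(logits.shape)                            # torch.Size([4, 2])
```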
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025
Cottrell, Garrison; De Sa, Virginia
CRCNS US-Japan Research Proposal: Modeling the Dynamic Topological Representation of the Primate Visual System @ University of California-San Diego
The goal of this project is to understand how we see by building computer models that "see the way we do." It is obvious that we learn to talk; it is less obvious that we learn to see. Babies have roughly 20/400 vision, which means they are legally blind, and the world initially looks very blurry to them. They must learn to distinguish people (especially their mother and family) as well as toys, food, and other objects over months and years of development. How is it that we come to be able to see so well that we can play ball, read a book, and thread a needle? One way to understand how this happens is to build computational models that mimic the way the brain works. Artificial Intelligence has blossomed in recent years with the advent of deep neural networks, which are a very simplified model of the brain. They are capable of recognizing faces and objects, and are enabling the creation of self-driving cars. However, there are fundamental differences between these computer vision models and our own visual system that make them less robust. This project will add more features of the human visual system to these models. For example, we have a foveated retina, which enables high fidelity vision only within a small spot of the visual field, about the size of your thumbnail at arm's length. As a result, we move our eyes about 3 times a second in order to bring the world into focus. This project will build a computational model that has a foveated retina, "moves its eyes," and takes data from brain recordings into account.

Recent models of the visual system have been benchmarked against cortical recordings (CORnet, BrainScore), but appear to be reaching a plateau. To move beyond this, the next generation of models will have to come closer to the brain in both anatomy and physiology. This project will incorporate radical changes to convolutional networks as well as novel data from the primate visual system. Missing from most models of the visual system are: 1) biologically realistic lateral and feedback connections, including distinct pools of excitatory (E) and inhibitory (I) neurons with the full set of lateral interactions (E->E, E->I, I->E, I->I), and purely excitatory feedback connections; 2) the log-polar mapping from retina to V1, separating central from peripheral representations and adding rotation and scale invariance; and 3) saccades, adding dynamics to the representations. Missing from most neurophysiological recordings are 1) recordings from IT during free viewing of objects (saccading); 2) pharmacological suppression of central and peripheral V1 while recording from IT in order to measure their contributions to representations; and 3) simultaneous recording from multiple areas of IT providing crucial data on their interactions. This project will incorporate all of these advances in order to build biologically realistic vision systems.

A companion project is being funded by the National Institute of Information and Communications Technology, Japan (NICT).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
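A minimal sketch, assuming a simple model of the retina-to-V1 log-polar mapping mentioned above: sample an image on a grid that is logarithmic in eccentricity and uniform in polar angle around a fixation point, so central vision is oversampled relative to the periphery. The grid sizes and test image are illustrative.

```python
import numpy as np

def log_polar_sample(image, center, n_ecc=64, n_theta=128, r_min=1.0):
    """Resample a grayscale image onto a log-polar (eccentricity x angle) grid."""
    h, w = image.shape
    cy, cx = center
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
    radii = np.geomspace(r_min, r_max, n_ecc)               # log-spaced eccentricities
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]                                     # (n_ecc, n_theta) "cortical" image

# Rotation about fixation becomes a shift along the angle axis, and scaling
# becomes a shift along the eccentricity axis, in this representation.
img = np.random.rand(256, 256)
cortical = log_polar_sample(img, center=(128, 128))
print(cortical.shape)                                        # (64, 128)
```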
2023 — 2026
Eguchi, Amy; De Sa, Virginia; Cottrell, Garrison; Berg-Kirkpatrick, Taylor
RET Site: Research Experience For Teachers in Interdisciplinary AI @ University of California-San Diego
Artificial Intelligence (AI) is having an increasing impact on everyday life, from smart speakers and digital assistants to medical discoveries and judicial sentencing. Therefore, the ethical and social implications of AI technologies make it imperative that all citizens understand both the positive and negative impacts on the future of society. This new RET site at the University of California San Diego (UCSD) will provide research opportunities for high school teachers to deepen their understanding of the field of AI while developing materials to use in their classrooms. Teachers will be primarily recruited from high schools in districts serving students who are underrepresented in STEM and from low socio-economic backgrounds. The six-week summer program will include a two-week boot camp to prepare teachers to participate in an intensive AI research project across a range of applications. During the academic year, teachers will continue to engage with research faculty through monthly dinner seminars where they will exchange ideas and discuss the latest updates in AI research.

The intellectual focus of this RET Site from UCSD is Interdisciplinary Artificial Intelligence, with a focus on the applications of Deep Learning. The high school computer science and math teachers, mostly from the Computer Science Teachers Association San Diego Chapter (CSTA SD), which is headquartered at UCSD, will participate in a two-week summer "boot camp", followed by four weeks of intensive research with AI faculty and graduate students from Computer Science and Engineering, Cognitive Science, and Psychology. Additional objectives are to improve the ability of UCSD faculty to communicate ideas to the public through collaborating with and learning from teacher participants. The team also strives to use ideas from participants that can be incorporated into teaching UCSD AI systems. Finally, the main goal of the site is that teachers understand the ethical implications and current challenges of AI to promote awareness, discussion, and excitement among their students.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.