Area:
Machine Learning, Natural Language Processing
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Kai-Wei Chang is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2019 — 2021 | Chang, Kai-Wei; Nishi, Akihiro (co-PI) | N/A (no activity code was retrieved) | Ai-Dcl: Governing Bias in Ai System With Humans in the Decision Loop @ University of California-Los Angeles
Artificial intelligence (AI) systems have advanced dramatically in recent years and are now deployed in many real-world applications that touch our daily lives. Despite their remarkable performance, these technologies inherently carry the risk of aggravating the societal biases present in their training data. This can lead to unfair decisions based on sensitive demographic attributes (e.g., gender), as well as the unintentional generation of insulting outputs (e.g., tagging a person as an animal). This project develops a hybrid AI system that includes humans in the decision process (human-in-the-loop) in order to ensure decisions that are robust, unbiased, and fair. The results will enable intelligent machines to seamlessly integrate with human experts to: 1) identify various types of biases in the model predictions and 2) learn to mimic the behavior of human experts and take implicit societal factors into consideration when making automatic decisions.
The technical approach is based on developing a human-machine hybrid intelligence framework allowing human experts to censor and guide an AI agent in order to identify harmful decisions and correct biases. Specifically, the team will build a bias diagnosis module with a censor model to predict whether a decision is fair. When the censor model is uncertain, it will request that a human expert make judgments under an active imitation learning framework. The feedback from the bias diagnosis module will be used to improve the AI system and to correct the bias exhibited in its predictions. The approaches will be applied to various natural language processing and computer vision applications, including entity co-reference resolution (e.g., AI thinks a female pronoun is less likely to refer to a leader) and object detection in images (e.g., AI cannot identify a tie worn by a woman).
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Matching score: 1
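The censor-and-defer loop described in the abstract — score a decision's fairness, hand uncertain cases to a human expert, and fold the expert's judgment back into the censor — can be sketched in a few lines. This is a hypothetical illustration, not the project's actual system: the `CensorModel` class, the nearest-neighbour scoring rule, and the uncertainty band are all assumptions made for the example.

```python
# Hypothetical sketch of the human-in-the-loop censoring idea: a censor
# model scores each decision, defers to a human expert when uncertain,
# and folds the expert's labels back into its training data. All names
# and thresholds here are illustrative, not from the funded project.
from dataclasses import dataclass, field

@dataclass
class CensorModel:
    """Toy censor that scores a decision's fairness in [0, 1]."""
    labeled: list = field(default_factory=list)  # (decision, is_fair) pairs

    def fairness_score(self, decision):
        # Stand-in scoring rule; a real censor would be a learned model.
        if not self.labeled:
            return 0.5  # maximally uncertain before any expert feedback
        # Nearest-neighbour vote over collected expert judgments.
        nearest = min(self.labeled, key=lambda ex: abs(ex[0] - decision))
        return 1.0 if nearest[1] else 0.0

    def update(self, decision, is_fair):
        self.labeled.append((decision, is_fair))

def review(decision, censor, human_expert, uncertainty_band=(0.3, 0.7)):
    """Accept, reject, or defer a decision based on the censor's score."""
    score = censor.fairness_score(decision)
    lo, hi = uncertainty_band
    if lo <= score <= hi:
        # Censor is uncertain: query the human and learn from the answer.
        verdict = human_expert(decision)
        censor.update(decision, verdict)
        return verdict, "deferred"
    return score > hi, "automatic"
```

In use, the first uncertain case is routed to the (here simulated) expert, after which similar cases are handled automatically — the active-learning flavour of the framework: human effort is spent only where the censor cannot decide.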