2022 — 2026
Lo, Joseph (co-PI); Rudin, Cynthia
FW-HTF-R: Interpretable Machine Learning for Human-Machine Collaboration in High-Stakes Decisions in Mammography
The specific objectives of the Future of Work at the Human-Technology Frontier program are (1) to facilitate convergent research that employs the joint perspectives, methods, and knowledge of computer science, engineering, learning sciences, research on education and workforce training, and social, behavioral, and economic sciences; (2) to encourage the development of a research community dedicated to designing intelligent technologies and work organization and modes inspired by their positive impact on individual workers, the work at hand, the way people learn and adapt to technological change, creative and supportive workplaces (including remote locations, homes, classrooms, or virtual spaces), and benefits for social, economic, and environmental systems at different scales; (3) to promote deeper basic understanding of the interdependent human-technology partnership to advance societal needs by advancing design of intelligent work technologies that operate in harmony with human workers, including consideration of how adults learn the new skills needed to interact with these technologies in the workplace, and by enabling broad workforce participation, including improving accessibility for those challenged by physical or cognitive impairment; and (4) to understand, anticipate, and explore ways of mitigating potential risks arising from future work at the human-technology frontier.

Breast cancer is one of the most common causes of illness and death in the US and worldwide. Breast cancer screening programs using annual mammography have been highly successful in lowering the overall burden of advanced cancers. In response to increasing caseloads, artificial intelligence is being widely adopted in the field of radiology. So far, these artificial intelligence systems have been opaque in the way they work, and when they make mistakes, radiologists find it difficult to understand what went wrong.
This project seeks to design an artificial intelligence system that can explain its reasoning process for deciding whether a woman's mammograms contain a suspicious breast lesion. This system can improve human-machine interactions by helping radiologists make better decisions about whether to recommend that the woman undergo a biopsy. It can also help to educate medical students and other trainees. Ultimately, this system can lead to better patient care, impacting both academic and community-based clinical practice.

This project does not aim to replace radiologists with black box models: its models are decision aids, rather than decision makers, following along the reasoning process that radiologists must use when deciding whether to recommend a biopsy. The approach includes the design of novel deep learning architectures that perform case-based reasoning with tailored definitions of interpretability. These models do not lose accuracy when compared to their black box counterparts. Separate models are proposed for each of the mammographic tasks of classifying mass margin, mass shape, and mass density. An important aspect of the project is building user-interface tools for radiologists to provide fine annotation, which mitigates the harmful effects of confounding. The models' innate interpretability will allow for better troubleshooting and easier analysis, which will be transformative not only for computer-aided diagnosis in medical imaging but also for computer vision in general.
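To make the case-based reasoning idea concrete, the sketch below illustrates the general "this looks like that" prototype mechanism used by interpretable architectures of this family: each class logit is a weighted sum of similarities between learned prototype patches and the most similar patch in the image's feature map, so every prediction can be traced back to specific image regions. All names, shapes, and the similarity formula here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototype_logits(feature_map, prototypes, class_weights):
    """Score one image by its similarity to learned prototype patches.

    feature_map  : (H, W, D) conv features for the image (hypothetical shape)
    prototypes   : (P, D)    learned prototype vectors
    class_weights: (P, C)    linear layer tying prototypes to class logits
    """
    H, W, D = feature_map.shape
    patches = feature_map.reshape(H * W, D)  # every spatial patch as a vector
    # squared L2 distance from each patch to each prototype: (H*W, P)
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # for each prototype, keep the distance to its *closest* patch,
    # then map distance to similarity (large when some patch matches well)
    nearest = d2.min(axis=0)                              # (P,)
    sim = np.log((nearest + 1.0) / (nearest + 1e-4))      # (P,)
    # interpretability: `sim[p]` says how strongly prototype p fired, and
    # argmin over d2[:, p] locates the image patch responsible for it
    return sim @ class_weights                            # (C,) class logits

# toy run with random features: 7x7 feature map, 10 prototypes, 2 classes
features = rng.normal(size=(7, 7, 16))
protos = rng.normal(size=(10, 16))
weights = rng.normal(size=(10, 2))
logits = prototype_logits(features, protos, weights)
print(logits.shape)  # (2,) — e.g., one logit each for benign vs. suspicious
```

In a trained model the prototypes would be patches drawn from real training mammograms, so the explanation "this margin region looks like that prototype from a known spiculated mass" falls directly out of the arithmetic above.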
Wide implementation of interpretable artificial intelligence in the medical field will be a game changer for human-machine interaction and can improve efficiency in the healthcare sector, helping not only to manage workloads for physicians but also to improve the quality of patient care.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.