2018 — 2020 |
Jog, Varun; Loh, Po-Ling |
N/A | Activity Code Description: No activity code was retrieved. |
EAGER: Developing a Theory For Function Optimization On Graphs Using Local Information @ University of Wisconsin-Madison
This project seeks to advance the frontiers of knowledge concerning efficient search on networked data. The investigators will devise new algorithms for optimizing a function defined over a graph; such algorithms will be easily implementable, broadly applicable, and backed by mathematical theory. Example application areas include faster web retrieval and cybersecurity, where it is important to identify key individuals in a large web of interconnected data. The project will also support the educational goals of the investigators by training multiple graduate students working at the interface of theoretical and applied data science research.
The work conducted in this project will establish new connections between continuous and discrete optimization. The investigators will explore notions of smoothness and convexity that can be used to characterize the convergence properties of their proposed optimization algorithms. Unlike optimization on graphs, optimization on continuous domains is backed by a mature theory developed over several decades; developing an analogous theory for discrete domains such as graphs poses many challenges, since it requires new notions of derivatives, Hessians, smoothness, and convexity, which have no obvious discrete analogs. The work is divided into the following sub-projects, each with a distinct set of research objectives: (1) develop and analyze iterative and local algorithms on graphs; (2) find suitable notions of smoothness and convexity on graphs, and analyze their consequences. In addition, the algorithms will be implemented and evaluated on various real-world networks.
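As a concrete illustration of the flavor of sub-project (1), the following minimal Python sketch implements a greedy local descent on a graph: at each step the algorithm uses only the function values at the current vertex and its neighbors, so the "discrete gradient" is just the set of f-differences along incident edges. The graph, the objective f, and the starting vertex are illustrative assumptions, not algorithms from the award itself.

```python
# A minimal sketch (illustrative, not the project's actual algorithm) of a
# local, iterative optimization method on a graph: move to the neighbor
# with the smallest function value until no neighbor improves on f.
import networkx as nx

def local_descent(G, f, start, max_steps=1000):
    """Greedy descent using only local (one-hop) function evaluations."""
    v = start
    for _ in range(max_steps):
        # The only information used here is f at v and at v's neighbors,
        # i.e. the f-differences along edges incident to v.
        best = min(G.neighbors(v), key=f, default=v)
        if f(best) >= f(v):   # v is a local minimum w.r.t. the graph
            return v
        v = best
    return v

# Toy usage: minimize squared distance to node 7 on a 10-node path graph.
G = nx.path_graph(10)
print(local_descent(G, lambda u: (u - 7) ** 2, start=0))  # -> 7
```

On a path graph this descent finds the global minimizer, but on general graphs it can stall at a local minimum, which is exactly the kind of behavior that discrete notions of smoothness and convexity, as in sub-project (2), would be needed to rule out.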
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2020 |
Jog, Varun; Loh, Po-Ling; McMillan, Alan Blair |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Can Machines Be Trusted? Robustification of Deep Learning For Medical Imaging @ University of Wisconsin-Madison
Machine learning algorithms have become increasingly popular in medical imaging, where highly functional algorithms have been trained to recognize patterns or features within image data sets and perform clinically relevant tasks such as tumor segmentation and disease diagnosis. In recent years, an approach known as deep learning has revolutionized the field of machine learning by leveraging massive datasets and immense computing power to extract features from data. Deep learning is ideally suited to problems in medical imaging and has enjoyed success in diverse tasks such as segmenting cardiac structures, tumors, and tissues. However, research in machine learning has also shown that deep learning is fragile, in the sense that carefully designed perturbations to an image can cause the algorithm to fail. These perturbations can be designed to be imperceptible to humans, so that a trained radiologist would not make the same mistakes. As deep learning approaches gain acceptance and move toward clinical implementation, it is therefore crucial to develop a better understanding of the performance of neural networks; specifically, it is critical to understand the limits of deep learning when presented with noisy or imperfect data.
The goal of this project is to explore these questions in the context of medical imaging, to better identify strengths, weaknesses, and failure points of deep learning algorithms. We posit that malicious perturbations, of the type studied in theoretical machine learning, may not be representative of the sort of noise encountered in medical images. Although noise is inevitable in a physical system, the noise arising from sources such as subject motion, operator error, or instrument malfunction may have less deleterious effects on a deep learning algorithm. We propose to characterize the effect of these perturbations on the performance of deep learning algorithms. Furthermore, we will study the effect of random labeling error introduced into the data set, as might arise from honest human error. We will also develop new methods for making deep learning algorithms more robust to the types of clinically relevant perturbations described above.
In summary, although the susceptibility of neural networks to small errors in their inputs is widely recognized in the deep learning community, our work will investigate these general phenomena in the specific case of medical imaging tasks and will conduct the first study of average-case errors that could realistically arise in clinical studies. Furthermore, we will produce novel recommendations for how to quantify and improve the resiliency of deep learning approaches in medical imaging.
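To make the contrast between the two perturbation regimes concrete, here is a minimal Python sketch that compares a worst-case perturbation (one fast-gradient-sign, FGSM, step) against a random perturbation of the same l-infinity magnitude. The tiny network, the random stand-in "image", and the budget eps are illustrative assumptions, not the project's actual models, data, or protocol.

```python
# A minimal sketch (assumed setup, not the project's protocol) comparing a
# worst-case adversarial perturbation with a random one of equal magnitude.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
y = torch.tensor([1])                             # stand-in label
eps = 0.03                                        # perturbation budget

# Worst-case (adversarial) perturbation: one FGSM step in the direction
# that increases the loss the most.
loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + eps * x.grad.sign()).clamp(0, 1)

# Average-case perturbation: random sign noise with the same l-inf norm,
# a crude proxy for physical noise such as motion or instrument error.
x_rand = (x + eps * torch.empty_like(x).uniform_(-1, 1).sign()).clamp(0, 1)

with torch.no_grad():
    for name, inp in [("clean", x), ("adversarial", x_adv), ("random", x_rand)]:
        print(name, loss_fn(model(inp), y).item())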
|