2007 — 2011
Chen, Yixin
Planning With Complex Constraints and Preferences by Nonlinear Programming and Constraint Partitioning
Proposal 0713109 "Planning with Complex Constraints and Preferences by Nonlinear Programming and Constraint Partitioning" PI: Yixin Chen, Washington University
ABSTRACT
This project aims to develop efficient algorithms for solving complex planning problems that arise in many applications, such as manufacturing, aerospace engineering, emergency planning, and workflow scheduling. The two main goals of this research are to improve the expressiveness of planning models and to reduce computational cost. The key innovation of this project is a unified, efficient, and extensible framework for complex planning with trajectory constraints and preferences.
Most existing planning techniques have difficulty supporting more expressive models due to limitations in their formulations and search methods. This project will develop a new nonlinear programming model for planning. It will provide a formalism to accommodate complex features such as trajectory constraints and numerical objectives. To reduce the search cost, this project is developing a constraint partitioning approach that decomposes the constraints of a planning problem into much simpler subproblems. Previously used in the SGPlan planner, this approach has proven effective. However, with the new complex features, the original partitioning strategies become inadequate. The project will develop new nonlinear planning formulations and partitioning methods in order to exploit the structure of complex planning problems. Constraint partitioning may achieve orders-of-magnitude speedups and greatly alleviate the exponential explosion of the search space.
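To illustrate the partitioning idea, the toy sketch below splits a constraint set into two loosely coupled subproblems that are solved separately, after which a single cross-partition coupling constraint is resolved over the partial solutions. The variables, domains, and constraints are invented for illustration and do not reflect SGPlan's actual formulation.

```python
from itertools import product

# Toy constraint partitioning (illustration only, not SGPlan's implementation):
# two loosely coupled subproblems are solved separately, then the single
# cross-partition coupling constraint is resolved over the partial solutions.

domains = {v: range(4) for v in "abcd"}            # four variables, domain {0..3}
local1 = [(("a", "b"), lambda a, b: a < b)]        # constraints inside partition 1
local2 = [(("c", "d"), lambda c, d: c + d == 3)]   # constraints inside partition 2
coupling = [(("b", "c"), lambda b, c: b != c)]     # constraint crossing partitions

def solve(vars_, cons):
    """Brute-force all assignments of vars_ that satisfy cons."""
    sols = []
    for vals in product(*(domains[v] for v in vars_)):
        asg = dict(zip(vars_, vals))
        if all(f(*(asg[v] for v in vs)) for vs, f in cons):
            sols.append(asg)
    return sols

# Stage 1: solve each partition independently (two searches of size 4**2).
part1 = solve(("a", "b"), local1)
part2 = solve(("c", "d"), local2)

# Stage 2: combine partial solutions, keeping those satisfying the coupling.
combined = [{**p, **q} for p in part1 for q in part2
            if all(f(*({**p, **q}[v] for v in vs)) for vs, f in coupling)]
```

Here each partition enumerates only 4**2 = 16 assignments instead of the 4**4 = 256 of the monolithic problem, which is the source of the speedups the abstract anticipates.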
This research has several broader impacts. There are many potential applications, ranging from production management to aerospace engineering. This project will apply its results to mobile computing to support the rapid development of mobile devices such as cellular phones and PDAs. The planners to be developed will be made publicly available to provide state-of-the-art AI tools for users from various disciplines.
2010 — 2015
Chen, Yixin
CDI Type I: Collaborative Research: Machine Learning in Taxonomic Research @ University of Mississippi
Intellectual Merit It is estimated that less than 10 percent of the world's species have been described, yet species are being lost daily due to human destruction of natural habitats. Considering the fast pace of habitat destruction, experts fear that many species will become extinct before they can be discovered and formally described. The task of describing the Earth's remaining species is made harder by the shrinking number of practicing taxonomists and the very slow pace of traditional taxonomic research. In describing new species of animals, taxonomists typically rely on specimens deposited in natural history museums. To diagnose a new species as distinct from all of its known relatives, they must make careful counts and measurements on large numbers of specimens from multiple populations across the geographic ranges of both known and newly discovered species. The process is laborious and can take years or even decades to complete, depending on the geographic range of the species. In this project, the research team will develop new machine learning methods for taxonomic research, with the specific aim of fundamentally increasing the pace of taxonomic revision.
The scientific research will focus on two areas: species identification and new species discovery. In distinguishing a species from others, taxonomists must identify a set of diagnostic characters that distinguishes the species in question from all of its known relatives. To automate and expedite this laborious process, the team will explore existing feature subset selection techniques, and will develop new ones. Images of categorized specimens will be used to train a collection of statistical models representing the known taxonomic grouping of organisms. An "optimal" set of body-shape characters will be automatically identified. New species discovery is the most important research objective in taxonomy. From a machine learning point of view, detecting new species is fundamentally different from the problem of recognizing known species because, by definition, the training set contains no prior knowledge of a new species. The research team will formulate new species discovery as a novelty detection problem, and will develop an efficient novelty detection framework for taxonomic tasks.
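The novelty-detection formulation can be sketched in a few lines. The detector below, nearest-neighbor distance thresholding on invented two-dimensional "body-shape" measurements, is an illustrative assumption rather than the project's actual algorithm: a specimen is flagged as a candidate new species when it lies unusually far from every known specimen.

```python
import math

# Novelty detection sketch: nearest-neighbor distance thresholding on invented
# 2-D "body-shape" measurements (an assumption, not the project's algorithm).

known = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1),   # specimens of known species A
         (5.0, 5.0), (5.1, 4.8), (4.9, 5.2)]   # specimens of known species B

def nearest_dist(x, pts):
    return min(math.dist(x, p) for p in pts)

# Calibrate a threshold from within-collection nearest-neighbor distances.
threshold = 2 * max(nearest_dist(p, [q for q in known if q != p]) for p in known)

def is_novel(specimen):
    """Flag a specimen whose nearest known specimen is unusually far away."""
    return nearest_dist(specimen, known) > threshold
```

A specimen close to known species A, such as (1.05, 1.0), is not flagged, while one far from both clusters is.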
Broader Impact The project, if carried out successfully, will demonstrate the fruitfulness of fusing technologies from different fields. On the biology side, the project will demonstrate that machine learning techniques can assist taxonomists and evolutionary biologists in various research tasks, and hence fundamentally accelerate the pace of taxonomic revision. On the computer science side, researchers will benefit from the computational challenges motivated by real-world biological problems and from the database created in the project. New machine learning algorithms will be developed to impact taxonomic research. The PIs will integrate the research with their educational activities at both the University of Mississippi and Tulane University. The PIs will devote additional effort to mentoring female and underrepresented minority students involved in exploring cutting-edge interdisciplinary research.
2010 — 2014
Lu, Chenyang (co-PI); Chen, Yixin
Nets: Small: Generalized Submodular Optimization For Integrated Networked Sensing Systems
Wireless sensor networks are evolving from specialized platforms into shared infrastructure for multiple applications. Shared sensor networks offer flexibility, adaptivity, and cost-effectiveness through dynamic resource allocation among different applications, but they face the critical need to optimize Quality of Monitoring (QoM) subject to resource constraints. The emerging QoM optimization problems in shared sensor networks are computationally challenging due to their nonlinear, discrete, and dynamic nature. This project exploits submodularity, a key property that many QoM attributes of physical phenomena exhibit. The project develops efficient and theoretically sound distributed approaches for QoM optimization through a novel integration of submodular optimization in a market-based approach. It further studies online algorithms that can quickly adapt to network and application dynamics, partition-based algorithms that scale effectively to large-scale networks, and new optimization algorithms that can accommodate the optimization of energy consumption and heterogeneous networks. Expected results of this project include theory, algorithms, and software for managing and optimizing a new generation of integrated networked sensing systems with high societal and environmental impact. The project also deploys an integrated sensor network for environmental monitoring at the Tyson Research Center of Washington University for environmental and ecological research. This study promotes interdisciplinary collaboration with environmental and biological scientists, as well as outreach activities for high-school students.
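For monotone submodular objectives such as coverage-style QoM functions, a simple greedy selection achieves the classic (1 - 1/e) approximation guarantee under a cardinality budget. The sketch below illustrates this on an invented sensor-coverage instance; the sensor sets and the coverage-based QoM function are assumptions, not the project's formulation.

```python
# Greedy maximization of a monotone submodular Quality-of-Monitoring function
# under a cardinality budget (sensor coverage sets are invented; the greedy
# rule enjoys the classic (1 - 1/e) approximation guarantee in this setting).

sensors = {            # sensor -> set of regions it monitors
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
    "s4": {1, 6},
}

def qom(selected):
    """QoM as the number of distinct regions covered (monotone submodular)."""
    covered = set()
    for s in selected:
        covered |= sensors[s]
    return len(covered)

def greedy(budget):
    """Repeatedly add the sensor with the largest marginal QoM gain."""
    chosen = []
    for _ in range(budget):
        best = max((s for s in sensors if s not in chosen),
                   key=lambda s: qom(chosen + [s]))
        chosen.append(best)
    return chosen
```

With a budget of two sensors, the greedy rule picks "s1" and then "s3", covering all six regions.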
2012 — 2015
Levine, David; Chen, Yixin
ICES: Small: Artificial Human Agents For Virtual Economies
The goal of the project is to replace human agents with artificial agents in studying two-player games. This project, if successful, will greatly enhance the ability of economists to test economic theories by partially replacing laboratory experiments with simulations. Over the last decades, laboratory studies have proven invaluable both for the validation (and invalidation) of economic theories and for the practical purpose of testing mechanisms (such as auctions) in the laboratory prior to practical implementation. The ability to use simulations with artificial agents in place of laboratory experiments with live human beings will both reduce the cost of validation and testing and make it possible to explore quickly a much wider range of theories and policy alternatives. It will also enhance our understanding of human behavior and enrich our knowledge of the connection between human and artificial intelligence.
Agent-based modeling is an emerging and attractive approach to validating economic theories. Existing research has focused on simple and naive agents. This project proposes, as the next step, to develop artificial human (economic) agents capable of mimicking the behavior of human laboratory subjects in the context of two-player simultaneous-move games. Substantial and detailed data are available on human play under these conditions. Existing algorithms fall only slightly short of the ability to mimic human play but do not yet implement fully autonomous agents. Based on hidden Markov models, the PIs propose to develop and investigate a framework of belief learning that is broad enough to encompass many existing learning algorithms, including reinforcement learning, fictitious play, and smooth fictitious play. Moreover, based on this framework, the PIs propose to derive more sophisticated learning methods to fully develop artificial agents. This research will pursue two important directions. First, the PIs will introduce the initial calibration of priors based on available information and a cognitive hierarchy model. Second, the PIs will allow for the reconsideration of the existing model when "surprises" occur. The project will lead to the next stage of research in both economics and computer science by broadening the class of artificial agents to attack broader and more economically important tasks. It will also enrich research on graphical model learning for artificial intelligence.
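Fictitious play, one member of the belief-learning family named above, can be sketched in a few lines: each player tracks the empirical frequency of the opponent's past actions and best-responds to it. The 2x2 coordination payoffs and priors below are textbook toys, not the PIs' hidden-Markov-model framework.

```python
# Fictitious play on a 2x2 coordination game: each player best-responds to the
# empirical frequency of the opponent's past actions. Payoffs and priors are
# textbook toys, not the project's hidden-Markov-model framework.

payoff = [[1, 0],     # row player's payoff matrix (symmetric game)
          [0, 1]]

def best_response(opp_counts):
    """Best action against the empirical distribution of opponent actions."""
    expected = [sum(payoff[a][b] * opp_counts[b] for b in (0, 1)) for a in (0, 1)]
    return expected.index(max(expected))

counts = [[1, 0], [0, 1]]   # counts[i] = player i's observations of the opponent
for _ in range(50):
    a0 = best_response(counts[0])
    a1 = best_response(counts[1])
    counts[0][a1] += 1      # player 0 observes player 1's action, and vice versa
    counts[1][a0] += 1
```

Despite the conflicting priors, play converges quickly to one of the coordinated equilibria (both players settling on action 0).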
2013 — 2017
Hu, Fei; Li, Shuhui (co-PI); McCallum, Debra (co-PI); Chen, Yixin; Zhou, Hongbo (co-PI)
Edu: Collaborative: When Cyber Security Meets Physical World: a Multimedia-Based Virtual Classroom For Cyber-Physical Systems Security Education to Serve City / Rural Colleges @ University of Alabama Tuscaloosa
This project establishes a multimedia-based virtual classroom with a virtual lab teaching assistant for the education of cyber physical system (CPS) security. Such a virtual classroom helps college students in resource-limited rural areas to learn the latest CPS security knowledge via an on-line peer-to-peer learning environment with other students from larger schools. This project includes three novel contributions: (1) all learning materials embrace an application-driven learning approach, with examples from diverse areas such as healthcare, renewable energy, and industrial controls used as the basis for CPS attack analysis; (2) with the help of a multimedia company, the project is building interesting virtual classroom lectures; and (3) to meet the open access lab requirements, the project is building interactive virtual lab helper software to enable remote students to conduct virtual hardware lab experiments and obtain help using multimedia tools. The design encourages innovative learning in several ways: developed labs require an iterative process with idea incubation to force students to follow a more mature creative design process; all labs intentionally include some ambiguity to encourage the search for multiple answers to a single problem; and the 3E (Explain-Exploit-Explore) based pedagogy is adopted for all CPS security labs/projects. The basic level labs emphasize concept explanations. The intermediate level senior projects require students to exploit previous knowledge to perform a multidisciplinary CPS security task. The advanced-level labs require independent exploration to reach creative solutions. The resulting teaching methodologies can be extended to other rural colleges and this project uses a proactive dissemination plan to achieve this aim.
2013 — 2017
Lu, Chenyang; Chen, Yixin
Nets: Small: Real-Time Wireless Sensor-Actuator Networks
Recent years have witnessed rapid adoption of wireless sensor-actuator networks (WSANs) in process industries. Real-world deployments of industrial standards such as WirelessHART have demonstrated the feasibility of achieving reliable wireless communication in industrial environments. However, existing wireless technologies still face significant challenges in supporting process control applications that impose stringent real-time requirements on network communication. Moreover, optimizing control performance over wireless networks is challenging due to the coupling between control performance and communication delays, as well as inter-dependencies among multiple layers of the network protocol stack. To meet these critical challenges faced by process industries, this project is developing a control-aware network design approach for WSANs. This holistic approach integrates wireless networking, real-time scheduling, and nonlinear optimization in a unified framework with three novel technologies: (1) an efficient local search approach that leverages real-time scheduling analysis to optimize control performance over wireless networks; (2) an agile multilayer optimization approach that adapts transmission scheduling, routing, and sampling rate selection in a hierarchical fashion in response to network changes; (3) a hierarchical network architecture that enables large-scale real-time wireless control networks. The project implements and evaluates the network architecture and protocols using realistic industrial process control applications on physical wireless testbeds through collaboration with Emerson, an industrial leader of the WirelessHART standard.
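As a rough illustration of coupling rate selection with real-time scheduling analysis, the sketch below lowers sampling rates via local search until the network passes a schedulability test. The flows, transmission costs, rate menus, and the classic Liu-Layland utilization bound are stand-in assumptions; the project's actual schedulability analysis for WirelessHART is more involved.

```python
# Illustration only: the abstract's "local search leveraging real-time
# scheduling analysis" is stood in for by rate selection under the classic
# Liu-Layland utilization bound; flow costs and rate menus are invented.

flows = {"loop1": {"cost": 0.05, "rates": [10.0, 5.0, 2.0]},    # cost: seconds
         "loop2": {"cost": 0.06, "rates": [8.0, 4.0, 1.0]}}     # rates: Hz

def utilization(rates):
    return sum(flows[f]["cost"] * r for f, r in rates.items())

def schedulable(rates, bound=0.69):    # ~ln 2, rate-monotonic bound
    return utilization(rates) <= bound

def select_rates():
    """Greedily lower the rate whose reduction sacrifices the least rate."""
    idx = {f: 0 for f in flows}        # index into each flow's rate menu

    def current():
        return {f: flows[f]["rates"][i] for f, i in idx.items()}

    while not schedulable(current()):
        options = [(flows[f]["rates"][i] - flows[f]["rates"][i + 1], f)
                   for f, i in idx.items() if i + 1 < len(flows[f]["rates"])]
        if not options:
            return None                # no schedulable assignment exists
        idx[min(options)[1]] += 1      # smallest rate sacrifice first
    return current()
```

On this instance the search keeps loop1 at its fastest rate and slows loop2 until the utilization bound is met.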
Successful completion of this research will enable a broad range of process control applications with high societal and industrial impacts such as oil refining, petrochemicals and water treatment. Collaboration with industrial partners will lead to real-world deployments and technology transfer to process industries through industrial standards. This research will produce an open-source implementation of the real-time wireless protocol stack, which will further broaden the impacts on research, education and industry.
2013 — 2017
Lu, Chenyang (co-PI); Chen, Yixin; Bailey, Thomas; Kollef, Marin
Sch: Exp: Integrated Real-Time Clinical Deterioration Prediction For Hospitalized Patients and Outpatients
Unexpected deaths of hospitalized patients continue to be common despite evidence that at-risk patients often show signs of clinical deterioration hours in advance. Existing early warning systems have significant shortcomings because of their poor reliability and the need for monitoring by overburdened clinical staff. Almost 1 in 5 patients is readmitted within 30 days of hospital discharge, at an annual cost to taxpayers of $15-17 billion. Hence, there is an urgent need for automated early warning systems that can provide timely and accurate information.
The project seeks to integrate and mine patient data from multiple sources, including routine clinical processes, bedside monitoring, at-home sensing, and existing electronic data sources to facilitate optimized patient-centered decision making. Specifically, the project aims to develop techniques and systems to provide early warning of clinical deterioration and hospital readmission of discharged patients using a novel two-tier system. Tier 1 uses data mining algorithms on existing hospital data records to identify patients who are most at risk of clinical deterioration and readmission. Tier 2 combines clinical data with sensor data to improve the accuracy of predictions on patients who are identified as being at risk by Tier 1.
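The two-tier structure can be sketched as a simple pipeline: a cheap Tier 1 screen over existing records selects at-risk patients, and only those are passed to a Tier 2 check that incorporates sensor data. The patient records, features, and hand-set thresholds below are invented for illustration; the project's actual tiers are learned data mining models, not rules.

```python
# Toy two-tier screening pipeline mirroring the abstract's design; features,
# thresholds, and patients are invented (the real tiers are learned models).

patients = [
    {"id": 1, "age": 80, "prior_admits": 3, "heart_rate": 115},
    {"id": 2, "age": 35, "prior_admits": 0, "heart_rate": 72},
    {"id": 3, "age": 68, "prior_admits": 2, "heart_rate": 88},
]

def tier1(p):
    """Cheap screen over existing EHR fields: favors recall over precision."""
    return p["age"] > 60 or p["prior_admits"] >= 2

def tier2(p):
    """Refined check adding (simulated) bedside sensor data; run only on
    patients that Tier 1 flags."""
    return p["heart_rate"] > 100

at_risk = [p for p in patients if tier1(p)]
alerts = [p["id"] for p in at_risk if tier2(p)]
```

Tier 1 flags patients 1 and 3; Tier 2 then narrows the alert to patient 1, limiting the monitoring burden on clinical staff.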
Key innovative aspects of the project include: (1) new data mining algorithms for predicting clinical deterioration and readmission from heterogeneous, multi-scale, and high-dimensional data streams; (2) an alert explanation system that identifies the most relevant prognostic factors and suggests possible interventions based on novel feature ranking algorithms; (3) a novel scheme based on cost-sensitive learning that dynamically reconfigures the sensors to achieve a good tradeoff between monitoring cost and effectiveness. The resulting advances over healthcare practices currently employed in general wards offer several key benefits, including (1) reduced workload on clinical staff; (2) the capability for continuous monitoring of ward patients, which can be used to triage nursing efforts in order to optimize the desired clinical outcomes; and (3) the capability to extend hospital monitoring to patients at high risk of hospital readmission, with the attendant benefit of reducing readmissions through early preemptive therapeutic interventions.
Plans for transitioning the technology to clinical practice include rigorous evaluation of the technology in real-world settings and broad dissemination of the algorithms and their open-source implementations. Some potential broader impacts of the project include improved clinical outcomes, reduced patient mortality rates and healthcare costs, and enhanced opportunities for research-based interdisciplinary training of graduate students in health informatics. Additional information about the project can be found at: http://www.cse.wustl.edu/~wenlinchen/project/clinical/
2014 — 2017
Chen, Yixin; Tang, Yinjie
ABI Innovation: Integration of Flux Balance Analyses With Data Mining and 13C-Labeling Experiments to Decipher Microbial Metabolisms
Microorganisms play important roles in ecology, biogeochemical cycles, human diseases, bioremediation, and bioenergy. Currently, high-throughput sequencing is being used to map the genomes of microbial species. However, the DNA sequence of a microbe does not provide a complete understanding of its functioning. To bridge the knowledge gap between genotype and phenotype, metabolic flux analysis is an important phenomic tool for investigating in vivo enzymatic activities. Analysis of metabolic fluxes can identify bottleneck pathways in the biosynthesis of desirable products, decipher the function of unknown genes, discover new enzymes, and reveal the mechanisms of diseases. In this project, a user-friendly metabolic flux analysis platform will be developed to provide a robust cyberinfrastructure for biologists, helping them analyze large amounts of phenomic data efficiently. In addition, this platform includes an open-source database for storing and disseminating flux analysis data on diverse microbial metabolisms. This database can assist metabolic flux analyses of new microorganisms, enabling the systems biology community to benefit from the fast advancement of "Big Data" technology. Ultimately, this platform can be a springboard for future development of high-throughput methodologies to analyze diverse biological systems. The broader impacts of this project include not only educational efforts for high school students (especially minority and underrepresented groups) to promote their inquiry and interest in STEM fields, but also the development of a wiki-style website and discussion forum for flux analysis technologies and applications.
Current systems biology studies (for example, transcriptomics) rely on model organisms for genome annotation and have limited power to reveal novel metabolic pathways. In addition, post-transcriptional and post-translational regulation hinders the phenotypic information that can be determined from genomic approaches. It is therefore of great value to build a new metabolic flux analysis platform to decipher microbial metabolisms and metabolic regulation. This project has three tasks to develop novel capabilities for metabolic flux analysis. The first task is to build a comprehensive carbon-fate map for 13C-assisted pathway identification and to provide effective computational algorithms for 13C-metabolic flux analysis, so that the platform can precisely quantify pathway activities using the labeling information from 13C-tracer experiments. The second task is to build a database of published microbial fluxomic results, which can then be analyzed via data mining approaches to predict pathway linkages and novel microbial metabolisms in new species. The third task is to create new approaches that integrate genome-scale flux balance analysis with both 13C-labeling and data mining results to precisely characterize microbial metabolisms. Once the new platform is developed, it will be tested using case studies. Based on available experimental data on Shewanella metabolic flux analyses, the model's applicability will be validated and improved. Ultimately, this new platform can be widely used by the systems biology community to provide new insights into diverse microbial species. The project web link is: http://tang.eece.wustl.edu/
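A carbon-fate map can be represented as an atom mapping from substrate carbon positions to product carbon positions, through which a 13C label is propagated. The simplified glucose-to-pyruvate mapping below is a toy illustration of the data structure, not the project's comprehensive map.

```python
# Toy carbon-fate map: each product carbon is traced to a substrate carbon
# position. The simplified glucose -> 2 pyruvate mapping below is illustrative,
# not the project's comprehensive map.

glycolysis = {        # product -> substrate (glucose) position per product carbon
    "pyr_a": [3, 2, 1],   # pyruvate C1 from glucose C3, C2 from C2, C3 from C1
    "pyr_b": [4, 5, 6],
}

def propagate(labeled_positions, mapping):
    """Which product carbons carry 13C, given labeled substrate carbons."""
    return {prod: [pos in labeled_positions for pos in srcs]
            for prod, srcs in mapping.items()}

# [1-13C]glucose: only substrate carbon 1 carries the label.
labels = propagate({1}, glycolysis)
```

For [1-13C]glucose, the label appears only on the third carbon of one pyruvate, which is the kind of positional information 13C-tracer experiments exploit to resolve pathway activities.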
2015 — 2018
Chen, Yixin
III: Small: Collaborative Research: Towards Interpretable Machine Learning
This research project investigates the design and development of machine learning algorithms whose decisions are interpretable by humans. As predictions of machine learning models are increasingly used in making decisions with critical consequences (e.g., in medicine or economics), it is important that decision makers understand the rationale behind these predictions. The project defines interpretable algorithms through three key properties. Simplicity: the algorithm is intuitively comprehensible by users who are not experts in machine learning. Verifiability: there is a clear relationship between input features and model output. Actionability: for a given input and desired output, the user should be able to identify changes to the input features that transform the model prediction to the desired output. The project investigates how to design distance metrics supporting simplicity and verifiability, as well as algorithms to identify input changes that change outputs. The project will be evaluated in a medical context, addressing the problem of early detection of hospital patients at risk of sudden deterioration.
This work builds on the well-understood k-Nearest-Neighbor classifier, which would inherently seem to provide simplicity and verifiability. The challenge arises in high dimensions, e.g., in document classification, where differences are spread across more dimensions than are humanly comprehensible. The project uses novel dimensionality reduction approaches to create dissimilarity metrics that are interpretable and accurate. Visualization techniques to present these data will be explored, including techniques supporting more complex classification approaches such as ensembles. The project investigates novel methods for delivering actionability in machine learning algorithms by identifying changes that can truly transform an entity's class membership, a problem that has recently been identified as surprisingly difficult. A secondary outcome will be improvements in classifier robustness, as small changes that change class membership are a good indication of non-robustness.
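The actionability property can be made concrete with a deliberately simple stand-in: a 1-nearest-neighbor classifier over invented two-dimensional data, with a brute-force search over single-feature edits for the smallest change that flips the predicted class. The project's methods for finding class-transforming changes are more sophisticated than this sketch.

```python
import math

# Actionability sketch on a 1-nearest-neighbor classifier over invented 2-D
# data: search single-feature edits for the smallest change that flips the
# predicted class (a deliberately simple stand-in for the project's methods).

train = [((1.0, 1.0), "low"), ((1.5, 0.8), "low"),
         ((4.0, 4.2), "high"), ((4.5, 3.9), "high")]

def predict(x):
    """Label of the nearest training point (1-NN)."""
    return min(train, key=lambda t: math.dist(x, t[0]))[1]

def actionable_change(x, target, step=0.5, max_steps=20):
    """Smallest single-feature shift (in multiples of `step`) reaching `target`.

    Returns (feature_index, delta), or None if no such edit is found.
    """
    for k in range(1, max_steps + 1):
        for i in range(len(x)):
            for sign in (+1, -1):
                cand = list(x)
                cand[i] += sign * k * step
                if predict(tuple(cand)) == target:
                    return i, sign * k * step
    return None
```

The size of the returned change also hints at robustness: a point whose class flips under a tiny edit sits near a decision boundary.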
2016 — 2018
Avidan, Michael; Chen, Yixin
Sch: Int: Anesthesiology Control Tower: Forecasting Algorithms to Support Treatment (ACTFAST)
There is a significant public health concern in the United States regarding major complications and death following surgery. Forty million Americans undergo surgery yearly. Approximately five percent die within a year of their operation, and roughly ten percent suffer major in-hospital morbidity (e.g., stroke, heart attack, pneumonia, renal failure, wound infection). Early recognition of risk and appropriate management could often prevent or modify these adverse outcomes. We believe that this represents an exciting scientific opportunity. Modern intraoperative monitoring yields a wealth of data from thousands of operating rooms across the US. Integration and real-time analysis of these data streams has the potential to revolutionize perioperative care. We propose to develop, validate, and assess machine-learning forecasting algorithms that predict adverse outcomes for individual patients. The forecasting algorithms will be based on data derived from various sources, including the patient's electronic medical record, the array of physiological monitors in the operating room, and the evidence-based scientific literature. These algorithms will facilitate patient-specific clinical decision support through an innovative approach: an anesthesiology control tower. The anesthesiology control tower for the operating room will be conceptually similar to an air traffic control tower for a busy airport. We are optimistic that the ambitious goal of developing forecasting algorithms for individual surgical patients can be realized, since members of our team have already developed and validated similar forecasting algorithms for critically ill hospital patients. Modifiable risk factors for adverse events could be detected early during surgery, allowing targeted interventions and preventive measures.
The ACTFAST study will provide important information on the potential utility of incorporating forecasting algorithms into routine surgical care, including in under-resourced healthcare settings. This project will also yield important educational benefits. There will be tremendous learning for the students who help to develop and validate the forecasting algorithms. Furthermore, the control tower concept is a disruptive educational innovation, which will equip anesthesiology trainees with a new ability to provide simultaneous care to multiple surgical patients. It is notoriously difficult to construct high fidelity scientific models for individual humans, as we are complex biological systems. Ultimately, the success of this ambitious project, which engages interdisciplinary perspectives and applies sophisticated forecasting algorithms to clinical decision support, will have substantial scientific and clinical impact.
The objective of this investigation is to exploit our experience in running innovative machine learning algorithms, including filtering and outcome-related models, to build forecasting algorithms tailored to individual surgical patients. Our central hypothesis is that, with sufficient knowledge of patient characteristics coupled with repeated, high-fidelity time-series data from the intraoperative electronic medical record, advanced models can be constructed for individual patients that forecast risk for adverse postoperative outcomes. First, using a training dataset, we will apply data mining and machine-learning methods that we have previously validated to extract useful information from electronic health records and real-time physiological variables. We will develop algorithms using hybrid-learning techniques that combine the strengths of non-parametric (generative) models, such as histogram and kernel density estimation, and parametric (discriminative) models, such as support vector machines, logistic regression, and kernel machines, to improve predictions of adverse perioperative outcomes. The goal is to deliver superior prediction quality, with good interpretability and high computational efficiency, that supports fast processing of big data. Second, using a testing dataset, we will validate the predictive accuracy of the developed algorithms by determining their reliability in forecasting adverse outcomes. Evaluation methods include hold-out sampling, cross-validation, and bootstrap sampling.
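The generative/discriminative combination can be sketched as follows. The kernel density estimate, the hand-set logistic weights, and the equal-weight average below are assumptions for illustration, not the trained ACTFAST models.

```python
import math

# Hybrid scoring sketch: a nonparametric generative score (kernel density
# estimate) averaged with a parametric discriminative score (logistic model).
# The data, bandwidth, and hand-set logistic weights are invented, not the
# project's trained models.

pos = [0.9, 1.1, 1.0]   # feature values for patients with the adverse outcome
neg = [0.1, 0.2, 0.0]   # feature values for patients without it

def kde(x, pts, h=0.3):
    """Gaussian kernel density estimate with bandwidth h."""
    norm = len(pts) * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in pts) / norm

def p_generative(x):
    """Posterior-style score from class-conditional densities."""
    a, b = kde(x, pos), kde(x, neg)
    return a / (a + b)

def p_discriminative(x, w=8.0, b=-4.0):
    """Logistic model with hand-set (illustrative) parameters."""
    return 1 / (1 + math.exp(-(w * x + b)))

def p_hybrid(x, alpha=0.5):
    """Convex combination of the generative and discriminative predictors."""
    return alpha * p_generative(x) + (1 - alpha) * p_discriminative(x)
```

The generative term adapts to the data's shape while the discriminative term stays cheap and smooth; mixing them is one simple way to trade off their strengths.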
After being trained and tested, the performance of the developed algorithms will be additionally validated prospectively (out-of-sample performance), using standard measures of accuracy, precision and robustness. We envision that the developed algorithms will facilitate patient-specific clinical decision support, utilizing an innovative approach of an anesthesiology control tower, which will be conceptually similar to an air traffic control tower for a busy airport. The main contributions of this project will include: (1) new machine-learning algorithms for forecasting perioperative adverse events from heterogeneous, multi-scale, and high-dimensional data streams; (2) a clinical decision support system that identifies prognostic factors and suggests interventions based on novel feature ranking algorithms; and (3) a transformative approach to the education of the anesthesiology team and the paradigm of perioperative care. Successfully advancing real-time analytic methods and modeling for individual surgical patients, using heterogeneous and sparse data, would have tremendous scientific and societal impact.