2006–2008
Hicks, Michael; Sazawal, Vibha
SoD-HCER: Evaluation of Complex Designs -- A Comparative Study @ University of Maryland College Park
A critical requirement for any science is the ability to measure. Without measurement, it is difficult to evaluate and improve. Not only is our ability to measure and evaluate software designs weak today, we also lack a thorough understanding of the strengths and weaknesses of existing software design evaluation methods. To learn more about the shortcomings of current evaluation methods, in this project the PI will study two software products from the systems domain that share the important design criteria of modifiability and performance: the Click modular router and the Jetty HTTP server and servlet container. For each of these two programs, there exist clear design goals and extensive change histories. Using change history data, the PI will create an approximate "master list" of modifiability and performance flaws in early versions of each program. For example, a feature request that required changes to many files could be flagged as a modifiability flaw. She will then apply several evaluation methods (e.g., quantitative software metrics, architecture-based evaluation, and semi-automatic methods such as design snippets) to early versions of each subject program and compare the output of each method to the approximate master list of design flaws. Project outcomes will include a benchmark and experimental protocol that can be used to critique new techniques or tools for evaluating software designs using change history logs, as well as an increased understanding of the deficiencies of existing techniques and tools for evaluating software designs.
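As a rough illustration of how such a master list might be assembled from change history data, the following Python sketch flags commits that touch many files as candidate modifiability flaws and commits whose messages mention performance as candidate performance flaws. The threshold, keyword list, and data layout are illustrative assumptions, not the project's actual protocol.

```python
from collections import namedtuple

Commit = namedtuple("Commit", ["commit_id", "message", "files"])

# Hypothetical threshold: a change scattered across this many files
# is flagged as a candidate modifiability flaw.
SCATTER_THRESHOLD = 5

# Illustrative keywords for spotting performance-motivated changes.
PERF_KEYWORDS = ("slow", "latency", "throughput", "optimize", "performance")

def flag_design_flaws(commits):
    """Scan a change history and build an approximate 'master list'
    of candidate modifiability and performance flaws."""
    master_list = []
    for c in commits:
        if len(c.files) >= SCATTER_THRESHOLD:
            master_list.append((c.commit_id, "modifiability", sorted(c.files)))
        if any(kw in c.message.lower() for kw in PERF_KEYWORDS):
            master_list.append((c.commit_id, "performance", sorted(c.files)))
    return master_list

if __name__ == "__main__":
    # Invented example commits, loosely in the style of a router codebase.
    history = [
        Commit("a1b2", "Add IPv6 support; touched routing, parsing, config",
               ["route.cc", "parse.cc", "config.cc", "table.cc", "main.cc"]),
        Commit("c3d4", "Fix slow path in packet scheduler", ["sched.cc"]),
    ]
    for flaw in flag_design_flaws(history):
        print(flaw)
```

A real protocol would of course have to separate feature requests from bug fixes and validate the flagged commits by hand; the point of the sketch is only that change histories make such flaws mechanically detectable.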
Broader Impacts: Software designs matter because software matters. To improve software designs, we must have adequate tools for evaluating candidate designs. This project will provide insight into new tools and techniques for software design evaluation. Programmers using these new tools and techniques will produce better software designs and thus better software. In addition, the benchmark produced by this experiment will prove valuable to software engineering educators: instructors will be able to design assignments and projects that involve the benchmark, exposing students to issues in software design evaluation and empirical software engineering research.
2009–2012
Davis, Larry; Sazawal, Vibha
ISE: Planning: Collaborative Research: Informal Discovery of Programming Concepts Via Reflective Programming @ University of Maryland College Park
This planning effort, a collaboration of teams at the University of Maryland, Cornell University, Carnegie Mellon University, and the Sciencenter of Ithaca, deals with the development and testing of a unique methodology for teaching youth computer programming. Through a mobile robot cleverly disguised as a small animal, participants learn to manipulate the system by physically moving it and by setting variables via electronic buttons, thereby learning programming and design. The eventual use of this system and methodology is in museum exhibits, so preliminary survey data will be gathered from venues that presently use less capable devices. Iterative testing will be done at the Sciencenter in its exhibits.
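As a toy illustration of the record-and-replay idea behind this kind of tangible, reflective programming, the sketch below models a robot that logs physical moves and button-set variables as program steps and can show them back to the learner. All class and method names are hypothetical; the abstract does not describe the exhibit's actual hardware or software.

```python
class TangibleRobot:
    """Toy model of a robot that records physical manipulations and
    button-set variables, then replays them as a program. Everything
    here is illustrative, not the exhibit's API."""

    def __init__(self):
        self.program = []    # recorded steps, in order
        self.variables = {}  # values set via electronic buttons

    def moved_by_hand(self, direction, steps):
        # A physical manipulation is captured as a program step.
        self.program.append(("move", direction, steps))

    def button_set(self, name, value):
        # A button press sets a named variable the program can use.
        self.variables[name] = value
        self.program.append(("set", name, value))

    def replay(self):
        # 'Reflection': the robot shows back the program its user built.
        for step in self.program:
            print(step)

robot = TangibleRobot()
robot.button_set("speed", 2)
robot.moved_by_hand("forward", 3)
robot.moved_by_hand("left", 1)
robot.replay()
```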
2009–2013
Hicks, Michael (co-PI); Foster, Jeffrey; Sazawal, Vibha
SHF: Small: User-Centered Software Analysis Tools @ University of Maryland College Park
The proposed work explores techniques for making static analysis tools more effective in the hands of users. This will lead to the development of new static analysis algorithms, techniques, and interfaces that provide the information users need to find, verify, and fix defects.
The core of the proposed work will be a framework for static analysis visualization and interaction, with several components. First, the framework will include checklists to help users triage defect reports, i.e., decide whether they are true or false positives. The aim is to develop ways to instrument static analyses to automatically generate checklists based on imprecision introduced during the analysis. Second, the framework will include lightweight query and search facilities to help users work with static analysis tool results. Users need effective ways to query the knowledge base generated by a tool when trying to understand an error report; current static error reports tend to provide too little or too much information. Third, the framework will include a generic visualization for program paths, a core part of many static analysis tools' defect reports. While some tools include simple path visualization, our framework will aim for far more effective interfaces by applying information visualization principles.
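As a hedged sketch of the first component, the following Python toy shows one way an instrumented analysis could log the imprecision it introduces (here, merging conflicting facts at a control-flow join) and turn that log into a triage checklist attached to each defect report. The class names, log format, and checklist wording are illustrative assumptions, not the framework's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """A defect report paired with an auto-generated triage checklist."""
    message: str
    checklist: list = field(default_factory=list)

class InstrumentedAnalysis:
    """Toy dataflow analysis that logs each source of imprecision it
    introduces, then turns the log into checklist questions."""

    def __init__(self):
        self.imprecision_log = []

    def join(self, fact_a, fact_b, location):
        # Merging facts at a control-flow join over-approximates:
        # record the loss so the user can be asked about it later.
        if fact_a != fact_b:
            self.imprecision_log.append(
                f"At {location}, the analysis merged '{fact_a}' and "
                f"'{fact_b}'. Is the reported path through here feasible?")
            return "unknown"
        return fact_a

    def report(self, message):
        # Attach the accumulated imprecision questions to the report.
        return DefectReport(message, list(self.imprecision_log))

# Hypothetical usage: one join loses precision, yielding one checklist item.
analysis = InstrumentedAnalysis()
analysis.join("ptr != NULL", "ptr == NULL", "foo.c:42")
r = analysis.report("Possible null dereference at foo.c:57")
print(r.message)
for item in r.checklist:
    print(" -", item)
```

The design choice the sketch reflects is that each checklist item points at a concrete analysis decision, so a user answering "no, that path is infeasible" has grounds to dismiss the report as a false positive.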