1991 — 1994 |
Myers, Brad |
Using Demonstration in User Interfaces @ Carnegie-Mellon University
This research investigates the utility of demonstrational techniques in user interfaces for computer programs. Demonstrational user interfaces provide concrete examples on which the user operates, rather than requiring the user to deal with abstractions such as variables and control structures. In the demonstrational approach, the user provides examples and the system infers how the examples should be generalized to create something more general purpose. The more successful approaches limit the inferences to a specific domain. Two applications are chosen in this research: one is text formatting, and the other is a visual shell for the UNIX operating system. In the second case, a rule-based inference system creates macros without requiring the user to write in a programming language. The user interacts with simple commands, and from the user's actions, together with its knowledge of file and operating-system details, the system determines how to construct more complicated commands. The two applications are to be released to as many as 3,000 subjects, providing an evaluation of the approach's potential advantages and issues. This research addresses issues relevant to broader programmatic goals, including making computer systems adapt to end users.
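The core generalization step can be illustrated with a toy sketch. Everything here is invented for illustration (the filenames, the restriction to extension changes, the function names); it is not the actual inference system, which used domain-specific rules and knowledge of the file system.

```python
import os

def infer_rename_rule(examples):
    """Given (before, after) filename pairs demonstrated by the user,
    infer a shared extension-rewrite rule, or None if no single rule fits."""
    rules = set()
    for before, after in examples:
        stem_b, ext_b = os.path.splitext(before)
        stem_a, ext_a = os.path.splitext(after)
        if stem_b != stem_a:  # only generalize pure extension changes
            return None
        rules.add((ext_b, ext_a))
    return rules.pop() if len(rules) == 1 else None

def apply_rule(rule, name):
    """Apply an inferred (old_ext, new_ext) rule to a new filename."""
    old_ext, new_ext = rule
    return name[:-len(old_ext)] + new_ext if name.endswith(old_ext) else name

# Two demonstrated renames generalize to "*.txt -> *.bak":
rule = infer_rename_rule([("report1.txt", "report1.bak"),
                          ("notes2.txt", "notes2.bak")])
```

A real demonstrational system would entertain many candidate generalizations and rank them with domain knowledge; this sketch keeps only the single-rule case.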
1994 — 1997 |
Myers, Brad |
Demonstrational Interfaces For Visualization and End-User Programming @ Carnegie-Mellon University
IRI-9319969 Myers This is the first year of a three-year continuing award to investigate architectures for script-based and object-based demonstrational programming interfaces and their use in visualization. The overall concept of demonstrational interfaces is to allow the user to operate on example values from which the system generalizes. Script-based interfaces, in contrast with object-based approaches, lend themselves to generalization in which a specific recorded script trace is extended to other cases. The object-based approach is investigated as it relates to data visualization, by generalizing from the properties of the data and the typical graphics used in the domain. The ultimate goal is to make the design of a custom interface as simple as sketching a picture on a napkin.
1999 — 2002 |
Myers, Brad Corbett, Albert |
A More Natural Programming Language For Children @ Carnegie-Mellon University
Most children are not programmers but would like to create software that is similar to their favorite commercial applications, such as games and educational applications. Today's programming environments make this too difficult. The goals of this project are to study how children (fifth grade and older) reason about programming concepts, and then to create a new general-purpose programming language and environment for children that takes advantage of these findings. All aspects of the new system will be thoroughly analyzed with usability techniques such as Cognitive Dimensions and user tests with children. The language will be general-purpose, in that it will not be limited to creating a particular kind of program. It will be extensible, modular, scalable, and well-structured, so students can easily write programs that range from one line all the way up to many thousands of lines. Unlike most current environments for children, the system will support the creation of complete and sophisticated programs. The PI will exploit modern technologies to simplify the programming process, including direct manipulation and demonstrational techniques, programming environment technologies like structure editors, and mechanisms for extending programs by harnessing and reusing existing components. There has not been a successful new programming language for children in many years. Since the PI's system will be based on empirical studies of how children think, it should have an excellent chance of allowing the majority of children (and, by extension, the general public) to create interesting and useful applications. http://www.cs.cmu.edu/~NatProg
1999 — 2002 |
Myers, Brad Corbett, Albert Stevens, Scott |
Dli-Phase 2: An Intelligent Authoring Tool For Non-Programmers Using the Informedia Video Library @ Carnegie-Mellon University
Abstract
IIS-9817527 Myers, Brad Carnegie Mellon University $150,000 - 12 mos.
Using the Informedia Digital Video Library to Author Multimedia Material
This is the first-year funding of a three-year continuing award. This project will create a comprehensive Intelligent Video Editor that will allow people without special training to author interesting compositions using digital video. In particular, the editor will support sophisticated interactive behaviors for the videos and for extra graphical drawings (called synthetic graphics) layered on top of the videos. For example, users might specify which objects in the video can be clicked on to choose the next video clip, or that an arrow should be drawn that shows the path that an object will follow, or that the video is part of a lesson and a viewer's answer to a question determines the next action. There will also be high-level facilities for searching and organizing videos, video editing, demonstrating behaviors, writing scripts in a more natural programming language, and testing and debugging the code. The tools created will be continuously tested with school children and adults to evaluate and refine the various features. The goal is to make it as easy to use the video material found in a digital library as it is to use textual material found in today's libraries.
2001 — 2004 |
Myers, Brad |
Making It Easier to Interact With Technology Through Handheld Personal Universal Controllers @ Carnegie-Mellon University
This is a standard award. In this project the PI will investigate how a hand-held computer exploiting wireless communications technologies can be used as a Personal Universal Controller (PUC) to control all kinds of home, office and factory equipment. When users point their PUC at a light switch, at a photocopier in an office, at a machine tool in a factory, at a VCR at home, at a piece of test equipment in the field, or at almost any other kind of device, the device will send to the hand-held a description of its control parameters. The PUC will use this information to create an appropriate control panel, taking into account the properties of the controls that are needed, the properties of the hand-held (the display type and input techniques available), and the properties of the user (what language is preferred, whether left or right handed, how big the buttons should be based on whether the user prefers using a finger or a stylus). The user can then control the device using the PUC. The device will not need to dedicate much processing power, hardware, or cost to the user interface, since it will only need to contain a description of its capabilities and storage for the current settings, along with hardware for wireless communication. PUC programs will use intelligent "model-based" techniques to create useful and appropriate interfaces that are customized for each user. The PI's preliminary research suggests that an interface on a hand-held can be significantly better than the interface supplied by the manufacturer, so the PUC should enable people to make more effective use of their equipment, as well as making it practical to add intelligence to a broader range of appliances. The PUC can also facilitate access for people with disabilities, since the interfaces will be customized to the individual's preferences and needs. 
But this research will have benefits beyond just remote control devices for appliances, in that it will help further the cause of separating the user interface from the application code, which has been a basic goal of user interface software research from the beginning.
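The generation step described above, in which a device sends a description of its control parameters and the hand-held chooses appropriate controls, can be sketched roughly as follows. The parameter schema, widget names, and selection rules here are all invented for illustration; the actual PUC system used a richer specification language and model-based generation.

```python
def choose_widget(param):
    """Map a (hypothetical) parameter description to a control-panel widget."""
    kind = param["type"]
    if kind == "boolean":
        return "toggle"
    if kind == "enumeration":
        # Few choices fit on screen as radio buttons; many need a drop-down.
        return "radio-group" if len(param["values"]) <= 4 else "drop-down"
    if kind == "integer":
        return "slider"
    return "text-field"

def generate_panel(device_description):
    """Build a (label, widget) list from a device's self-description."""
    return [(p["label"], choose_widget(p))
            for p in device_description["parameters"]]

# An invented self-description a VCR might send to the hand-held:
vcr = {"parameters": [
    {"label": "Power", "type": "boolean"},
    {"label": "Input", "type": "enumeration", "values": ["TV", "AV1", "AV2"]},
    {"label": "Volume", "type": "integer"},
]}
```

In the real system the same rules would also consult properties of the hand-held and of the user (display size, preferred language, finger versus stylus) when selecting and sizing controls.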
2002 — 2008 |
Stevens, Scott Myers, Brad Koedinger, Ken Corbett, Albert |
Itr: Collaborative Research: Putting a Face On Cognitive Tutors: Bringing Active Inquiry Into Active Problem Solving @ Carnegie-Mellon University
EIA-0205301 Albert Corbett Carnegie-Mellon University
ITR: Collaborative Research: Putting a Face on Cognitive Tutors: Bringing Active Inquiry into Active Problem Solving
Collaborative project with: 0205506 Michelene Chi University of Pittsburgh
This project builds on a growing body of research concerning effective learning and tutoring strategies. The project involves constructing and evaluating educational technology that emulates human tutors by integrating a state-of-the-art educational technology called Cognitive Tutors with an innovative interactive questioning environment called Synthetic Interviews, to produce an interactive learning environment that rivals the effectiveness of human tutors. Cognitive Tutors are built around a cognitive model of problem-solving knowledge and provide precisely the support students need to complete problems successfully. Used alone, Cognitive Tutors do not support the help-seeking and meta-cognitive skills that characterize active learners. By incorporating a novel interactive communication technology called Synthetic Interviews, an Active Learning Environment is offered that rivals the effectiveness of human tutors in supporting deep student learning. Synthetic Interviews allow learners to engage in active inquiry by providing the means for conversing in depth with an individual. Synthetic Interviews permit knowledge capture in a new form, providing utility similar to an expert system's but with a development effort approaching the simple videotaping of a conversation. The Active Learning Environment serves both as a research tool to examine computational and pedagogical challenges and as an educational environment in classrooms and homes. In particular, the domains of knowledge constructed around this learning environment are mathematics and biology courses.
The project promises to make important contributions to cognitive science, computer science, and educational practice, including the following: 1) the analysis of student questions during Synthetic Interviews will contribute to basic cognitive models of the functional relationship between declarative conceptual knowledge and procedural problem-solving knowledge; 2) the project will integrate cognitive models of student knowledge and tutorial dialogue structure and, more generally, will help define a design and engineering process for intelligent learning environments; 3) the research will inform the design of more effective computer-based learning environments; and 4) the research and the Active Learning Environment can support improved professional development for both pre-service and in-service teachers. The Active Learning Environments for mathematics and biology that are developed in this project promise to directly improve educational practice nationally. Current generation cognitive mathematics tutors are already in use in about 2% of middle schools and high schools around the country. The demand for effective mathematics and science education continues to grow. States are increasing mathematics graduation requirements and instituting assessments that govern student graduation and school evaluations. If Active Learning Environments are more effective than current generation Cognitive Tutors, they promise to rapidly enter widespread classroom use.
2003 — 2007 |
Myers, Brad |
Using Handhelds to Help People With Motor Impairments @ Carnegie-Mellon University
People with Muscular Dystrophy (MD) and certain other muscular and nervous system disorders such as Cerebral Palsy (CP) and ALS often lose their gross motor control while retaining fine motor control. The result is that they frequently have difficulty operating a mouse and keyboard. However, they typically retain the use of their fingers, and can control a pencil or stylus, and thus can use a handheld computer such as a Palm. Even people with motor impairments who can use a normal keyboard often tire easily, so having a variety of input techniques that they can switch among may allow the use of computers for longer periods. This project will develop techniques to enable people with a wide range of motor impairments to use a handheld computer as an interface to computers. The PI will invent new input techniques for entering text and controlling a mouse pointer that are optimized for use by people with motor impairments. He will measure and refine the user interfaces and input techniques through extensive formal and informal user testing that will highlight the conditions and impairments for which each input technique will be useful. He will determine which parameters best characterize a motor impairment with respect to a user's performance with a stylus and handheld user interface. He will develop novel software that runs on a handheld device and can be used by therapists and researchers to measure the performance parameters of individuals with motor impairments, and to automatically adjust parameters and selection of input techniques. 
Particular techniques to be investigated for handheld text input include adaptive word prediction and word completion designed for people with muscular difficulties (in particular, the distance the user's fingers must travel to select a word should be minimized), and new interaction techniques for specifying characters by using the edges of a handheld's screen (preliminary investigations suggest that the raised edges that are already part of handhelds, or an additional overlay guide, can help people with certain muscular disabilities such as tremors or spasms enter text more accurately). New techniques for entering mouse locations will include software filters.
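The word-prediction idea above, minimizing how far the user's fingers or stylus must travel, can be illustrated by ranking candidate completions so the likeliest word sits first in the list. The word list, frequencies, and function names below are made up for illustration; the project's adaptive predictor would learn frequencies per user.

```python
# Invented per-user frequency counts (a real system would learn these):
FREQ = {"the": 500, "there": 120, "therapy": 30, "thermal": 15}

def completions(prefix, freq=FREQ, k=3):
    """Return up to k completions of `prefix`, most frequent first,
    so the most likely selection requires the least stylus travel."""
    matches = [w for w in freq if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -freq[w])[:k]
```

Placing the best prediction adjacent to the text-entry area, rather than merely anywhere on screen, is what turns this ranking into reduced physical effort for users with limited motor control.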
Broader Impacts: As a consequence of this research, some people with muscular impairments who could not use a computer will be able to, and many of those who could use one will be able to do so more effectively and for longer periods of time. Furthermore, the elderly commonly suffer from motor-limiting conditions like arthritis, and a handheld interface may be an attractive non-aggravating option for computer access. The improved evaluation tools to be developed as part of this project will make it easier for therapists to determine the right interaction techniques for each individual, which will help to optimize each person's effectiveness. The PI plans to collaborate with the United Cerebral Palsy (UCP) Association, the Muscular Dystrophy Association (MDA), the University of Pittsburgh Medical Center (UPMC), and others, to ensure that the tools and techniques developed as part of this research will be deployed for real use by many people who will benefit. The knowledge gained will be disseminated through the usual research channels, as well as through the Rehabilitation Engineering Society of North America (RESNA) and other forums used by therapists, so as to impact as large an audience as possible.
2003 — 2007 |
Myers, Brad Pausch, Randy (co-PI) [⬀] |
Lowering the Barriers to Successful Programming @ Carnegie-Mellon University
Although computer programming has existed in its modern form for half a century, it remains accessible only to a small fraction of the population. While programming is an inherently difficult activity, there are currently many barriers that prevent large portions of the population from learning to program a computer. Some of these barriers can be overcome through new facilities in programming environments. Current advanced programming environments provide a number of tools to help programmers construct programs: direct manipulation facilities to set up user interface widgets, code editors with coloring and indenting, and various pop-up menus that help construct code. However, there is little support to help users write the code that handles the dynamic responses to events and high-level behaviors. Furthermore, the tools for debugging code when something goes wrong are not much different than what has been available for 60 years: print statements, break-points and inspecting the values of variables. This is in spite of the fact that research shows that debugging and specifying the dynamic behavior of code are some of the most difficult aspects of programming.
In this project, Alice, a programming environment that makes it very easy to program interactive 3D graphics, will be augmented to provide significantly better tools to help both novice and expert programmers develop and debug their programs. A number of novel ideas that have not yet appeared in any system will be implemented, along with the best ideas from prior systems and research (which have never been combined into one system). Thorough formative and summative user testing of the ideas will provide an understanding of which of the features are most useful; this will guide further refinement of Alice so it does the best possible job of guiding programmers to create correctly working solutions.
In addition to addressing the mechanical issues of programming, Alice provides an opportunity to address the fact that relatively few women learn to program, which can be called a sociological barrier. Many girls turn away from science and technology during their middle school years, usually never to return. Middle school girls represent a particularly difficult challenge, requiring a highly motivating system with tremendous mechanical support. The PIs' approach to lowering the social barriers is to provide programming as a means to other motivating ends, especially storytelling.
The intellectual merit of the proposed research will be to discover new techniques to aid in the construction and debugging of programs, and to measure and validate the impact and effectiveness of these techniques with both novice and expert programmers. The techniques envisioned to help with code creation include: graphical and textual editors for storyboards that will help transition story ideas into code; demonstrational and direct manipulation techniques for specifying dynamic behaviors; check-pointing and undo facilities so programmers can more easily explore multiple solutions and backtrack to a known state when necessary; smart copy-and-paste that will help in transforming and reusing code; support for collaboration and sharing of code; tools that will suggest likely causes and fixes for run-time errors; and embedded special-purpose editors to help construct Boolean expressions and scientific formulas. Techniques to support understanding, testing and debugging include: providing easy inspection of data so programmers can tell what is happening; pausing or break-pointing on any program event including objects that are being changed, created, or deleted; changing values at run time to explore the effects; providing a time-line visualization to show important events and enabling programs to run forwards and backwards from any point; support for asking "why" questions that will tie graphics and program events to the code that caused them; support for asking "why not" questions, which will use heuristics to propose reasons why some event did not happen; and search capabilities so programmers can find variables with particular values, or objects with certain properties.
Broader Impacts: The broader impacts resulting from this research will be to help make programming more accessible to novice programmers, and more effective for both novice and expert programmers. One important target group will be middle school boys and girls who are not necessarily motivated to learn programming. The PIs believe they can make programming accessible and compelling to this audience, while at the same time making it easier for experienced programmers. These benefits will be evaluated using thorough user tests at all points of the design and implementation.
2003 — 2009 |
Shaw, Mary Myers, Brad |
Itr: Collaborative Research: Dependable End-User Software @ Carnegie-Mellon University
ABSTRACT 0324770 Brad A. Myers Carnegie-Mellon U
There has been considerable work in empowering end users to be able to write their own programs, and as a result, users are indeed doing so. In fact, the number of end-user programmers in the United States is expected to reach 55 million by 2005, as compared to only 2.75 million professional programmers. The programming systems used by these end users include spreadsheet systems, web authoring tools, and graphical languages for demonstrating the desired behavior of educational simulations. Using such systems, end users create software, in forms such as educational simulations, spreadsheets, and dynamic e-business web applications. Unfortunately, however, errors are pervasive in end-user software, and the resulting impact is sometimes enormous. When the software end users create is not dependable, there can be serious consequences for the people whose retirement funds, credit histories, e-business revenues, and even health and safety rely on decisions made based on that software. To address this problem, we propose a fundamental paradigm shift to a new way of thinking about the way end users create software. Our intent is to address the following research question: Is it possible to bring the benefits of rigorous software engineering methodologies to end users? We do not propose to transform end users into engineers. Rather, our plan is to enable the systems end users employ to create software to collaborate with those users, in a software development paradigm that combines traditionally separate functions - blending specification, design, implementation, component integration, debugging, testing, and maintenance into tightly integrated, highly interactive environments.
These environments will employ new, incremental feedback devices supported by analysis and inferential reasoning to help users reason about the dependability of their software as they work with it, in a manner that respects the users' problem-solving directions to an extent unprecedented in existing software development environments.
2005 — 2009 |
Myers, Brad |
Automatically Generating Consistent User Interfaces For Multiple Appliances @ Carnegie-Mellon University
During the last four years, the PI and his students have been investigating the use of handheld devices to control all kinds of home, office and factory equipment (such as stereos, VCRs, telephones, copiers, FAX machines, and clocks), as well as the non-driving functions of automobiles (such as the air conditioning and navigation systems). They have developed a high-level XML-based language to specify an appliance's features from which a high quality user interface can be generated on a PocketPC personal digital assistant (PDA) or on a mobile phone. However, two important problems remain: to automatically generate consistent interfaces for the user across different appliances; and to automatically generate a combined user interface for multiple appliances that operate as a logical unit. No existing automatic generation system for user interfaces has addressed these problems. Solving the consistency problem would, for example, mean that people could set the time on the VCR and the time to start recording in the same way that they set their alarm clock. By providing the interface on the user's handheld, the same consistent way to set the time would be used in every place that time-setting is required on all appliances. Solving the combined interface problem would, for example, mean that for an entire entertainment system, the user would just need to press "play DVD" and the system would automatically turn on the TV, switch the TV to "input3," turn on the stereo, switch the stereo to "aux" input, turn off the cable box, turn on the DVD player, and finally cause the DVD to start playing. These are the scenarios the PI plans to tackle in this project. He will develop a system that uses the interdependencies among all of a user's connected appliances to automatically create a combined user interface. 
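The combined-interface scenario above can be sketched as a simple expansion of one high-level command into per-appliance steps, driven by a declared connection table. The appliance names, the table format, and the function below are all hypothetical; the actual system derived such sequences from separately authored appliance specifications and their interdependencies.

```python
# Invented connection table: which input of which appliance each source feeds.
CONNECTIONS = {"DVD": [("TV", "input3"), ("Stereo", "aux")]}

def play(source, connections=CONNECTIONS):
    """Expand 'play <source>' into an ordered list of appliance commands,
    switching every connected appliance to the right input first."""
    steps = []
    for appliance, input_name in connections[source]:
        steps.append(f"turn on {appliance}")
        steps.append(f"switch {appliance} to {input_name}")
    steps.append(f"turn on {source}")
    steps.append(f"start playing {source}")
    return steps
```

The point of the research is that no one authors this macro by hand: the sequence falls out of the individually specified appliances plus a description of how they are wired together.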
The intellectual merit of the proposed research lies in determining how to generate interfaces that are consistent with one another and how to combine interfaces for multiple appliances. For consistency, this includes the sub-problems of matching which parts of different appliances should be made consistent with each other, and then how consistency can be provided when the appliances may have more or fewer features for related functions. The PI will have to develop fundamental new knowledge about what aspects of consistency are most important to preserve across appliances, and how to embody that knowledge into rules that the handheld can use to automatically generate consistent user interfaces. For combining multiple appliances, the intellectual merit will include techniques for describing the interconnection among appliances and for combining pieces of multiple appliances' user interfaces together. Extensive user studies will inform the designs and verify the results.
Broader Impacts: No one has previously tried to automatically generate consistent interfaces from specifications for different appliances or computer applications that share some or many functions. The results will be useful for all researchers and developers who are concerned with consistency across multiple user interfaces for appliances or computers. Similarly, no one has automatically created a combined user interface using models defined separately for separate appliances. The results of this research will make complex appliances much easier to use for the general public, so that they are able to make better use of their devices and also to transfer their knowledge to the operation of new, potentially more complicated, appliances.
2008 — 2012 |
Myers, Brad |
Cpa-Sel: Better Tools For Software Understanding @ Carnegie-Mellon University
TITLE: Better Tools for Software Understanding PI: Brad A. Myers, Carnegie Mellon University
Understanding software is a prerequisite to taking any action to change it, and doing so remains expensive and error-prone. Research with programmers in the field has identified significant barriers to understanding. Lab and field studies of usability barriers in understanding and using APIs will result in models of how developers understand the design of the objects. These results will lead to new software tools for API exploration and understanding. Work on enabling people to better understand and fix bugs through new visualizations and interaction techniques will allow them to ask "Why" and "Why Not" questions about their code, with the answer visualizing the responsible code and dataflow. New tools will support understanding code written by others during reverse engineering activities, focusing on how data and control can flow through large and complex programs. Static analysis techniques and a new "WhatTree" visualization will allow programmers to investigate the update paths of their programs, while supporting the programmers in collecting and keeping track of facts and hypotheses about how the program operates. The results will improve programmer success and thus their overall productivity.
2008 — 2011 |
Myers, Brad John, Bonnie (co-PI) [⬀] Zimmerman, John (co-PI) [⬀] |
Pilot: Exploratory Programming For Interactive Behaviors: Unleashing Interaction Designers' Creativity @ Carnegie-Mellon University
This project focuses on the creation of novel user interface building tools that help designers create interactive behaviors. The designers using these tools are not professional programmers, but have training in Interaction Design, Graphic Design, Industrial Design, or an equivalent. Focusing on interactive behaviors means focusing on what an application does as opposed to how it looks. Today, interactive behaviors are often programmed by designers using scripting tools, or else designers collaborate with developers who implement the designs in conventional languages. Exploring interactive behaviors today requires programming, and the techniques available to designers are too difficult to use, and do not adequately support the designers' fundamental need to explore. The intellectual merit of the project is an understanding of how designers think about and create interactive behaviors, and the invention of new methods, models and representations that allow designers to more naturally and creatively design behaviors using computers. A novel programming environment that explicitly supports creative exploration of alternative versions and fosters collaboration is the outcome. The broader impacts of the tool will be to enable designers to explore more interactive behaviors and be more creative, which will ultimately improve the user interfaces that they create and that we use every day.
2011 — 2014 |
Myers, Brad |
Hcc: Small: Better Tools For Authoring Interactive Behaviors @ Carnegie-Mellon University
Web pages and most other things created for computers are interactive, in that they respond to what people do with them. They have animations, buttons that cause information to change, and game characters that move around under a person's control. Creating these "interactive behaviors" has usually required writing programs, usually in a conventional programming language such as C++, JavaScript, or Adobe's Flash. However, this is a barrier to the vast majority of people who do not know how to program. In particular, there is a large class of people, often called "interaction designers", who are trained in how to make more aesthetic and usable interactive behaviors, but who are not professional programmers, and therefore cannot create these parts by themselves. Research shows that they do identify programming as the main barrier to creating and improving interactive behaviors. One reason that it is important that interaction designers be able to create the behaviors themselves is because by quickly creating, trying out and modifying the behaviors, they are able to creatively explore and develop new and better designs.
The ultimate goal of this research is to provide a new tool that enables interaction designers and other non-professional programmers to create systems with interactive behaviors in a more natural way. To achieve this, investigators will first study how designers and other people think about interactive behaviors. This will provide insight about how such behaviors can be expressed more "naturally", which means how a person can instruct a computer in a way that is close to the way the person is thinking about the desired result. Preliminary studies show that designers do not think about behaviors in the same way as professional programmers. Next, the investigators will use this knowledge about the natural expressions to create a new authoring tool which will make it much easier for designers to create their own interactive behaviors. The initial design for the tool uses techniques that are familiar to designers, such as the drawing model of programs such as Adobe Photoshop or Microsoft PowerPoint, the computation style of spreadsheets such as Microsoft Excel, and the event-based style (such as: "when a bullet intersects a spaceship, then the spaceship should start the blowing-up animation"), which has been found to be a natural way to express these behaviors. The result will be new knowledge and tools that will make programming more accessible to more people, and thus broaden the range of people who can program, while specifically enabling interaction designers to create their own behaviors.
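The event-based style cited above ("when a bullet intersects a spaceship, then the spaceship should start the blowing-up animation") can be sketched minimally as registering actions against named events. The event names, actions, and helper functions here are invented for illustration and are not part of the actual tool.

```python
# A registry of (event, action) behaviors authored in "when X, do Y" style.
handlers = []

def when(event, action):
    """Register an action to run whenever the named event occurs."""
    handlers.append((event, action))

def fire(event, log):
    """Deliver an event, running every matching registered action
    and recording each result in `log`."""
    for ev, action in handlers:
        if ev == event:
            log.append(action())

# Two behaviors a designer might author, expressed as events and responses:
when("bullet-hits-spaceship", lambda: "start blowing-up animation")
when("button-clicked", lambda: "show next page")

log = []
fire("bullet-hits-spaceship", log)
```

The research question is not this mechanism itself, which is standard event dispatch, but whether expressing behaviors this way matches how designers naturally think and talk about them.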
|
0.915 |
2013 — 2017 |
Myers, Brad |
N/A |
Hcc: Large: Collaborative Research: Variations to Support Exploratory Programming @ Carnegie-Mellon University
In any design or learning activity, exploration is a key component. Significant research and conventional wisdom show that the best way to achieve a high-quality design is to explore multiple variations and iteratively evaluate them. When novices learn a new skill or system, they must explore and practice the available options. Similarly, when experts try to understand and improve an existing design, they must explore different approaches to modifying its behavior. Unfortunately, exploration is risky, error-prone, and cumbersome with today's tools. For instance, when users decide their current design is not effective, the only mechanisms available for selectively backtracking out of changes are linear undo and version control, which make it difficult to isolate backtracking to specific edits; otherwise users must manually remove the undesired edits, which is slow and fallible. Further, today's tools do not support comparing two variants of a design or combining elements from multiple variants. Research shows that these manual processes inhibit exploration, making users and designs less effective.
To address these problems, PIs from four partner institutions have come together to undertake a research program that is both broad and deep, focusing on the creation and management of variations during a system's implementation and evolution. The goal is to discover new theories, algorithms, visualizations, and tools that support variations in code. The team will evaluate all of their approaches through lab and field studies, and they will investigate how users can be educated in more effective ways to work with variations. Based on a choice calculus for representing variations in software, they will develop a theory for formally defining and reasoning about variations. They will leverage theories of human behavior such as Minimalist Learning, Attention Investment, and Information Foraging to develop a theory of Variation Foraging. They will develop an infrastructure including multiple levels of transcripts of users' editing operations that will support a novel form of selective undo and enable users to investigate their existing variants, return to any previous variant, and mix and match elements from multiple variants. They will develop algorithms to record interactions with variants so these recordings can be explored and reused to explore and test new variants; the recordings will be augmented with automatically created data to help users understand behaviors they have not explicitly explored. Using this infrastructure the PIs will invent visualizations, search facilities, and interaction techniques that provide effective ways for users to find, understand, explore, reuse, and create variants, and to ask "why" questions to understand the differences among variations of a system. For novices, an "Idea Garden" will help them explore new strategies for identifying which variations can help solve a problem and how to implement them.
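The contrast between linear undo and the selective undo described above can be sketched over a transcript of editing operations. This is a toy Python sketch under invented names and a deliberately simplified edit model (independent key/value edits with no dependencies between them), not the project's actual infrastructure:

```python
# Sketch of selective undo: edits are recorded in a transcript; undoing
# one edit removes it from the transcript and replays the rest, instead
# of rolling back linearly to a point in time. (Real systems must also
# resolve dependencies between edits; this toy model assumes none.)

def apply_edit(state, edit):
    op, key, value = edit
    new_state = dict(state)
    if op == "set":
        new_state[key] = value
    elif op == "delete":
        new_state.pop(key, None)
    return new_state

def replay(transcript, initial=None):
    state = dict(initial or {})
    for edit in transcript:
        state = apply_edit(state, edit)
    return state

transcript = [
    ("set", "color", "red"),    # edit 0
    ("set", "width", 10),       # edit 1
    ("set", "color", "blue"),   # edit 2
]

# Linear undo could only peel off edit 2; selective undo removes the
# middle edit (1) while keeping the later edit intact.
without_width = replay([e for i, e in enumerate(transcript) if i != 1])
print(without_width)  # → {'color': 'blue'}
```

Replaying a filtered transcript is one simple realization of the "multiple levels of transcripts" idea; the research problem is making this robust when edits depend on each other.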
Broader Impacts: This research will enhance infrastructure for research and education by producing an integrated, open source web development environment for use by researchers and the world. The work will therefore benefit society by empowering the tens of millions of end-user programmers to creatively build content and applications for the web. The PIs will advance discovery while promoting learning by integrating their research into undergraduate courses on creativity and software engineering, and by supporting summer camps for at least 300 high school students per year. Project outcomes will be disseminated to researchers through publications and presentations, to computing educators through the above-mentioned camps and the National Girls Collaborative Project, and through public deployment. The PIs expect high interest because the work will be based on JavaScript, which is today's most popular programming language and for which there is a high demand for better tools. The research will address underrepresentation via its focus on investigating how to support both male and female end-user programmers, by involving high-school members of underrepresented groups, and by engaging many of the PIs' female students.
|
0.915 |
2014 — 2017 |
Myers, Brad |
N/A |
Twc: Small: Empirical Evaluation of the Usability and Security Implications of Application Programming Interface Design @ Carnegie-Mellon University
The objective of this project is to gather empirical evidence on the tradeoffs between security and usability in programming language and library design. Although it is well known that poorly-designed interfaces can lead to increased defect rates and software vulnerabilities, there is currently little specific guidance to designers on what precise language and library features make programmers more or less likely to write vulnerable code. Furthermore, little of the existing guidance is empirically based. The project will develop empirically-based guidance on two issues. First, the ISO/IEC standardization working group for the C programming language is currently evaluating multiple proposals for adding concurrency to the language, and this project will produce data to inform their decision-making process. Second, by evaluating the impact of the use of mutability, the project will provide data that may influence how future programming languages and libraries are designed.
The project involves three phases. The first is an analysis of flaws in code that uses the draft versions of the C concurrency APIs under consideration, as well as of comparable databases of concurrency-related flaws in Java. In the second and third phases, programmers with between 2 and 5 years of experience will be asked to complete tasks using competing interface designs. The first set of experiments will evaluate competing C and C++ parallel language extensions to determine which language and library features are more likely to result in secure code. Specifically, the investigators will measure the programmers' ability to produce concurrent code free from security-related defects, such as "data races" and "time-of-check-to-time-of-use" errors, using the different libraries. The investigators will then build upon this work to evaluate tradeoffs between security and usability when using immutability to reduce the likelihood of vulnerabilities in concurrent code. Through these two experiments, the project will advance the science of cybersecurity by developing a methodology for empirically evaluating how library and language design affect the frequency with which trained professional programmers inadvertently introduce security vulnerabilities during implementation.
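The "data race" defect named above has a simple shape: two threads each perform a non-atomic read-modify-write, and an unlucky interleaving loses one update. The sketch below spells out that interleaving deterministically in Python as straight-line steps; it is an illustration of the defect class, not the C/C++ APIs under study:

```python
# Deterministic illustration of a lost-update data race: two "threads"
# A and B each perform the non-atomic sequence "read counter, add one,
# write back". With the interleaving below, A's increment is lost.

counter = 0

read_a = counter          # thread A reads 0
read_b = counter          # thread B reads 0 (before A writes back)
counter = read_a + 1      # thread A writes 1
counter = read_b + 1      # thread B overwrites with 1: A's update is lost

print(counter)  # → 1, not the expected 2

# A time-of-check-to-time-of-use (TOCTOU) error has the same shape: the
# state that was checked (e.g. a file's permissions) can change between
# the check and the use, so the check must be made atomic with the use.
```

Language and library features that make such read-modify-write sequences atomic by construction are exactly the kind of design choice the proposed experiments would compare.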
|
0.915 |
2018 — 2021 |
Kittur, Aniket [⬀] Myers, Brad |
N/A |
Shf: Small: Knowledge Acceleration For Programming @ Carnegie-Mellon University
Programming is a critical skill that is vital for the future of work and for a globally competitive workforce. While there are many resources available for programmers to learn the details of writing code, programmers increasingly spend their time not writing new code but choosing among and adapting the growing amount of existing code and libraries available to them. One study reported that the most frequent programmer activity is searching for and trying to understand unfamiliar code, and that more than 30% of all searches are for determining which APIs to use and how to use them. However, after each sense-making episode in which a programmer gains knowledge, that work is essentially lost, with no one else benefiting. Although there are many tools to help programmers find answers, there are very few tools to help programmers make use of the knowledge gained while performing the task, or to share that knowledge with others. Capturing the work that programmers do in foraging, navigating, and organizing code-relevant information could significantly benefit later programmers interested in similar information. Linking this captured knowledge to the resulting code can provide the design rationale for why an API is used a particular way, which is one of the most frequently missing pieces of documentation. Furthermore, by making it easier for programmers to build on one another's knowledge, the proposed work has the potential to reduce common security vulnerabilities that arise when programmers do not learn from others' mistakes, leading to more secure and correct code.
In this research the PI aims to help the initial programmer collect, navigate, and organize knowledge to meet their goals, while capturing this knowledge and making it useful for later programmers with similar needs. The project studies the sense-making processes that programmers engage in while searching for and organizing knowledge for themselves, as well as which parts of that work are useful to others. It investigates how programmers spend their time searching for and making sense of complex information in order to accomplish their goals, including choosing among different APIs or methods within an API, adapting code snippets found on the Internet to meet their needs, or trying to learn unfamiliar code to fix an error or add a new feature. When performing tasks like these, programmers are continually forming hypotheses, posing questions, and discovering answers, about both low-level details and meta-level questions such as the design rationale for why decisions were made. These studies will inform the design, development, and evaluation of tools that support both the initial and later programmers. This research has the potential to significantly accelerate the speed at which programmers can create correct code by helping them gain relevant knowledge faster.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2018 — 2021 |
Mitchell, Tom (co-PI) [⬀] Myers, Brad |
N/A |
Chs: Small: Multimodal Conversational Assistant That Learns From Demonstrations @ Carnegie-Mellon University
Intelligent assistants such as Apple's Siri, Amazon's Alexa, and Microsoft's Cortana are rapidly gaining popularity by providing a conversational natural language interface for users to access various online services and digital content. They allow computing tasks to be performed in contexts where users cannot touch their phones (such as while driving), and on wearable and Internet of Things (IoT) devices (such as Google Home). However, such conversational interfaces are limited in their ability to handle the "long tail" of tasks and suffer from a lack of customizability. This research will explore a new multi-modal, interactive, programming-by-demonstration (PBD) approach that enables end users to add new capabilities to an intelligent assistant by programming automation scripts for tasks in any existing third-party Android mobile app, using a combination of demonstrations and verbal instructions. The system will leverage state-of-the-art machine learning and natural language processing techniques to comprehend the user's verbal instructions that supply information missing from the demonstration, such as implicit conditions, user intent, and personal preferences. The user's demonstration on the graphical user interface will be used for grounding the conversation and reinforcing the natural language understanding model. The system will point the way to letting the general public use their smartphones, IoT devices, and intelligent assistants more effectively, increasing the adoption, efficiency, and correctness of uses of these technologies. The integration of intelligent assistants with PBD will have broad impact by exposing people to programming concepts in an easy-to-learn way, thereby increasing computational thinking.
This project will result in several innovations beyond the current state of the art through advances in programming by demonstration (PBD) and intelligent assistants, and especially in their integration. The work will explore leveraging verbal instructions as an additional modality to address long-standing challenges in PBD research including generalizing the data descriptions and adding control structures. How to coordinate the two modalities to help the intelligent assistant learn new tasks effectively and efficiently from users will be investigated, and how users utilize the two modalities in multi-modal PBD systems for programming tasks in different situations will also be studied. New ways to leverage the displayed graphical user interfaces (GUI) of apps to enhance the speech recognition and language understanding by using the strings and other context of the GUI on the smartphone will be developed. The ability of the conversational assistant to participate in this generalization process will be enhanced, with a focus on having the system ask appropriate and helpful questions so the task automation will fit the user's needs and intentions. New approaches to representing scripts created by PBD systems that users can read, understand and edit will be explored, as will increasing trust and usefulness of the scripts and supporting error handling, debugging and maintenance. The new technology will also be able to extract data from and enter data into apps, and to learn, through demonstration and verbal instruction, how to transform the data into appropriate formats. Finally, how to support sharing of scripts created by PBD systems while ensuring the appropriate levels of privacy and security will also be investigated.
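The core PBD step discussed above, generalizing a concrete demonstration into a reusable script, can be sketched as replacing demonstrated literal values with parameter slots named by a verbal instruction. Everything in this Python sketch (the trace format, the `generalize` helper, the coffee-ordering example) is invented for illustration and is not the project's actual system:

```python
# Toy sketch of programming-by-demonstration generalization: a recorded
# demonstration contains literal values; a verbal instruction tells the
# system which literals should become parameters of the learned script.

demonstration = [
    ("open_app", "CoffeeShop"),      # hypothetical app and UI actions
    ("tap", "Order"),
    ("tap", "Iced Cappuccino"),      # the literal the user wants generalized
]

def generalize(trace, parameterize):
    # Replace each literal named in `parameterize` with a parameter slot,
    # e.g. "Iced Cappuccino" -> "<drink>" for "order any drink".
    script = []
    for action, arg in trace:
        if arg in parameterize:
            script.append((action, f"<{parameterize[arg]}>"))
        else:
            script.append((action, arg))
    return script

script = generalize(demonstration, {"Iced Cappuccino": "drink"})
print(script)
# → [('open_app', 'CoffeeShop'), ('tap', 'Order'), ('tap', '<drink>')]
```

Deciding *which* literals to generalize, and into what, is exactly the "data description" problem the abstract says verbal instructions are meant to resolve; here that decision is simply handed in as a dictionary.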
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2020 — 2023 |
Myers, Brad Vasilescu, Bogdan |
N/A |
Shf: Small: Personalizing Api Documentation @ Carnegie-Mellon University
Application programming interfaces (APIs), including libraries, frameworks, toolkits, software development kits, and web services, are used throughout most programmers' code. Since programming is a human activity, the usability of APIs has a significant impact on the effectiveness of the programmer and the resulting code: poor API usability has been shown to result in bugs and security holes in code, as well as to reduce the programmer's productivity. Further, today's programmers must constantly learn new APIs as they switch projects or start using new packages or web services. A longstanding, but often overlooked, complaint about the usability of APIs concerns their documentation. To be effective, API documentation must inform programmers, who have varying levels of expertise, how to correctly and effectively use the API. This project will create and empirically evaluate a system that automatically estimates the knowledge needed to use an API, along with the needs and learning style of the user, in order to create personalized documentation focused on what the user needs to know. The project has the potential to significantly improve API usability and API learning, which could improve programmer effectiveness and productivity and reduce bugs and security flaws, which would have a significant impact on all computerized systems.
This project involves fundamental research to identify and represent the knowledge that a programmer has, and programmers' learning styles, as they relate to APIs, based on user-centered studies and computational techniques including mining software repositories and natural language processing. From these sources, the system will identify how to create personalized documentation that presents the right information in an appropriate format, without requiring the documentation writer to do much more work. The research will also identify new ways to support process-oriented learning and tinkering so that both are more effective. The research includes validating all of these for effectiveness through appropriate user studies using real and large APIs, in collaboration with the research team's industrial partners. The research also aims to support diverse learning styles, which are often the styles favored by underrepresented populations and which tend to be unsupported by today's software.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |