2007 — 2012 | Dyer, Charles; Zhu, Xiaojin
RI: Text-to-Picture Synthesis @ University of Wisconsin-Madison
PIs: Xiaojin (Jerry) Zhu and Charles Dyer | Institution: University of Wisconsin-Madison
Abstract
One challenge in artificial intelligence is to enable natural interactions between people and computers via multiple modalities. It is often desirable to convert information between modalities. One example is the conversion between text and speech using speech synthesis and speech recognition. However, such conversion is rare between other modalities. In particular, relatively little research has considered the transformation from general text to pictorial representations. This project will develop general-purpose Text-to-Picture synthesis algorithms that automatically generate pictures from natural language sentences so that the picture conveys the main meaning of the text. Unlike prior systems that require hand-crafted narrative descriptions of a scene, algorithms will generate static or animated pictures that represent important objects, spatial relations, and actions for general text. Key components include extracting important information from text, generating corresponding images for each piece of information, composing the images into a coherent picture, and evaluating the result. The proposed approach uses statistical machine learning and draws ideas from automatic machine translation, text summarization, text-to-speech synthesis, computer vision, and graphics. This research will produce computational methods as well as working systems.
Text-to-picture synthesis is likely to have a number of important broad impacts. First, it has the potential for improving literacy across a range of groups including children who need additional support in learning to read, and adults who are learning a second language. Second, it may be used as an assistive communication tool for people with disabilities such as dyslexia and brain damage, and as a universal language when communication is needed simultaneously to many people who speak different languages. Third, it can be a summarization tool for rapidly browsing long text documents. This research will foster collaboration between researchers in computer science and other disciplines, including psychology and education. Results of the project will be disseminated through technical publications, public web pages and software, seminars and talks, and classroom education.
URL: http://www.cs.wisc.edu/~jerryzhu/ttp/
2009 — 2014 | Zhu, Xiaojin
RI: Small: Semi-Supervised Learning for Non-Experts @ University of Wisconsin-Madison
This project develops semi-supervised machine learning algorithms that are practical and, at the same time, guided by rigorous theory. In particular, the project is developing learning theory that quantifies when and to what extent the combination of labeled and unlabeled data is provably beneficial. Based on the theory, novel algorithms are being developed to address issues that currently hinder the wide adoption of semi-supervised learning. The new algorithms will be able to guarantee that using unlabeled data is at least no worse, and often better, than supervised learning. The new algorithms will also be able to learn from unlimited amounts of supervised and unsupervised data as they arrive in real time, something humans can do but computers so far cannot.
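The core semi-supervised idea can be illustrated with a toy sketch (not the project's algorithms, which come with theoretical safety guarantees): a self-training loop in which a simple classifier repeatedly labels its own most confident unlabeled point. The nearest-centroid base learner and the margin-based confidence rule are illustrative choices, not anything specified by the grant.

```python
import numpy as np

def self_train(X, y):
    """Greedy self-training with a nearest-centroid base classifier.
    y uses -1 to mark unlabeled points; returns a fully labeled copy."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y).copy()
    while (y == -1).any():
        classes = np.unique(y[y != -1])
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        unlabeled = np.flatnonzero(y == -1)
        # Distance from each unlabeled point to each class centroid.
        d = np.linalg.norm(X[unlabeled, None, :] - centroids[None, :, :], axis=2)
        # Confidence = margin between the two nearest centroids.
        srt = np.sort(d, axis=1)
        margin = srt[:, 1] - srt[:, 0]
        pick = np.argmax(margin)                  # most confident point
        y[unlabeled[pick]] = classes[np.argmin(d[pick])]
    return y
```

On two well-separated clusters with one labeled seed each, the loop propagates the seed labels outward; the project's contribution is precisely the theory for when such use of unlabeled data is provably no worse than ignoring it.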
This project has a number of broader impacts: (1) Open-source software will be an enabling tool for new discoveries in science and technology, by making machine learning possible or better in situations where labeled data is scarce. Since the software specifically targets non-machine-learning experts, the impact is expected to span the whole spectrum of science and technology that utilizes machine learning. (2) It advances our understanding of the learning process via new machine learning theory, which can be applied to both computers and humans. (3) The proposal contains projects ideally suited to engage students in computer science education and research.
2010 — 2017 | Zhu, Xiaojin
CAREER: Using Machine Learning to Understand and Enhance Human Learning Capacity @ University of Wisconsin-Madison
Understanding and enhancing human learning are important challenges in the 21st century. Existing human category learning models cannot quantify important capacities such as people's (in)ability to generalize from training to test, to learn from imperfect data, or to learn by actively asking questions.
This research project studies human learning using machine learning. It first develops machine learning theory and algorithms to quantify these human learning capacities: it establishes learning-theoretic error bounds on human generalization performance; it models human learning from an imperfect teacher with non-parametric Bayesian methods; it models humans' ability to ask informative questions with active learning theory. The project then studies computational approaches to enhance human learning: it develops "machine teaching" algorithms for when the computer knows the target concept and selects the optimal training examples to teach a human learner; it develops "human-machine co-learning" algorithms for when the computer does not know the target concept, but instead learns alongside the human and suggests better learning strategies. Each topic is verified by human experiments.
The project advances machine learning with new learning theory and algorithms on tasks where humans excel. It advances cognitive psychology with new models of human learning. It has broader impacts in understanding human intelligence, and in benefiting students with new educational tools. This research project is integrated with an educational plan that incorporates undergraduate and graduate teaching and mentoring, developing a new course and a book on machine and human learning, organizing seminars, tutorials and workshops, and sharing all results on a website.
2011 — 2014 | Dyer, Charles; Zhu, Xiaojin
III: EAGER: Discovering Spontaneous Social Events @ University of Wisconsin-Madison
Real-world social events provide a convenient and intuitive way to organize social media content for individuals. Current approaches to event detection from social media assume that the events to be monitored (and their social media signatures) are known a priori, and they focus largely on text data, failing to take advantage of other forms of media such as images.
Against this background, this project explores a novel approach to discovering spontaneous, a priori unspecified social events through joint Bayesian nonparametric modeling of multi-modal data (including text and images), and using the events thus discovered to foster new social links. The resulting tools for event discovery will be tested in an application involving discovery of wild animal disease outbreaks from Twitter text messages and images posted by individuals.
The project brings together an interdisciplinary team of researchers with expertise in image analysis, text mining, and machine learning to advance the state of the art in detection of spontaneous, a priori unspecified events (as they emerge) from social media data. It is expected to yield new scalable nonparametric Bayesian approaches to joint modeling of image and text data, and more generally multi-modal social media data. The resulting tools could potentially transform the way in which people use social media data by empowering them to discover and participate in real world events even as they emerge.
2012 — 2017 | Zhu, Xiaojin; Bellmore, Amy (co-PI)
III: Small: Advancing the Scientific Understanding of Bullying Through the Lens of Social Media @ University of Wisconsin-Madison
Bullying has been recognized as a serious national health issue. Traditional approaches to the scientific study of bullying are hindered by data acquisition. For example, the standard approach has been to conduct personal surveys in schools. Due to their relatively small sample sizes and low temporal resolution, neither the true frequency of bullying over the population nor the evolution of bullying roles can be satisfactorily studied. The traditional approaches are also very labor-intensive.
Social media has developed to the point where it carries a substantial signal about bullying. This project develops novel machine learning models that automatically monitor and analyze publicly available social media data to understand bullying. These machine learning models reconstruct hidden bullying episodes from a sequence of social media posts, automatically determining who participated in which bullying episode and in what role. In addition, this project conducts parallel human studies on bullying in school and in social media, collecting self-report surveys from school-aged children and their social media posts simultaneously. Such studies correlate the traditional psychological approach with social media data on bullying. Taken together, the project will provide significant new scientific data toward understanding bullying, designing interventions, and informing policy-making.
2014 — 2017 | Zhu, Xiaojin; Liblit, Benjamin; Snyder, Benjamin; Reps, Thomas (co-PI)
SHF: Small: Transforming Natural Language to Programming Languages @ University of Wisconsin-Madison
Just a few years ago, the majority of our computational input was confined to the traditional computer keyboard. Now, with the advent of ubiquitous computational devices in our hands, pockets, televisions, cars, and glasses, more fluid, natural, and less distracting methods of input are desirable. Besides the inherent limitations of the traditional keyboard, a more fundamental cost is the number of injuries sustained by repeated and long-term keyboard use. Carpal Tunnel Syndrome (CTS) alone affected nearly 5 million US workers in 2010. CTS and Cubital Tunnel Syndrome (CBTS) together account for $1 of every $3 spent on workers' compensation. Beyond these economic costs, these syndromes severely limit the ability of sufferers to access computational technology, locking them out of professions such as computer programming.
This project aims to develop spoken language interfaces for computer programming. The immediate goal is to create a spoken language dictation system for the popular Java programming language, which removes the need for the programmer to dictate difficult-to-verbalize syntactic elements such as parentheses, brackets, punctuation, and word casing. Instead, the system will employ stochastic models to infer the intended program with high fidelity from the ambiguous speech stream of the user. This project lays the groundwork for a more general framework for relating ambiguous natural human language to the various formal languages and systems that drive the functioning of the computer.
The key technical innovations of this project lie in the development of stochastic models for computer programs. Such models have met with much success in recent years for inferring the structure, meaning, and translations of human languages. While traditional programming languages are designed as deterministic grammars, any speech input in a more natural human language idiom will perforce involve ambiguity. This ambiguity may only be resolved by developing accurate and predictive probability models over computer programs. The models to be explored in this project include traditional n-gram language models as well as syntactic language models that make use of the programming language's grammar in order to more accurately assign likelihood to various interpretations of the speech input. In the long term, this research lays the groundwork for the development of robust speech toolkits for a wide variety of computational languages, such as domain-specific languages for cars and entertainment devices and database query languages.
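A traditional n-gram language model over code tokens, as mentioned above, can be sketched in a few lines. This is a minimal illustration, not the project's models: the bigram order, add-one smoothing, and the toy Java-like corpus are all assumptions made here for brevity.

```python
from collections import defaultdict
import math

class BigramLM:
    """Bigram language model over code tokens with add-one smoothing."""
    def __init__(self, corpus):
        self.bigrams = defaultdict(lambda: defaultdict(int))
        self.unigrams = defaultdict(int)
        self.vocab = set()
        for tokens in corpus:
            seq = ["<s>"] + tokens            # sentence-start marker
            for a, b in zip(seq, seq[1:]):
                self.bigrams[a][b] += 1
                self.unigrams[a] += 1
                self.vocab.update((a, b))

    def logprob(self, tokens):
        """Smoothed log-probability of a candidate token sequence."""
        seq = ["<s>"] + tokens
        V = len(self.vocab)
        lp = 0.0
        for a, b in zip(seq, seq[1:]):
            lp += math.log((self.bigrams[a][b] + 1) / (self.unigrams[a] + V))
        return lp
```

Given two candidate interpretations of an ambiguous speech stream, a speech-to-code system of the kind described could score both and keep the one the model finds more probable; syntactic language models would refine this by exploiting the programming language's grammar.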
2014 — 2017 | Banerjee, Suman; Zhu, Xiaojin; Zhang, Xinyu
EARS: A TV Whitespace Communication System for Connected Vehicles @ University of Wisconsin-Madison
The project is focused on designing a holistic communication stack that leverages TV whitespaces for enabling "connected vehicles" and their many advantages in ushering in smarter and safer transportation services. The combination of the low cost of the unlicensed TV whitespace spectrum and its longer communication range matches the needs of vehicular connectivity quite well. Yet realizing TV whitespace communications faces unique constraints and challenges, including protecting primary incumbents, scanning through many channel configurations subject to spatial-temporal variations, and dealing with asymmetric uplink/downlink transmit power constraints due to FCC regulations concerning mobile and static nodes. The proposed research addresses these challenges through a range of techniques. It augments spectrum sensing techniques from sensors distributed across moving vehicles to enhance the accuracy of spectrum databases. It employs innovative PHY- and MAC-layer techniques, including variable spreading codes, MIMO, and directional antennas, complemented with network/transport-layer protocols that adapt to uplink/downlink asymmetry. These research tasks integrate machine learning techniques to exploit the predictable mobility and other unique advantages of the vehicular whitespace network.
This project is developing a new communication technology for improved and robust wide-area connectivity for vehicles. All developed techniques are expected to be deployed in a practical setting on a realistic vehicular testbed in Madison, WI. The research work engages collaboration with many industry partners and is expected to influence regulatory and standardization bodies. The researchers are also making educational impact through public lectures, developing new curriculum, and motivating women and minorities in future STEM careers through appropriate PI engagement at high school and undergraduate levels.
2015 — 2020 | Rogers, Tim; Rau, Martina; Zhu, Xiaojin; Alibali, Martha (co-PI); Nowak, Robert
NRT-DESE LUCID: A Project-Focused Cross-Disciplinary Graduate Training Program for Data-Enabled Research in Human and Machine Learning and Teaching @ University of Wisconsin-Madison
NRT DESE: Learning, understanding, cognition, intelligence, and data science (LUCID)
In modern life there are many situations requiring people to interact with computers, either so that they may learn from the machine or so that the machine may learn from them. The applications in education, industry, health, robotics, and national security hint at the enormous societal and economic benefits arising from research into the technologies that promote learning in both people and computers. Yet the potential has been difficult to realize because such research requires scientists with expertise in quite different fields of study. While computer scientists receive training in complex computational ideas and methods, they know little about how people learn and behave. This National Science Foundation Research Traineeship (NRT) award to the University of Wisconsin-Madison will prepare trainees with data-enabled science and engineering training to simultaneously understand computational theory and methods, the mechanisms that support human learning and behavior, and the ways these mechanisms behave in complex real-world situations. The traineeship anticipates equipping forty (40) doctoral students with the skills and expertise necessary to advance our understanding of human and machine learning and teaching, through a new training program that focuses on learning, understanding, cognition, intelligence, and data science.
This project will train doctoral students from computer science, engineering, cognitive psychology, and education sciences, with the goal of promoting a common knowledge base that allows these scientists to work productively across traditional boundaries on both basic research questions and practical, real-world problems. The traineeship will include several graduate training innovations: (1) a project-focused "prof-and-peer" mentoring system where scientists work in cross-disciplinary teams to address a shared research problem, (2) close involvement of partners in industry, government, and non-profit sectors to develop research problems with real-world application, (3) an information outreach effort that trains scientists to communicate with the public, industry, and policy-makers through traditional and new media outlets, (4) a flexible development plan that allows each trainee to garner the cross-disciplinary expertise needed to advance a particular research focus, and (5) new mechanisms for recruiting and retaining under-represented groups in STEM research. This training will prepare US scientists to compete globally at the highest levels for positions in science, industry, and government, in a growth sector of the 21st century knowledge economy.
The NSF Research Traineeship (NRT) Program is designed to encourage the development and implementation of bold, new, potentially transformative, and scalable models for STEM graduate education training. The Traineeship Track is dedicated to effective training of STEM graduate students in high priority interdisciplinary research areas, through the comprehensive traineeship model that is innovative, evidence-based, and aligned with changing workforce and research needs.
This award is supported, in part, by the EHR Core Research (ECR) program, specifically the ECR Research in Disabilities Education (RDE) area of special interest. ECR emphasizes fundamental STEM education research that generates foundational knowledge in the field. Investments are made in critical areas that are essential, broad and enduring: STEM learning and STEM learning environments, broadening participation in STEM, and STEM workforce development.
2016 — 2019 | Rau, Martina; Zhu, Xiaojin; Nowak, Robert
EXP: Modeling Perceptual Fluency with Visual Representations in an Intelligent Tutoring System for Undergraduate Chemistry @ University of Wisconsin-Madison
The Cyberlearning and Future Learning Technologies Program funds efforts that support envisioning the future of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects design and build new kinds of learning technologies in order to explore their viability, to understand the challenges to using them effectively, and to study their potential for fostering learning. This EXP project aims to help students become visually fluent with visual representations (similar to becoming fluent in a second language). Instructors often use visuals to help students learn (e.g., pie charts of fractions, or ball-and-stick models of chemical molecules) and assume that students can quickly discern relevant information (e.g., whether or not two visuals show the same chemical) once that visual representation has been introduced. But comprehension is not the same as fluency -- students still expend significant mental effort and time interpreting even visuals that they understand conceptually, and the resulting cognitive load can cause them to miss other important information that instructors are imparting. To help improve student fluency with visuals, a series of experiments with undergraduate students and chemistry professors will investigate which visual features they pay attention to and use sophisticated statistical methods to devise example sequences that will most efficiently help students learn to pay attention to relevant visual features. Based on this research, the project team will develop a visual fluency training that will be incorporated into an existing, successful online learning technology for chemistry. 
The potential educational impact will not be limited to chemistry instruction: given the pervasiveness of visual representations in STEM fields and the number of students who struggle with rapid processing of those visuals, the products of this research could be integrated into other educational technologies.
The PIs will develop a methodology for cognitive modeling of perceptual learning processes that can create adaptive support for perceptual learning tasks. The research will combine machine learning with educational psychology experiments using an Intelligent Tutoring System (ITS) for undergraduate chemistry. In Phase 1, metric learning will assess which visual features of representations novice students and chemistry experts focus on. Applying metric learning to a novice-expert experiment will establish a skill model of student perceptions and perceptual learning goals for the ITS. In Phase 2, the team will use machine learning to develop a cognitive model of perceptual learning. The team will conduct a chemistry learning experiment and apply machine learning to test cognitive models. In Phase 3, the team will use the cognitive model to reverse-engineer optimal sequences of perceptual learning tasks. An experiment will evaluate the effectiveness of these sequences, and the team will build on this analysis to create an adaptive version of perceptual learning tasks. A final experiment will evaluate whether incorporating adaptive perceptual learning tasks with conceptually focused instruction enhances learning. Because educational technologies have traditionally focused on explicit learning processes that lead to conceptual competencies, they cannot currently assess the implicit learning processes that lead to perceptual fluency. Combining educational psychology, cognitive science, and machine learning will yield new cognitive models that could transform the adaptive capabilities of educational technologies to support such perceptual fluency as well as other implicit forms of learning. The project will also yield next-generation computational algorithms to model human similarity judgments and to use adaptive surveying to collect data on perceptual judgments more efficiently.
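The metric-learning step in Phase 1 can be illustrated with a deliberately tiny sketch: learn nonnegative per-feature weights so that the weighted distance is small for same-label pairs (e.g., representations of the same chemical) and large for different-label pairs. The diagonal parameterization, loss, and data here are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def learn_diag_metric(X, y, lr=0.1, epochs=200):
    """Learn nonnegative per-feature weights w so the weighted squared
    distance sum(w * (xi - xj)**2) shrinks for same-label pairs and
    grows for different-label pairs."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    w = np.ones(d)
    for _ in range(epochs):
        grad = np.zeros(d)
        for i in range(n):
            for j in range(i + 1, n):
                diff2 = (X[i] - X[j]) ** 2
                # Pull same-label pairs together, push different-label apart.
                grad += diff2 if y[i] == y[j] else -diff2
        w = np.maximum(w - lr * grad / (n * n), 0.0)   # project onto w >= 0
    return w
```

In the novice-expert experiment described above, large learned weights would flag the visual features that actually discriminate between representations, giving perceptual learning goals for the tutoring system.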
2016 — 2019 | Zhu, Xiaojin; Zhou, Shiyu
Enabling Cloud-Based Quality-Data Management Systems @ University of Wisconsin-Madison
Cloud-based platforms for accessing, sharing, and visualizing manufacturing-enterprise-level data are becoming available. In a cloud-based quality-data-management system, the quality-characteristics of different devices, products, and facilities are accumulated in a centralized database. These data pertain to multiple machines and multiple facilities, offering opportunities to achieve more effective quality control and productivity improvements. However, most cloud-based platforms are as yet unable to exploit the information contained in such data to make better decisions for production-system control and quality improvement. The objective of this project is to advance a series of methodologies that enable modeling of a large number of quality characteristics, timely change detection, accurate root cause diagnosis, and optimal repair decision-making. The project will also contribute to workforce training by offering students opportunities to engage in interdisciplinary research dealing with manufacturing, computing, sensing, and machine learning.
The reason why cloud-based platforms may not as yet exploit manufacturing-enterprise-level data lies in the dearth of techniques to (1) describe the quality characteristics and their relationships, and (2) make decisions informed by such descriptive models. To enable the cloud-based quality-data-management systems of the future, the investigators will first advance the methodology needed for a flexible, yet rigorous, hierarchical graphical model, which will describe the inter-relationships among different quality characteristics. The hierarchical structure of the model will enable information sharing across different facilities within an enterprise. Based on this descriptive model, the investigators will next develop methodologies for process monitoring and diagnosis via likelihood-based risk adjustment and Bayes-factor theory, and for optimal repair decisions via the Partially Observable Markov Decision Process (POMDP) framework. The developed methodologies will be tested on data obtained from an industrial collaborator.
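The POMDP flavor of the repair problem can be conveyed with a two-state sketch: a machine is either OK or faulty, its true state is hidden, and each produced part gives a noisy quality observation. All probabilities and the myopic threshold policy below are illustrative assumptions, not numbers or methods from the project.

```python
def belief_update(b_fault, obs_defect, p_degrade=0.05,
                  p_defect_given_fault=0.6, p_defect_given_ok=0.1):
    """Bayes filter for a hidden two-state (OK/faulty) machine.
    b_fault is the prior belief the machine is faulty;
    obs_defect is True if the latest part was defective."""
    # Predict: the machine may degrade between parts.
    b = b_fault + (1 - b_fault) * p_degrade
    # Correct: condition on the observed part quality.
    like_fault = p_defect_given_fault if obs_defect else 1 - p_defect_given_fault
    like_ok = p_defect_given_ok if obs_defect else 1 - p_defect_given_ok
    return like_fault * b / (like_fault * b + like_ok * (1 - b))

def repair_policy(b_fault, threshold=0.5):
    """Myopic POMDP-style policy: repair once belief crosses a threshold."""
    return "repair" if b_fault >= threshold else "continue"
```

A full POMDP solution would replace the fixed threshold with a policy optimized against repair and scrap costs, but the belief update above is the common core.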
2018 — 2022 | Jha, Somesh; Zhu, Xiaojin
FMitF: Collaborative Research: Formal Methods for Machine Learning System Design @ University of Wisconsin-Madison
Machine learning (ML) algorithms, fueled by massive amounts of data, are increasingly being utilized in several critical domains, including health care, finance, and transportation. Models produced by ML algorithms, for example deep neural networks, are being deployed in these domains where trustworthiness is a big concern. It has become clear that, for such domains, a high degree of assurance is required regarding the safe and correct operation of ML-based systems. This project seeks to provide a systematic framework for the design of ML systems based on formal methods. The project seeks to review and improve almost every aspect of the design flow of ML systems, including data-set design, learning algorithm selection, training of ML models, analysis and verification, and deployment. The theory and ideas generated during the project will be implemented in a new software toolkit for the design of ML systems in the context of cyber-physical systems.
The project focuses on cyber-physical systems (CPS), which is a rich domain to apply formal methods principles. Moreover, the research ideas from this project can be readily applied to other contexts. A key aspect of this research is the use of a semantic approach to the design and analysis of ML systems, where the semantics of the target application and a formal specification for the full system, comprising the ML component and other components, are cornerstones of the design methodology. The project employs a range of formal methods, including satisfiability solvers, simulation-based verification, model checking, specification analysis, and synthesis to improve all stages of the ML design flow. Formal techniques are also used for the tuning of hyper-parameters and other aspects of the training process, to aid in debugging misclassifications produced by ML models, and to monitor ML systems at run time and ensure that outputs from ML models are used in a manner that ensures safe operation at all times.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2021 | Bier, Vicki; Zhu, Xiaojin; Lupyan, Gary (co-PI)
EAGER: Rule Induction Games to Explore Differences Between Human and Machine Intelligence @ University of Wisconsin-Madison
This project tackles a previously unexplored problem in the relationship between human and machine learning. Many problems that challenge human intelligence (chess, Go) have yielded to modern computer algorithms. Yet some tasks that are easy for humans or even animals, such as flexible locomotion and rapid, robust visual understanding of the surroundings, are still at the cutting edge of artificial-intelligence research. Computers calculate without error. Yet, for example, quite a few people who know the difference between odd and even will say that 798 is odd, perhaps because two thirds of its digits are odd. Are there fundamental differences between the way computers learn and the way humans learn? Can they be found with a rigorous study of games where the player must learn a rule by trial and error? This project uses games involving the learning of rules to explore similarities and differences between human and machine learning. It will seek new insights into human learning and may improve understanding of machine learning as well. Long term, it aims to better integrate algorithms and humans for solving real-world problems; humans and computers work together best when they can complement each other. This project will seek generalizable distinctions between rules that are easy for humans and rules that are hard for humans; the special focus is to find problems where the order of difficulty is exactly reversed for machines. Finding the principles behind these reversals will help to triage problems. The long-term goal is hybrid systems, in which human and machine learning are integrated to achieve goals such as medical diagnosis and treatment planning. This project, if successful, will contribute to rigorously defining how and why some learning problems that seem relatively easy for humans are nonetheless more difficult for machines, and vice versa.
With a focus on the specific activity of rule finding, this research may even shed new light on the scientific process, which has been characterized as "discovering the rules of nature."
This project explores complementarity between machine learning and human learning with a rigorously balanced approach, using a "rule induction" challenge that is presented to both humans and computers. Computers will use state-of-the-art deep neural networks and explore the hypothesis space of rules describable in the project's coding language. The psychological research investigates crucial problems such as transfer learning across rules, and the role of language and naming in rule discovery. Both human and machine "players" learn the rules by trial and error. The rule encoding language, reinforcement-learning processes, and scoring systems ensure symmetry of human and machine learners. Performance measures will include discounted reward and convergence to error-free play. Learning curves will be used to measure the difficulty of learning each rule. Experimental conditions will be systematically varied, including not only the rule to be learned, but also parameters such as the minimum and maximum number of different shapes displayed, the maximum number of "boards" that a player may use in attempting to learn a given rule, and the incentive/reward structure by which players earn rewards for their performance. The research will seek to identify pairs of classes of rules such that the class that is easier for humans is more difficult for computers, and vice versa. The project will involve extensive experiments using diverse machine-learning approaches, as well as Amazon Mechanical Turk for data on human learning performance.

Comparing the learnability of different rules sheds new light on human learning biases, may prove useful for structuring curricula, and may help identify which gaps in knowledge are most detrimental to human problem solving. The goal is to interpret or explain what distinguishes these anomalous pairs of rule classes from others where the relative degree of difficulty is the same for humans and computers.
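Trial-and-error rule learning of the kind described can be sketched as version-space elimination: the learner keeps every candidate rule consistent with the feedback so far and acts on one of the survivors. The boolean-function rule representation and the elimination strategy are illustrative assumptions; the project's rule language and reinforcement-learning setup are far richer.

```python
def play(rules, target, items):
    """Version-space sketch of trial-and-error rule learning.
    rules: candidate boolean rules; target: the hidden true rule;
    items: the sequence of game stimuli. Returns the per-trial
    correctness history and the surviving candidate rules."""
    candidates = list(rules)
    history = []
    for x in items:
        guess = candidates[0](x)          # act on the first surviving rule
        truth = target(x)                 # feedback from the game
        history.append(guess == truth)
        candidates = [r for r in candidates if r(x) == truth]
    return history, candidates
```

Under such a learner, a rule's difficulty corresponds to how many trials are needed before a single candidate survives; comparing that to human learning curves is the kind of symmetry the project sets up.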
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025 | Zhu, Xiaojin; Rau, Martina
Digitally Inoculating Viewers Against Visual Misinformation With a Perceptual Training @ University of Wisconsin-Madison
Misinformation impedes people’s ability to make informed decisions in many areas, for example politics, health care, purchasing, or investing. Misinformation can be created by accident or intentionally. Misleading graphs are a particularly dangerous form of misinformation because they can make false information more believable and reach viewers faster. To combat misinformation in graphs, one needs to consider two aspects of graph comprehension: conceptual reasoning and perception. Prior research has focused on conceptual reasoning about graphs. Yet, because perception is automatic, it is especially prone to false information in misleading graphs. This project focuses on perception. The investigators will develop a perceptual training method that helps viewers to extract correct information from misleading graphs. The perceptual training method will be provided as a web browser plugin. It will provide feedback as viewers see misleading graphs on the web. The investigators will use machine learning algorithms to design the perceptual training method. The project will advance scientific understanding of perception in graph comprehension. It will also develop machine learning algorithms for educational purposes. The project will provide new tools for addressing issues of misinformation.

Misinformation poses a severe risk to society. Misleading graphs are a type of visual misinformation that can quickly convey false information to viewers. While existing interventions for visual misinformation target conceptual processes, perceptual processes also play an important role. Perceptual processes are automatic and prone to biases. Visual misinformation often targets perceptual over conceptual processing. Therefore, this project directly targets perceptual processes. Investigators will develop a perceptual training method that will teach viewers to extract correct information from misleading graphs so that they become “immune” against visual misinformation.
The perceptual training method will be delivered as a web browser plugin and will have two components. First, upon installing the browser plugin, viewers will receive a 2-minute massed training that will serve as the initial “vaccine” against misleading graphs. Second, the browser plugin will deliver a spaced training by giving feedback when viewers encounter misleading graphs on the web, which serves as a “booster” for their immunity. The investigators will use machine learning algorithms to decide which type of feedback the perceptual training should offer and how often such feedback should be provided. Two randomized experiments will evaluate components of the perceptual training method while participants browse the web. This project will advance scientific understanding of perceptual learning and educational applications of machine learning algorithms, and will develop novel approaches to combat misinformation.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.