2002 — 2007 |
Sajda, Paul |
N/A |
Career: Probabilistic Models For Integrating Biochemical and Morphological Markers For Cancer
Under this CAREER Award (NSF 0133804), a new set of computer-assisted analysis techniques will be developed to improve the noninvasive diagnosis of brain cancer by integrating biochemical and morphological markers from MRSI (magnetic resonance spectroscopy imaging) and MRI (magnetic resonance imaging). MRSI, which allows for characterization and quantification of biochemical metabolites and the construction of metabolite intensity images, combined with MRI provides a biochemical and morphological view of the disease. Using short MRSI echo time techniques, a 10-20 dimensional multivariate feature space will be studied to uncover specific signatures for characterizing cancer. Specific aims include: develop "semi-blind" source separation using a maximum a posteriori framework for recovery of metabolite intensity images in MRSI; characterize the correlations and dependencies between metabolite intensity images and morphological information derived from MRI; develop a hierarchical probabilistic model for integrating metabolite intensity images with MRI for the joint biochemical/morphological characterization of brain tumors; and assess the performance of the models within the context of computer-assisted diagnosis, making comparisons to traditional methods that have relied on fairly elementary relationships, such as the ratio of two metabolite concentrations.
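The traditional methods mentioned above reduce each voxel to a ratio of two metabolite concentrations, whereas the proposed MAP framework recovers full metabolite intensity images. As a hedged illustration of that idea (not the award's actual "semi-blind" model), the sketch below assumes known metabolite basis spectra and a Gaussian prior on concentrations, in which case the MAP estimate reduces to ridge-regularized linear unmixing; all variable names and dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each voxel's MRSI spectrum is a linear mix of known
# metabolite basis spectra plus noise. With a Gaussian likelihood and a
# Gaussian prior on concentrations, the MAP estimate has a closed form
# (ridge regression); the grant's semi-blind approach is far richer.
n_points, n_metab, n_voxels = 256, 3, 100
S = rng.random((n_points, n_metab))          # basis spectra (assumed known)
C_true = rng.random((n_metab, n_voxels))     # true per-voxel concentrations
Y = S @ C_true + 0.01 * rng.standard_normal((n_points, n_voxels))

lam = 0.1                                    # ratio of noise variance to prior variance
# MAP estimate: argmin_C ||Y - S C||^2 + lam ||C||^2
C_map = np.linalg.solve(S.T @ S + lam * np.eye(n_metab), S.T @ Y)

print(np.abs(C_map - C_true).max())          # small reconstruction error
```

Each column of `C_map` is one voxel's metabolite concentration estimate, so reshaping a row of `C_map` over the voxel grid gives a metabolite intensity image.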
The educational component of the proposal focuses on a program in machine learning for biomedical engineering, including a new course and computer laboratories and efforts that would serve as a basis of an industrial internship program. The course will introduce students to the mathematical theory behind machine learning and probabilistic models, their application to the biomedical sciences, and techniques for evaluating and validating their performance.
|
1 |
2004 — 2007 |
Sajda, Paul |
R21 — To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) R33 — To provide a second phase of support for innovative exploratory and development research activities initiated under the R21 mechanism. Although only R21 awardees are generally eligible to apply for R33 support, specific program initiatives may establish eligibility criteria under which applications could be accepted from applicants demonstrating progress equivalent to that expected under R33. |
A Non-Invasive Single-Trial in-Vivo Neuroimaging System @ Columbia Univ New York Morningside
DESCRIPTION (provided by applicant): The overall goal of this project is to develop an integrated single-trial system for neuroimaging which combines high-density electroencephalography (EEG) with simultaneous functional magnetic resonance imaging (fMRI), and to use this system to investigate variability in neural processing. The high temporal resolution of EEG will enable the detection of signal variability in single-trial events, and this information will be used as the input function for analysis of simultaneously acquired event-related fMRI (efMRI). We hypothesize that using single-trial EEG-derived regressors for efMRI (stEEG/fMRI) will yield high spatial and high temporal resolution information about the functional neuroanatomy involved in cognitive processing. This will enable construction of unique EEG-derived fMRI activation maps which are not based on pre-defined labels or observed behavioral responses but rather on task- and subject-specific electrophysiological source variability. The broad impact of this work will be development of a new non-invasive imaging system (stEEG/fMRI) for the cognitive neurosciences as well as a clinical tool for diagnosis and monitoring of a broad spectrum of neurological diseases. The R21 effort focuses on development of a high-density (64 channel) EEG/fMRI integrated system for single-trial analysis, and characterization of possible differences between the EEG recorded in an MR environment and that recorded in a standard environment. The R33 will then demonstrate the use of stEEG/fMRI in a pilot study of cognitive aging. R21 Aims: 1. Develop an in-magnet 64-channel EEG system for single-trial analysis of event-related potentials recorded concurrently with fMRI. 2. Assess the quality of EEG collected inside the MR scanner compared to that collected in a shielded EEG room, using a series of predefined protocols for characterizing the effects of the auditory and magnetic environments on EEG and ERP waveforms. 3. Validate that EEG recorded simultaneously with fMRI is of high enough quality to detect task-relevant single-trial signatures using supervised machine learning. R33 Aims: 1. Use single-trial EEG-derived regressors, constructed via supervised machine learning, to construct efMRI activation maps (stEEG/fMRI activation maps) for auditory oddball and Eriksen flanker tasks. 2. Use alpha power as a complementary regressor within stEEG/fMRI for capturing additional single-trial variance in the hemodynamic response. 3. Demonstrate that stEEG/fMRI activation maps yield new information for discriminating young and old adult populations, as compared to traditional efMRI and P3/ERN ERP analyses.
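The regressor construction in R33 Aim 1 can be illustrated in simplified form. The sketch below is an assumption-laden toy, not the project's pipeline: it scales a stick function at hypothetical trial onsets by single-trial EEG component amplitudes, then convolves with a canonical double-gamma hemodynamic response function (the HRF parameters and all numbers are illustrative):

```python
import numpy as np
from math import gamma as gamma_fn

# Toy single-trial EEG-derived fMRI regressor: place each trial's EEG
# component amplitude at its onset scan, then convolve with a canonical
# double-gamma hemodynamic response function (HRF).
def gamma_pdf(t, a):
    return t ** (a - 1) * np.exp(-t) / gamma_fn(a)

def double_gamma_hrf(tr=2.0, duration=30.0):
    t = np.arange(0.0, duration, tr)        # sampled at the scan rate (TR)
    h = gamma_pdf(t, 6.0) - 0.35 * gamma_pdf(t, 16.0)
    return h / np.abs(h).max()

n_scans = 120
onsets = np.array([10, 40, 70, 100])        # hypothetical trial onsets (scan units)
amps = np.array([0.8, 1.2, 0.5, 1.0])       # hypothetical single-trial EEG amplitudes

stick = np.zeros(n_scans)
stick[onsets] = amps                        # amplitude-modulated stick function
regressor = np.convolve(stick, double_gamma_hrf())[:n_scans]
```

Entering `regressor` into a standard GLM of the voxel time series would then map voxels whose hemodynamic response covaries with the trial-to-trial EEG variability.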
|
0.939 |
2009 — 2013 |
Sajda, Paul |
R01 — To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Multimodal Neuroimaging For Mapping Decision Making in the Human Brain @ Columbia Univ New York Morningside
DESCRIPTION (provided by applicant): Perceptual decision making is one of the most basic forms of cognition, for it is how sensory input is mapped to specific behavior. Substantial effort has focused on uncovering the constituent cortical networks involved in this early form of cognition using both single- and multi-unit recordings in primates and, more recently, functional neuroimaging in humans. These neuroimaging studies, typically utilizing functional magnetic resonance imaging (fMRI), have identified frontal, parietal, and thalamic areas in which metabolic activity correlates with decision-related variables. However, decision making is a dynamic process, and the localized activations found with fMRI must be part of cortical networks defined by the relative timing of these activations and their causality. The overall goal of this project is to couple high temporal resolution, single-trial analysis of electroencephalography (EEG) with simultaneously acquired fMRI to infer the constituent cortical networks of perceptual decision making in the human brain. Specific aims are 1) to replicate and systematically expand upon our prior results showing neural components correlate with task-relevant decision making variables, but for the case of EEG acquired simultaneously with fMRI, 2) to link trial-to-trial variability of EEG components, identified for perceptual decision making, with spatial areas simultaneously imaged with fMRI, and 3) to extend our perceptual decision making paradigm from brief stimulus presentation to prolonged and dynamic stimuli and use single-trial analysis of simultaneous EEG/fMRI to differentiate cortical networks involved in evidence accumulation. This project will significantly advance our understanding of decision making in the human brain by providing a more precise cortical network "diagram" which could be used to better compare differences observed between primate and human data.
Finally, this research could lead to a better understanding of cortical processing underlying basic cognitive deficits, linking spatial and temporal changes in activations to specific neurological diseases and disease states. PUBLIC HEALTH RELEVANCE: In this project we will use state-of-the-art neuroimaging to map the neural networks underlying decision making in the human brain. This project will both shed light on basic neuroscience questions related to decision making in humans and lead to a better understanding of cognitive deficits and neurological diseases in which decision making is affected.
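Aim 3 frames perceptual decisions as evidence accumulation. A standard formal model of that process, used here purely as an illustration with arbitrary parameters (not the project's analysis method), is the drift-diffusion model: noisy evidence accumulates toward one of two bounds, and the crossing time is the decision time.

```python
import numpy as np

# Illustrative drift-diffusion simulation of evidence accumulation.
# Evidence drifts toward one of two decision bounds under noise;
# crossing a bound yields the choice and the reaction time.
def simulate_ddm(drift=0.2, bound=1.0, dt=0.01, noise=1.0, max_t=10.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else 0), t  # (choice, reaction time)

rng = np.random.default_rng(0)
choices, rts = zip(*(simulate_ddm(rng=rng) for _ in range(500)))
print(np.mean(choices), np.mean(rts))   # positive drift -> mostly upper-bound choices
```

In the EEG/fMRI setting, trial-to-trial variation in accumulation rate is exactly the kind of single-trial variability the proposed regressors are meant to capture.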
|
0.939 |
2009 — 2011 |
Chang, Shih-Fu; Sajda, Paul |
N/A |
Workshop On Hybrid Neuro-Computer Vision Systems
The human visual system is among the most complex information-processing machines, with unique capabilities in recognizing objects at a glance under varying poses, illuminations, and scales. Recent advances in neuroimaging and sensing devices make this an exciting time in vision research. We can now see the brain "in action" while it performs complex visual recognition and scene understanding. Modalities such as EEG, MEG, fMRI, and eye tracking have helped identify neural correlates of the information processing strategies used by the human visual system. Recent efforts in computer vision research have also yielded some promising successes, though mostly for constrained problems.
This workshop brings together world leaders in the fields of visual neuroscience, neural computing, and computer vision to discuss our current understanding of how the brain is able to rapidly recognize objects and analyze a visual scene, relative to the capabilities of state-of-the-art computer vision systems. Emphasis is placed on identifying synergies between human and machine vision and potential neural interfaces that could be used to create hybrid vision systems. An additional focus is on rapid scene analysis and recognition, rather than reasoning and higher-level cognition. The workshop aims at broader impacts by facilitating fruitful interaction among experts from three separate fields. The major outcome is a report that documents in detail the grand research challenges, opportunities, and concrete recommendations of actions for NSF and other funding agencies.
|
1 |
2015 — 2018 |
Sajda, Paul; Kender, John |
N/A |
Ri: Medium: Assessing Speaker and Teacher Effectiveness Through Gestural Analysis, Eeg Recordings, and Eye Tracking
This project helps speakers and teachers to measure and improve their impact on their audiences. It uses visual observations of body, head, and hand gestures of the communicator, plus recordings of brain activity and eye movements of the audience. Together, these determine which sections of a presentation elicit the most audience engagement. The project is developing new methods to capture and calibrate electroencephalogram and eye-tracking data from listeners and from students. It is determining new ways to relate this subject information to what a speaker or teacher can be seen to be doing while developing an argument or reviewing a concept. The project produces analyses of when and how the communicator is most effective. This system is being ported to the Columbia Video Network distance education facility, for their use in improving the online delivery of Columbia University Master's level technical courses. This project continues a research effort that has involved women, minorities, disabled students, and undergraduates.
This research investigates the degree to which certain speaker gestures convey significant information that is correlated with audience engagement, in speeches and in classroom lectures. The project develops and validates a catalog of gestural attributes derived from pose and movements of body, head, and hand, and automatically extracts these attributes from videos. It demonstrates correlations between gesture attributes and an objective method of measuring audience engagement: electroencephalography (EEG). The project leverages a multi-disciplinary approach, with neural engineers and computer/media scientists collaborating to build a system that identifies and tracks physiological measures of engagement and relates these to features in the video as well as to information content. It records subjects' high-density EEG and tracks their eyes and pupillary responses while they watch video lectures. It uses machine learning, specifically novel methods that expand upon canonical correlation analysis, to relate inter- and intra-subject correlations between the physiological changes and the gestural features derived from the video using enhanced computer vision techniques. These measures are further integrated with pupillary measures, which have been shown to correlate with arousal, as well as with gaze measures, which are indicative of attention. The project is producing an analysis of body, head, and hand gestures useful in persuasion and in education, and a catalog of their influence on engagement and speaker effectiveness.
|
1 |
2016 — 2019 |
Sajda, Paul; Allen, Peter |
N/A |
Nri: Collaborative Research: Multimodal Brain Computer Interface For Human-Robot Interaction
Human-Robot Interaction (HRI) research is a key component of making robots part of our everyday life. Current interface modalities such as video, keyboard, tactile, audio, and speech can all contribute to an HRI interface. However, an emerging area is the use of Brain-Computer Interfaces (BCIs) for communication and information exchange between humans and robots. BCIs can provide another channel of communication with more direct access to physiological changes in the brain. BCIs vary widely in their capabilities, particularly with respect to spatial resolution, temporal resolution, and noise. This project is aimed at exploring the use of multimodal BCIs for HRI. Multimodal BCIs, also referred to as hybrid BCIs (hBCIs), have been shown to improve performance over single-modality interfaces. This project is focused on using a novel suite of sensors (electroencephalography (EEG), eye tracking, pupil size, computer vision, and functional near-infrared spectroscopy (fNIRS)) to improve current HRI systems. Each of these sensing modalities can reinforce and complement the others, and when used together they can address a major shortcoming of current BCIs: the determination of user state or situational awareness (SA). SA is a necessary component of any complex interaction between agents, as each agent has its own expectations and assumptions about the environment. Traditional BCI systems have difficulty recognizing state and context, and accordingly can become confusing and unreliable. This project will develop techniques to recognize state from multiple modalities, and will also allow the robot and human to learn about each other's state and expectations using the hBCI we are developing. The goal is to build a usable hBCI for real physical robot environments, with noise, real-time constraints, and added complexity.
The technical contributions of this project include: 1. Characterization of a novel hBCI interface for visual recognition and labeling tasks with real physical data and environments. 2. Integration of fNIRS sensing with EEG and other modalities in human-robot interaction tasks. We will test, in the temporal domain, at what timescale we can correctly classify movement components that predict a correct (rewarding) trial versus a non-rewarding/incorrect movement. 3. Analysis and validation of the hBCI in complex robotic tele-operation tasks with human subject operators, such as opening a door, grasping an object on a table, and picking up an item off the floor. 4. Use of the hBCI to characterize human/robot state and create a learning method to recognize state over time. 5. Use of augmented reality for HRI decision making. 6. Further development of the hBCI for tracking cognitive states related to reward, motivation, attention, and value. A new class of HRI interfaces will be developed that can expand the ability of humans to work with robots; promote the use and acceptance of robot agent systems in everyday life; expand the use of hBCIs in areas other than robotics for human-machine interaction; further the development of hBCIs, as our system will be tapping into reward-modulated activity that will be used via reinforcement learning to autonomously update the learning machinery; and bridge the educational divide between Engineering/Computer Science and Neuroscience.
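One common way hybrid BCIs combine modalities is late fusion: train a decoder per modality, then fuse their probability outputs with a meta-classifier. The sketch below is a generic, hedged illustration of that scheme on synthetic data (the per-modality signal-to-noise values and all features are invented; this is not the project's system):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 2, n)                      # target vs. non-target user state

def modality(snr):
    # synthetic 4-feature recording for one modality at a given SNR
    return y[:, None] * snr + rng.standard_normal((n, 4))

mods = [modality(s) for s in (0.6, 0.4, 0.3)]  # stand-ins for EEG, pupil, fNIRS
tr, te = slice(0, 400), slice(400, None)       # train / test split

# Late fusion: one base classifier per modality, meta-classifier on top
base = [LogisticRegression().fit(m[tr], y[tr]) for m in mods]
P_tr = np.column_stack([clf.predict_proba(m[tr])[:, 1] for clf, m in zip(base, mods)])
P_te = np.column_stack([clf.predict_proba(m[te])[:, 1] for clf, m in zip(base, mods)])
meta = LogisticRegression().fit(P_tr, y[tr])
print(meta.score(P_te, y[te]))                 # fused accuracy on held-out trials
```

The fused decoder typically outperforms any single modality because the noise across modalities is largely independent, which is the motivation for contribution 2 above.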
|
1 |
2018 — 2021 |
Sajda, Paul |
N/A |
Chs: Small: Optimizing Human-Machine Performance Via Neurofeedback and Adaptive Autonomy
Our society is being fundamentally transformed by increased interaction between humans and autonomous artificial intelligence (AI) systems. However, the addition of autonomy to our lives will not be successful unless we understand how smart machines and humans should best interact and communicate. Human-machine communication today is almost entirely linguistic, using spoken language for systems such as Siri or Alexa, or typed text for chatbots. However, humans communicate extremely efficiently with each other by using much more than just words; for example, by being sensitive to facial expression, gestures, gait, and intonation. In fact, great teams, whether sports teams or military combat teams, are excellent at predicting teammates' behavior and state of mind. In this project, the investigators consider both basic science and technology questions with respect to how to communicate the cognitive and physiological state of a human who is cooperating with an autonomous AI. The project has very broad implications since it addresses fundamental questions related to the interactions between humans and smart machines.
The project investigates the hypothesis that adaptive autonomy together with coordinated neurofeedback can be employed in the same system to optimize human-machine performance. Investigators will develop a framework and investigate the hypothesis within the context of boundary avoidance tasks (BAT), a class of tasks in which task-critical boundaries surround the optimal operating point of the control system. These tasks are particularly interesting when considering human control because they typically result in a positive feedback loop that systematically increases the arousal state of the human subject, resulting in increasingly poor task performance and ultimate task failure, consistent with the Yerkes-Dodson law. Our framework uses a brain-computer interface (BCI) both to engage autonomy and to serve as a source of neurofeedback that can shift human subjects to their performance 'sweet spot'. This project will advance the science and technology of how human-machine systems can be optimally integrated, specifically when both 1) the machine has access to ongoing changes in human cognitive and physiological state during performance of the task and 2) the human is made aware of their own state via appropriate neurofeedback.
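To make the closed loop concrete, here is a toy numerical sketch of the inverted-U idea (our own illustrative assumptions, not the project's controller): performance peaks at a mid-level arousal "sweet spot" per the Yerkes-Dodson law, the machine's autonomy share grows as decoded arousal drifts away from that point, and increased autonomy damps arousal back toward the peak.

```python
import numpy as np

# Yerkes-Dodson-style inverted-U: performance peaks at a mid-level arousal
def performance(arousal, sweet_spot=0.5, width=0.2):
    return np.exp(-((arousal - sweet_spot) ** 2) / (2 * width ** 2))

# Adaptive autonomy: machine takes more control as arousal leaves the sweet spot
def autonomy_level(arousal, sweet_spot=0.5, gain=2.0):
    return float(np.clip(gain * abs(arousal - sweet_spot), 0.0, 1.0))

arousal = 0.9                                   # over-aroused operator (BAT spiral)
for step in range(20):
    a = autonomy_level(arousal)
    # offloading task demand relaxes the operator; more autonomy, faster relaxation
    arousal += 0.1 * (0.5 - arousal) * (1 + a)
print(round(arousal, 2), round(float(performance(arousal)), 2))
```

Under these toy dynamics the loop pulls the operator back near the sweet spot, where task performance is restored; the actual project closes this loop with BCI-decoded state and neurofeedback rather than a hand-set update rule.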
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2019 — 2021 |
Rubenstein, Dan; Sajda, Paul; Ochsner, Kevin (co-PI); Jennings, Charles |
N/A |
Gcr: Emotionally Responsive Computation and Communication
The objective of this Growing Convergence Research project is to develop emotionally responsive computing systems that can detect human emotions and automatically deploy resources or prioritize services to meet human needs in disaster or crisis situations. This research is motivated by the need to more rapidly assess the massive data flows that occur during crisis events to better respond to the situations. Through the convergence of computer and computational sciences, neurotechnology, and psychological sciences, this project will develop technologies that can monitor humans' emotions and process the emotional states to strategically deploy resources to address human needs. Emotionally responsive computing systems could be used by 911 call centers to prioritize calls or by first responders responding to a public threat.
This exploratory research will a) develop and validate computational assessments of human behavior in controlled virtual reality simulations, b) develop an integrated physiological measurement package targeting electrodermal activity, heart rate, electroencephalography, and pupillometry, and c) assess whether systems that adapt to emotional states must be trained to the individual or can be trained to group types. Key research questions focus on understanding the timescale at which emotions can and should be sensed in reaction to situational changes, and the diversity of human emotional responses and group dynamics.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |