1984 — 1988
Khosla, Pradeep (co-PI); Kanade, Takeo
Sensing and Computation For Dexterous Robot Control @ Carnegie-Mellon University
1985 — 1988
Kanade, Takeo; Shafer, Steven
Optical Modeling in Image Understanding: Color, Gloss and Shadows (Computer and Information Science) @ Carnegie-Mellon University
1985 — 1987
Raibert, Marc; Sanderson, Arthur; Kanade, Takeo
International Workshop On Sensing and Control For Mobile Robots, Hawaii, March 1986 @ Carnegie-Mellon University
1986 — 1989
Thorpe, Charles (co-PI); Kanade, Takeo
Understanding 3-D Dynamic Natural Scenes With Range Data (Computer and Information Science) @ Carnegie-Mellon University
1990 — 1995
Kanade, Takeo; Carley, Larry
A Three Dimensional Imaging System Integrating Parallel Analog Signal Processing and IC Sensors @ Carnegie-Mellon University
The topic of this research is the design and use of a smart sensor system for light-stripe range finding. A plane of light is swept over a three-dimensional object that is imaged on a two-dimensional array of pixels. (The array of pixels and the plane of light are parallel planes.) At the time that a point on the object is illuminated, the corresponding pixel receives its maximum light intensity. By recording the times at which pixels receive their maximum light intensities, the three-dimensional structure of the object can be determined. The sensor uses photodiodes integrated on a chip with analog circuitry at each pixel that determines the time of maximum illumination. This circuitry is augmented with inter-pixel signal processing circuitry to increase the accuracy of the sensor, and with A/D converters and multiplexors to allow communication with a computer. A 28 x 32 sensor array has been fabricated and demonstrated.
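The per-pixel peak-time computation the abstract describes can be sketched in software. This is a hedged illustration only: the linear time-to-depth mapping and all numeric values are assumptions, not the chip's actual optics or calibration.

```python
import numpy as np

def peak_times(frames):
    """frames: (T, H, W) stack of intensities captured during one sweep.
    Returns the (H, W) array of time indices of maximum illumination."""
    return np.argmax(frames, axis=0)

def times_to_depth(t_idx, t_total, z_near, z_far):
    """Map peak-time indices to depth, assuming the stripe sweeps the
    working volume linearly from z_near to z_far over t_total steps."""
    return z_near + (z_far - z_near) * (t_idx / (t_total - 1))

# Toy sweep over a 28 x 32 array, matching the fabricated sensor size.
T, H, W = 100, 28, 32
rng = np.random.default_rng(0)
true_t = rng.integers(0, T, size=(H, W))       # per-pixel peak times
frames = np.zeros((T, H, W))
for i in range(H):
    for j in range(W):
        frames[true_t[i, j], i, j] = 1.0       # impulse at the peak time

t_idx = peak_times(frames)                     # recovers true_t exactly
depth = times_to_depth(t_idx, T, z_near=0.5, z_far=1.5)
```

The chip performs the `peak_times` step in analog circuitry at each pixel, in parallel, rather than scanning frames in software.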
1992 — 1994
Kanade, Takeo
Proposal For Research in Parallel Languages and Environments For Computer Vision @ Carnegie-Mellon University
This award supports a postdoctoral associate to work in experimental computer science. The associate, J. Ross Beveridge, will work with Dr. Takeo Kanade in the Robotics Institute at Carnegie Mellon University, in the area of parallel languages and software development environments for computer vision. The research will focus on the needs of computer vision, in particular low-level and intermediate-level vision. The goal is the development of a coherent parallel environment that is independent of any particular parallel architecture. The intellectual contribution of this research will be a better understanding of how to support computer vision in parallel environments. A concrete product of this research will be a pilot system implemented on a new Sony parallel computer.
1993 — 1994
Kanade, Takeo
International Symposium of Robotics Research: October 2-5, 1993, Pittsburgh, Pa @ Carnegie-Mellon University
Award 9320016: This award provides partial funding for the sixth International Symposium on Robotics Research, to be held in Pittsburgh, PA on October 2-5, 1993. The symposium brings together some of the most active robotics researchers from academia, government, and industry to examine the state of the art and future research directions in robotics. A book will be produced containing papers that review established research areas as well as papers reporting on new areas.
1993 — 1996
Kanade, Takeo; Carley, Larry; Pomerleau, Dean; Gruss, Andrew
Alvinn-On-a-Chip: a Computational Sensor For Road Following @ Carnegie-Mellon University
This project is building and deploying an intelligent imaging sensor for road following. The sensor generates the heading information required to steer a robotic vehicle by watching the road. On-chip processing is performed by a neural network trained to drive autonomously on public highways. The circuitry that performs the neural computations is integrated with a photosensor array in order to sense road-image information directly. The photosensor array includes analog signal processing in each cell and binary optics for better photon statistics, decreased transducer size, and less interference.
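The on-chip computation is, in outline, a small feedforward network mapping a low-resolution road image to a steering direction. A hedged sketch, with layer sizes loosely following the published ALVINN design (30x32 retina, 30 output units); the random weights stand in for training on recorded driving data:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(30 * 32, 5))   # retina -> 5 hidden units
W2 = rng.normal(scale=0.1, size=(5, 30))        # hidden -> 30 steering units

def steer(image):
    """Return the index (0..29) of the winning steering unit,
    left-to-right, for one 30x32 road image."""
    h = np.tanh(image.reshape(-1) @ W1)          # hidden-layer activations
    return int(np.argmax(h @ W2))                # winner-take-all steering

direction = steer(rng.random((30, 32)))          # a stand-in road image
```

On the chip, the matrix-vector products run in analog circuitry co-located with the photosensors, so the image never has to be read out to a separate processor.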
1994 — 1998
Kanade, Takeo
Informedia: Integrated Speech, Image and Language Understanding For Creation and Exploration of Digital Video Libraries @ Carnegie-Mellon University
This project - the Informedia Project - establishes a large on-line digital video library by developing intelligent, automatic mechanisms to populate the library, allowing full content- and knowledge-based search and retrieval via desktop computers and metropolitan area networks. The distinguishing feature of the technical approach is the integrated application of speech, language, and image understanding technologies for efficient creation (acquisition, recognition, segmentation, and indexing) and exploration (query, search, retrieval, and display) of the library. In addition, a network billing server will be implemented to study the economics of charging strategies and to incorporate mechanisms that ensure privacy and security.
2001 — 2005
Kanade, Takeo; Wactlar, Howard; Christel, Michael; Hauptmann, Alexander (co-PI); Derthick, Mark
Itr/Im: Capturing, Coordinating and Remembering Human Experience @ Carnegie-Mellon University
This work will develop algorithms and systems enabling people to query and communicate a synthesized record of human experiences derived from individual perspectives captured during selected personal and group activities. For this research, an experience is defined through what you see, what you hear, where you are, and associated sensor data and electronic communications. The research will transform this record into a meaningful, accessible information resource, available contemporaneously and retrospectively. We will validate our vision with two societally relevant applications: (1) providing memory aids as a personal prosthetic or behavioral monitor for the elderly; and (2) coordinating emergency response activity in disaster scenarios.
This project assumes that within ten years technology will be capable of creating a continuously recorded, digital, high fidelity record of a person's activities and observations in video form. This research will prototype personal experience capture units to record audio, video, location and sensory data, and electronic communications. Each constituent unit captures, manages, secures and associates information from its unique point of view. Each operates as a portable, interoperable, information system, allowing search and retrieval by both its human operator and remote collaborating systems. An individual cannot see everything, nor remember everything that was seen or heard. The integration of multiple points of view provides more comprehensive coverage of an event, especially when coupled with support for vastly improving the memory from each perspective. The research thus enables the following technological advances:
* Enhanced memory for individuals from an intelligent assistant using an automatically analyzed and fully indexed archive of captured personal experiences.
* Coordination of distributed group activity, such as management of an emergency response team in a disaster relief situation, utilizing multiple synchronized streams of incoming observation data to construct a "collective experience."
* Expertise synthesized across individuals and maintained over generations, retrieved and summarized on demand to enable example-based training and retrospective analysis.
* Understanding of privacy, security and other societal implications of ubiquitous experience collection.
The foundation for this work, the Informedia Digital Video Library, has demonstrated the successful application of speech, image, and natural language processing in automatically creating a rich, indexed, searchable multimedia information resource for broadcast-quality video. The proposed work builds from these technologies, moving well beyond a digital video library into new information spaces composed of unedited personal experience video augmented with additional sensory and position data. Tools will be created to analyze large amounts of continuously captured digital experience data in order to extract salient features, describe scenes and characterize events. The research will address summarization and collaboration of multiple simultaneous experiences integrated across time, space and people.
2002 — 2006
Kanade, Takeo; Messner, William
Integrated Modeling, Control, and Guidance For Full-Envelope Flight of Robotic Helicopters @ Carnegie-Mellon University
This project will develop an integrated framework for modeling, control, and guidance of robotic helicopters, enabling these vehicles to exploit their full operating capabilities to fly fast, precise, and reliable missions in a variety of operations in urban and remote environments. Potential applications include search and rescue, surveillance, law enforcement, inspection, aerial mapping, wildlife observation, and cinematography. The integrated framework consists of three interrelated activities: (1) the development of a modeling technique for high-fidelity low-order dynamics models, (2) the use of linear robust multivariable control theory (H_infinity loop shaping), gain scheduling, and high-fidelity models for the design and simulation of high-bandwidth full-flight-envelope controllers, and (3) the use of optimal feedforward methods (model predictive control) for the design of guidance systems that rely on the performance and robustness of the closed-loop helicopter dynamics. A key aspect of the project will be the flight-test validation and refinement of the framework on Carnegie Mellon's Yamaha R-50 and RMAX helicopters. Flight validation will include a complex mission in a known environment. The mission will consist of segments of standard maneuvers (e.g., hurdle-hop, dash/quick stop, coordinated turn, slalom, rearward flight, S-turn). The robotic helicopters will fly the missions in several different ways (e.g., for aggressiveness, precision, or fuel economy) according to the mission specification.
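Activity (3), optimal feedforward guidance, can be illustrated in miniature: given a linear model, solve directly for the input sequence that drives the state to a target in a fixed number of steps. The double-integrator model below is a stand-in assumption, not the project's high-fidelity helicopter dynamics:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # position/velocity model, dt = 0.1 s
B = np.array([[0.005], [0.1]])

def two_step_plan(x0, x_ref):
    """Solve x_ref = A^2 x0 + A B u0 + B u1 for the inputs [u0, u1]."""
    M = np.hstack([A @ B, B])             # input-response (controllability) matrix
    return np.linalg.solve(M, x_ref - A @ A @ x0)

x = np.array([0.0, 0.0])
u = two_step_plan(x, x_ref=np.array([1.0, 0.0]))
for uk in u:                              # apply the planned inputs open loop
    x = A @ x + (B * uk).ravel()
# x now equals the target [1.0, 0.0] up to floating-point error
```

Model predictive control repeats this kind of finite-horizon optimization at every step, re-planning from the measured state, which is why it depends on the robust closed-loop dynamics that activities (1) and (2) provide.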
2002 — 2007
Bharucha, Ashok; Kanade, Takeo; Stevens, Scott; Wactlar, Howard; Hauptmann, Alexander (co-PI)
Itr: Caremedia: Automated Video and Sensor Analysis For Geriatric Care @ Carnegie-Mellon University
CareMedia provides automated video and sensor analysis for geriatric care. Through activity and environmental monitoring in a skilled nursing facility, a continuous, voluminous audio and video record is captured. Through work in information extraction, behavior analysis and synthesis, this record is transformed into an information asset whose efficient, secure presentation empowers geriatric care specialists with greater insights into problems, effectiveness of treatments, and determination of environmental and social influences. CareMedia allows the behavior of senile dementia patients to be more accurately interpreted through intelligent browsing tools and filtered audiovisual evidence, leading to treatment that reduces agitation while allowing awareness and responsiveness. The research begins with disruptive vocalization, a particular behavior noted across senile dementia assessment scales. The coverage is then broadened ambitiously to integrate sensor and visual data for behavioral analysis and summarization in support of OBRA regulations requiring behavior management strategies that are not just chemical restraints. This effort includes automatic techniques to recognize disruptive vocalizations, more complex behavioral occurrences such as falls or physical aggression, and circadian patterns of activity. This research builds on key Carnegie Mellon research efforts in digital video analysis, wearable mobile computers, computer-based vision systems, and information retrieval systems for multimedia metadata.
2003 — 2009
Thrun, Sebastian (co-PI); Simpson, Richard; Kanade, Takeo; Cooper, Rory; Atkeson, Christopher
Igert: Interdisciplinary Research Training in Assistive Technology @ Carnegie-Mellon University
This IGERT will support PhD students at Carnegie Mellon University (CMU) and at the University of Pittsburgh (Pitt) pursuing training and research in assistive technology. The focus of this IGERT is to support training and research that provides both a deep understanding of human needs and of what technology can do to meet those needs. This IGERT brings together a number of research institutes in CMU's School of Computer Science (Robotics, Human-Computer Interaction, Language Technology, and the Center for Automated Learning and Discovery) and departments within Pitt (Rehabilitation Science and Technology, Occupational Therapy, Physical Therapy, Nursing, Physical Medicine and Rehabilitation, Bioengineering, and Communication Science and Disorders). A key feature of this IGERT is that, after appropriate course work and training, each student will (1a) engage in a full-time clinical internship program for at least one semester or summer, or (1b) produce a conference-quality paper describing a clinical study the student performed, and (2) produce a conference-quality paper describing the design, implementation, assessment, and/or refinement of an assistive technology. This requirement ensures that technological students have a substantial clinical experience, and clinical students a substantial technical experience. Additional cross-over clinical and technical experiences will be encouraged.
The intellectual merit of the proposed activity includes getting technical and clinical departments to talk to each other, understand each other's thinking, and create a truly joint educational program. The IGERT program ensures that students gain exposure to basic technological research as well as to the translation of research into clinical applications. Collaboration with Pitt will provide opportunities for CMU students and faculty to work with real clients with real problems in real contexts, rather than the usual second- or third-hand problem descriptions isolated from context. Collaboration with CMU will provide opportunities for Pitt students and faculty to gain exposure to state-of-the-art technology, and to a wider range of students and faculty who could serve as sources of expertise and as collaborators. This IGERT will facilitate communication and understanding among rehabilitation and technology disciplines by bringing together students and faculty working on assistive technology in several different CMU institutes and Pitt departments. Participants in this IGERT will work to develop online courses and open-source software to make our learning resources accessible worldwide.
The broader impact of increased research and training in assistive technology is to improve the lives of people with disabilities, the elderly, and children with developmental disorders, and ultimately to help make everyone more perceptive, smarter, and more capable. Our definition of assistive technology is quite broad, and thus we expect to have a wide impact. There are huge needs and opportunities for assistive technology. Each year the number and percentage of senior citizens in our society increases. Many need assistance to live independently as long as possible. Nursing-home care can be improved in many ways with assistive technology. The number of diagnoses of developmental disorders in children is increasing, and technology can assist these children to develop and participate in our society more fully. The use of technology to reduce the effect of disabilities in perception, reasoning, memory, and movement is rapidly increasing. The Americans with Disabilities Act mandates greater integration of people with disabilities, and rehabilitation engineering and assistive technology will need to play a substantial role in order for that goal to be reached. Central to our research and education is outreach to individuals and groups with needs or disabilities that technology can help. Pitt's Department of Rehabilitation Science and Technology (RST) has one of the highest concentrations of people with disabilities as faculty, staff, and students of any academic program in the world. In addition, we will model our diversity and outreach programs on the successful recruitment of women by Pitt's RST and CMU's School of Computer Science.
IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. The program is intended to catalyze a cultural change in graduate education by establishing innovative new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries. In this sixth year of the program, awards are being made to institutions for programs that collectively span the areas of science and engineering supported by NSF.
2003 — 2005
Kanade, Takeo; Osborn, James
Iarp Workshop On Medical Robotics, Hidden Valley, Pa, November 5-8, 2003 @ Carnegie-Mellon University
Award 0334863: A workshop, Medical Robotics at the Cutting Edge, organized and hosted by Carnegie Mellon's Medical Robotics Technology Center on behalf of the International Advanced Robotics Project (IARP), is to be held at The Conference Center, Hidden Valley, PA, near Pittsburgh, November 5-8, 2003. The objectives of the workshop are multi-fold: 1) assess and benchmark the present state of the art and state of practice, 2) forecast the future, 3) identify technological barriers and gaps between present and future, and 4) capture and summarize workshop findings for the benefit of IARP member organizations and other stakeholders. In meeting these objectives, the technical program gives special emphasis to the following topics: 1) Surgical Robots (telesurgery; active surgical robots; semi-active surgical robots), 2) Computer-Assisted and Image-Guided Surgery (surgical navigation technologies; image guidance technologies), and 3) Rehabilitation Robotics and Robotic Prosthetics (robotic exercisers; motion-assist devices; prosthetics).
The workshop proceedings include a final report, presentation slides and movies, talk abstracts, selected background materials, and other media suggested by the participants. The proceedings will be disseminated via the Web (http://www.ri.cmu.edu), hardcopy, and CDs. Each attendee will receive both a hard copy of the report and a CD version; each IARP country representative will receive 50 copies of the CD and a master copy of the printed report.
2003 — 2007
Kanade, Takeo
Algorithm: Scalable Algorithms For Regularized Tomography Via Decoupling @ Carnegie-Mellon University
X-ray computerized tomography (CT) and related imaging modalities (e.g., PET) are notorious for their excessive computational demands. While early CT algorithms such as filtered backprojection are now trivial in two dimensions and scalable in three dimensions, the more noise-resistant probabilistic methods such as regularized tomography remain prohibitive.
The basic idea of regularization is to compute a smooth image whose simulated projections (line integrals) approximate the observed (but noisy) X-ray projections. The computational expense in previous methods stems from explicitly applying a large sparse projection matrix (to compute line integrals of the image) and its transpose to enforce these smoothness and data approximation constraints during each of many iterations of the algorithm.
We propose to study a new formulation of regularized tomography in which the smoothness constraint is analytically transformed from the image to the projection domain, before any computations begin. As a result, iterations take place entirely in the projection domain, avoiding the repeated sparse matrix-vector products. A more surprising benefit is the decoupling of a large system of regularization equations into many small systems of simpler equations. The computation thus becomes "embarrassingly parallel", so that latency-tolerant and ideally scalable parallel computations are possible, as our preliminary 2-D results show. We propose to apply this technique to modalities other than CT, to implement it in three dimensions, and to embellish the probability models. Further, the network-friendly nature of this method will allow us to study the feasibility of harnessing the increasingly wasted desktop compute power in a typical hospital. We see decoupled regularization as an exciting development in tomography, benefiting society by providing images to doctors, patients, and scientists with fewer artifacts, at higher resolutions, and with greater interactivity.
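The regularization objective at the heart of the proposal can be shown on a toy dense system. This is illustrative only: the matrix sizes and the first-difference smoothness penalty are assumptions, and the proposal's contribution is precisely to avoid this kind of explicit matrix solve by moving the constraint into the projection domain.

```python
import numpy as np

def regularized_recon(A, b, L, lam):
    """Minimize ||A x - b||^2 + lam * ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

rng = np.random.default_rng(1)
n = 8
A = rng.random((12, n))                        # stand-in projection matrix
x_true = np.sin(np.linspace(0.0, np.pi, n))    # a smooth "image"
b = A @ x_true + 0.01 * rng.normal(size=12)    # noisy simulated projections

# First-difference operator as the smoothness penalty on neighboring pixels.
D = (np.eye(n) - np.eye(n, k=1))[:-1]
x_hat = regularized_recon(A, b, D, lam=0.1)
```

Previous methods pay for the explicit `A.T @ A` products at every iteration of a large sparse system; the decoupled formulation replaces that single coupled solve with many small independent systems, one per projection.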
2006 — 2015
Kanade, Takeo; Siewiorek, Daniel
Quality of Life Technology Engineering Research Center @ Carnegie-Mellon University
Lead Institution: Carnegie Mellon University; Principal Investigator: Takeo Kanade Core Partner Institution: University of Pittsburgh; Co-Principal Investigator: Rory A. Cooper Affiliated Outreach Institutions: Florida/Georgia LSAMP, Howard Univ., Lincoln Univ.
The Quality of Life Technologies Engineering Research Center (QoLT ERC) will transform lives in a large and growing segment of the population - people with reduced functional capabilities due to aging or disability. Future compassionate intelligent QoLT systems, either individual devices or technology-embedded environments, will monitor and communicate with a person, understand daily needs and tasks, and provide reliable and happily-accepted assistance by compensating and substituting for diminished capabilities.
Intellectual Merit: The QoLT ERC will create the scientific and engineering knowledge base that enables systematic development of human-centered intelligent systems that co-exist and co-work with people, particularly people with impairments. These QoLT systems may be an individual device that a person carries or wears, a mobility system that a person rides or that accompanies the person, an environment that is instrumented, or a combination of these. The QoLT ERC research will build upon recent advances in intelligent system technologies, including machine perception, robotics, learning, communication, and miniaturization, many of which have to date been created and applied mainly in industry, the military, and entertainment. The QoLT ERC will transform these advances and develop new technologies for perceiving, reasoning with, and affecting people, improving their lives. Many previous attempts to use sophisticated technology to enhance function for people with disabilities failed. One reason for those failures was a limited understanding of the human with a disability and a lack of tight integration of technical and clinical expertise with users' needs. The QoLT ERC will overcome these barriers through the partnership of Carnegie Mellon and the University of Pittsburgh in four thrust areas: Monitoring & Modeling, Mobility & Manipulation, Human-System Interface, and Person & Society, and by working closely with user groups throughout the design, development, test, and deployment phases. The team has technical strengths in intelligent systems, rehabilitation engineering, and related clinical areas, and ample access to real-world testbeds.
Broader Impacts: The technologies that the QoLT ERC develops will enable people with disabilities to independently perform activities of daily living. By restoring and preserving independence, they can pursue individual goals and more fully participate in society. Having more people gainfully employed, and reducing the need for or delaying the onset of institutionalization, will have a substantial impact on the national economy. The QoLT ERC will expand the pools of talented students on two fronts: the pool of engineering students, with substantial clinical and socio-economic training and experiences that will motivate them to create technologies for quality of life; and the pool of clinically-oriented students, with engineering training and experiences that will help them understand how best to integrate technology into their practices. It will also teach students how to collaborate effectively, one of the most recognized and yet difficult-to-overcome challenges in the development and implementation of systems for people's use. The fact that our ERC team includes a significant number of women faculty and faculty with disabilities will have a major impact on diversity. They serve as role models, and encourage extensive participation of women and people with disabilities in the ERC as faculty, students, advisors, and clients. The membership of the QoLT ERC industry consortium includes a wide spectrum of companies pertaining to all aspects of daily life: medical devices, assistive technology, information technology, consumer electronics, healthcare, and insurance. The QoLT ERC will catalyze a large and technologically sophisticated industry sector that ultimately will help all of us to function more capably, perceptively, and intelligently.
2009 — 2012
Kanade, Takeo; Sheikh, Yaser (co-PI)
Ii-En the Human Virtualization Studio: From Distributed Sensor to Interactive Audiovisual Environment @ Carnegie-Mellon University
The Virtualization Studio spearheads research to reconstruct, record, and render dynamic events in 3D. The studio creates a "full-body" interactive environment where multiple users are simultaneously given a visceral sense of three-dimensional space, through vision and sound, and are able to interact, through action and speech, unencumbered by 3D glasses, head-mounted displays, or special clothing. The studio pursues the thesis that robust sensors for hard problems, in this case audiovisual reconstruction of highly dynamic multiple actors/speakers, can be constructed by using a large number of sensors running simple, parallelized algorithms. High-fidelity reconstructions are created using a grid of 1132 cameras, and a 128-node multi-speaker microphone array is used to localize and associate multiple sound sources in the event space. In addition, a multi-viewer lenticular display screen, consisting of 48 projectors, and a front surround-sound speaker are used to render interactive environments. The reconstruction algorithms are parallelized, and a cluster is used to process the data and respond to behaviors in the event space in real time.
Audiovisual reconstruction and rendering of scenes containing multiple users will revolutionize research into collaborative interfaces, and will allow digital preservation of culturally significant events, like theatrical performances, sports events, and key speeches. In addition to these core research objectives, the Virtualization Studio will act as a gathering place for multidisciplinary research, bringing together researchers from interactive art, human behavior analysis, computer graphics, computer vision, psychology, big data research, and speech processing. The infrastructure will be used to develop a new course on Human Virtualization, and will be used as a pedagogical tool in several existing courses and outreach projects, introducing the next generation of students to the power of interdisciplinary research in computer science.
2010 — 2016
Dey, Anind; Sheikh, Yaser (co-PI); Kanade, Takeo
Collaborative Research: Computational Behavioral Science: Modeling, Analysis, and Visualization of Social and Communicative Behavior @ Carnegie-Mellon University
Lead PI/Institution: James M. Rehg, Georgia Institute of Technology. This Expedition will develop novel computational methods for measuring and analyzing the behavior of children and adults during face-to-face social interactions. Social behavior plays a key role in the acquisition of social and communicative skills during childhood. Children with developmental disorders, such as autism, face great challenges in acquiring these skills, resulting in substantial lifetime risks. Current best practices for evaluating behavior and assessing risk are based on direct observation by highly-trained specialists, and cannot be easily scaled to the large number of individuals who need evaluation and treatment. For example, autism affects 1 in 110 children in the U.S., with a lifetime cost of care of $3.2 million per person. By developing methods to automatically collect fine-grained behavioral data, this project will enable large-scale objective screening and more effective delivery and assessment of therapy. Going beyond the treatment of disorders, this technology will make it possible to automatically measure behavior over long periods of time for large numbers of individuals in a wide range of settings. Many disciplines, such as education, advertising, and customer relations, could benefit from a quantitative, data-driven approach to behavioral analysis. Human behavior is inherently multi-modal, and individuals use eye gaze, hand gestures, facial expressions, body posture, and tone of voice along with speech to convey engagement and regulate social interactions. This project will develop multiple sensing technologies, including vision, speech, and wearable sensors, to obtain a comprehensive, integrated portrait of expressed behavior.
Cameras and microphones provide an inexpensive, noninvasive means for measuring eye, face, and body movements along with speech and nonspeech utterances. Wearable sensors can measure physiological variables such as heart-rate and skin conductivity, which contain important cues about levels of internal stress and arousal that are linked to expressed behavior. This project is developing unique capabilities for synchronizing multiple sensor streams, correlating these streams to measure behavioral variables such as affect and attention, and modeling extended interactions between two or more individuals. In addition, novel behavior visualization methods are being developed to enable real-time decision support for interventions and the effective use of repositories of behavioral data. Methods are also under development for reflecting the capture and analysis process to users of the technology. The long-term goal of this project is the creation of a new scientific discipline of computational behavioral science, which draws equally from computer science and psychology in order to transform the study of human behavior. A comprehensive education plan supports this goal through the creation of an interdisciplinary summer school for young researchers and the development of new courses in computational behavior. Outreach activities include significant and on-going collaborations with major autism research centers in Atlanta, Boston, Pittsburgh, Urbana-Champaign, and Los Angeles.