1998 — 2002
Large, Edward
Activity Code: N/A
Collaborative Research: a Dynamical Approach to Attending @ Florida Atlantic University
What happens when we pay attention to a complex, dynamically changing sound such as a sentence or a symphony? What makes some music and some speech easier to attend to than others? Previous research has shown that temporal structure (i.e., rhythm) is an important determinant of the ability to attend to auditory events. This collaborative research project addresses the question of why this is so. Briefly, our theory states that rhythm is an important determinant of attention because attending is a fundamentally rhythmic process. More specifically, attentional oscillations entrain to the rhythms of complex events, allowing listeners to predict when important events will occur. Thus, rhythms whose structures facilitate entrainment should be easy to attend to, while events with irregular rhythms should be more difficult to process. The current research project develops this approach along two fronts. First, we are developing mathematical models and computer simulations of auditory attending that make specific predictions about responses to different types of events. Second, we are investigating the response of listeners to the same auditory events, allowing us to evaluate the model's predictions. In particular, we test how people attentionally follow simple auditory patterns that contain unexpected changes in timing. Analyses of these behaviors, together with our mathematical models, will allow us to determine whether or not internal oscillations underlie the ability to attend to auditory events. If so, we expect to be able to say, with mathematical precision, a great deal about how attending in time works.
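As a concrete illustration of the entrainment idea (a minimal sketch, not the project's actual model): an oscillator that adjusts its phase and period at each onset will, after an unexpected timing change, show a transient phase error that decays back toward zero. The sinusoidal coupling function and the rates eta_phi and eta_p below are illustrative assumptions.

```python
# Minimal sketch of rhythmic entrainment (illustrative, not the project's model):
# an oscillator whose phase and period adapt at each event onset.
import numpy as np

def entrain(onsets, period0=0.6, eta_phi=0.9, eta_p=0.3):
    """Track a rhythm with an adaptive oscillator.

    onsets  : 1-D array of event onset times (seconds)
    period0 : initial oscillator period (seconds)
    Returns the relative phase (in cycles, wrapped to [-0.5, 0.5)) observed at
    each onset after the first, and the adapted period after each onset.
    """
    phase, period = 0.0, period0
    phases, periods = [], []
    for i in range(1, len(onsets)):
        ioi = onsets[i] - onsets[i - 1]                    # inter-onset interval
        phase = (phase + ioi / period + 0.5) % 1.0 - 0.5   # phase at this onset
        phases.append(phase)
        coupling = np.sin(2 * np.pi * phase) / (2 * np.pi)
        phase -= eta_phi * coupling                        # pull phase toward the onset
        period *= 1.0 + eta_p * coupling                   # slow down or speed up
        periods.append(period)
    return np.array(phases), np.array(periods)

# Demo: an isochronous sequence at 600 ms with one onset arriving 60 ms late.
onsets = np.arange(12) * 0.6
onsets[6] += 0.06
ph, pd = entrain(onsets)
print(np.round(ph, 3))   # phase error jumps at the perturbation, then decays toward zero
```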
2001 — 2007
Large, Edward
Activity Code: N/A
Career: Dynamics of Audition: Rhythm, Music, and Attending @ Florida Atlantic University
What does the neural representation of a complex, temporally structured auditory event look like? When we listen to a symphony or a speech, what neural processes allow us to maintain a stable attentional focus? How do the requisite auditory representations form, how do they adapt to unexpected nuances, and how do they reorganize to accommodate structural change? This research will test a theory of auditory perception and attention that focuses on complex, temporally structured events, such as speech and music. The theory holds that the mental representation of an auditory event is a self-organized, dynamic structure whose neural correlate is a spatiotemporal pattern of neural activity. The primary function of this hypothesized spatiotemporal structure is attentional: it enables anticipation of future events and thus the targeting of perception and the coordination of action with external events. The stability and flexibility properties of attention in this theory both arise through nonlinearities in the underlying pattern-forming dynamics. Furthermore, the hypothesized dynamic representations also function in auditory communication. It is known empirically that transient stimulus fluctuations, such as intonation and rate changes observed in both speech and musical performance, communicate intention, emotion, and structural information. The theory holds that these communicative gestures are recognized as deviations from temporal expectations embodied in the attentional structure.
This theory explains how people maintain a stable attentional focus over temporally extended events while adapting flexibly to transient temporal fluctuations. It provides mathematical models of dynamic structural representation, using the tools of nonlinear dynamical systems. It makes predictions about neural correlates of auditory representation, attention, and communication. Finally, it applies to complex, temporally structured event sequences, explaining how people respond to the auditory complexity of the real world.
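As a hedged illustration of this kind of nonlinear dynamics (an assumed stand-in, not the specific model developed in this project): a limit-cycle oscillator in Hopf normal form, periodically driven with a small detuning, settles onto a stable amplitude and a near-constant relative phase, capturing in miniature the stability-with-flexibility attributed here to attentional dynamics. All parameter values below are arbitrary demo choices.

```python
# Illustrative stand-in (not the project's specific model): a limit-cycle oscillator
# in Hopf normal form, driven by a periodic stimulus with a small detuning.
import numpy as np

def forced_hopf(alpha=1.0, beta=-1.0, f_osc=2.0, f_stim=2.02, force=0.3,
                dt=0.001, t_end=40.0):
    """Euler-integrate dz/dt = z*(alpha + i*2*pi*f_osc + beta*|z|^2) + force*exp(i*2*pi*f_stim*t)."""
    n = int(t_end / dt)
    z = 0.1 + 0.0j
    out = np.empty(n, dtype=complex)
    for k in range(n):
        t = k * dt
        dz = (z * (alpha + 1j * 2 * np.pi * f_osc + beta * abs(z) ** 2)
              + force * np.exp(1j * 2 * np.pi * f_stim * t))
        z += dt * dz
        out[k] = z
    return out

z = forced_hopf()
t = np.arange(len(z)) * 0.001
rel_phase = np.angle(z * np.exp(-1j * 2 * np.pi * 2.02 * t))
print(np.round(rel_phase[::5000], 2))   # after a transient, the relative phase is nearly constant
```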
The experiments will use music, as well as simpler music-like sequences, to model temporally extended events. Theoretical predictions will be tested using behavioral and neuroimaging techniques. Behavioral experiments will assess predictions in four areas: 1) formation and stability of structural representations, 2) real-time tracking of temporally structured sequences, 3) the role of rhythm in attention, and 4) the role of expectancy in auditory communication. Neuroimaging techniques (EEG, MEG) will measure temporal and spatial aspects of neural function in auditory perception and attention, to further assess theoretical predictions. Modifications to the theory will be based on comparison of experimental results with predictions of computer simulations, and extensions to the general theory will be developed.
This research will advance our basic understanding of auditory perception and attention by enhancing our knowledge of the role of structure in perceiving and attending to complex events. The results have potentially wide applicability, from the development of more robust computer algorithms for speech recognition and music processing to a deeper clinical understanding of recovery from neural trauma, such as aphasic stroke.
2006
Large, Edward W.
Activity Code: T32 (to enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas)
Training Program: Complex Systems and Brain Sciences @ Florida Atlantic University
DESCRIPTION (provided by applicant): This application seeks continued support for The PhD Program in Complex Systems and Brain Sciences initiated in 1995 under the joint auspices of NIMH and Florida Atlantic University. The need for such a program was recognized by a previous review and may be paraphrased as follows: "As neuroscientists learn more and more about the subcellular and biophysical properties of the nervous system, the complex systems approach seeks to characterize the behavior of integrated units, including neural networks, intact and isolated portions of the spinal cord, whole brain functioning and behavior. The approach is truly interdisciplinary... [and] the time is ripe for a fully integrated Training Program in this area." Among the reasons why this Training Program warrants continued support are the following developments that have occurred in the last budget period: 1) The number of well-qualified applicants has grown, and far exceeds the number of fellowships available (a ratio of about 10:1); 2) The number of outstanding Core Faculty recruited specifically for this PhD Program has increased by four in the last 3 years alone, enhancing both the depth and breadth of the curriculum; 3) The level of research productivity of past and current fellows is outstanding, and reflects an unusual degree of collaboration between both faculty and students, many of whom come originally from different disciplines; 4) Ten PhDs have been awarded, with two more scheduled to finish before September 2001; 5) Graduates of this Program now occupy excellent positions in universities and other institutions, attesting to the broad range of skills and high level of training provided; 6) A new 20,000 sq. ft. facility to house the Training Program has been constructed that, due to a special collaboration with the private and corporate sector, includes capabilities for cutting-edge, real-time imaging of the human brain (fMRI); 7) The level of University support for this Program (the only one of its kind on campus) is considerable, and includes tuition waivers for both in-state and out-of-state fellows, and the provision of at least a matching number of assistantships; and 8) The assignment of a new endowed Chair in Complex Systems and Brain Sciences to this Training Program, which, along with commitments to make a number of junior hires, will further enhance training opportunities for pre- and postdoctoral fellows.
2010 — 2014
Large, Edward
Activity Code: N/A
Neurodynamics of Tonality @ Florida Atlantic University
Music is a high-level cognitive capacity, a form of communication that relies on highly structured temporal sequences comparable in complexity to language. Music is found among all human cultures, and musical "languages" vary among cultures and depend upon learning. For example, European melodies use different kinds of note combinations than Indian melodies, making it difficult for Westerners to understand Indian music, and vice versa. Unlike language, however, music rarely refers to the external world. It consists of self-contained patterns of sound, aspects of which are found universally among musical cultures. Therefore, while an understanding of the brain processes underlying language is still a distant goal, discovering the general principles of neural dynamics that underlie music may now be possible. Tonality refers to the stability relationships that are perceived among notes in a musical language. Although there are different kinds of tonality, tonality itself is a universal feature of music, found in virtually every musical language. The hypothesis of this research is that neural oscillation underlies tonal cognition and perception. Neural oscillation is periodic neural activity that, in the auditory system, becomes time-locked to incoming sounds. Neural oscillations can be complex, but there are now powerful mathematical tools for analyzing them. Mathematical analyses of time-locking auditory dynamics suggest constraints on what sorts of tonal relationships should be possible. They predict that fundamental principles of neural dynamics combined with fundamental principles of neural plasticity constrain what musical languages can be learned.
To make detailed predictions, a sophisticated computer model of the auditory system will be built, based on the organization of the auditory system and general neurodynamic principles. Two simulations will be trained through passive exposure to European and North Indian melodies. These two are chosen because they represent two very different musical languages that are each relatively well-studied. The computer model will be used to predict neurophysiological and perceptual observations about music perception that have been collected over the past thirty-five years or so. Success of this model would imply the existence of a musical universal grammar. Universals predicted by intrinsic neurodynamics would provide a direct link to neurophysiology, and explain how brain changes during learning can establish different musical languages. This could lead to fundamental paradigm shifts in music theory, music cognition and related fields. The success of this model would be equally influential in cognitive neuroscience. It would imply that high-level cognition and perception can arise from the interaction of acoustic signals with the physics of the auditory system. No neurodynamic approach has ever successfully captured such a high-level cognitive capacity. Researchers are currently struggling with the question of how to reconcile cognitive theories with neurodynamic principles and observations, and success in the musical domain could lead to new insights. This research will elucidate fundamental mechanisms of hearing and communication, and holds significant promise for understanding auditory system development. Identification of innate constraints shaping human communication behavior may have further implications for language learning. This research has implications for understanding a wide range of hearing and communication disorders. It has potential applicability to improving the design of neural prostheses and to enhancing the perception of music and other sounds in cochlear implant patients.
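To make the resonance idea concrete (a toy sketch under assumed parameters, not the proposed model itself): a small gradient-frequency bank of Hopf-type oscillators driven by two tones a perfect fifth apart (a 2:3 frequency ratio) responds most strongly in the oscillators tuned near the stimulus components. The model described above would go further, adding higher-order coupling so that oscillators at harmonically related frequencies also respond, and plasticity so that passive exposure to melodies shapes the connections among them.

```python
# Toy gradient-frequency oscillator bank (illustrative parameters, fixed-step RK4).
import numpy as np

F1, F2 = 220.0, 330.0                   # stimulus partials in Hz, a 2:3 ratio
ALPHA, BETA, FORCE = -1.0, -100.0, 0.5  # damping, amplitude compression, input gain (assumed)

def stimulus(t):
    """Two-tone input signal."""
    return np.sin(2 * np.pi * F1 * t) + np.sin(2 * np.pi * F2 * t)

def rhs(t, z, omega):
    """dz/dt for each oscillator: z*(ALPHA + i*omega + BETA*|z|^2) + FORCE*s(t)."""
    return z * (ALPHA + 1j * omega + BETA * np.abs(z) ** 2) + FORCE * stimulus(t)

def run_bank(freqs, dur=2.0, fs=20000.0):
    """Drive oscillators with natural frequencies `freqs` (Hz); return final amplitudes."""
    omega = 2 * np.pi * np.asarray(freqs, dtype=float)
    dt = 1.0 / fs
    z = np.full(len(freqs), 1e-3 + 0j)
    for k in range(int(dur * fs)):
        t = k * dt
        k1 = rhs(t, z, omega)
        k2 = rhs(t + dt / 2, z + dt / 2 * k1, omega)
        k3 = rhs(t + dt / 2, z + dt / 2 * k2, omega)
        k4 = rhs(t + dt, z + dt * k3, omega)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.abs(z)

freqs = np.arange(180, 401, 5)          # natural frequencies spanning both tones
amps = run_bank(freqs)
print(freqs[np.argsort(amps)[-4:]])     # the strongest responses cluster near 220 and 330 Hz
```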