
Mounya Elhilali - US grants
Affiliations: Johns Hopkins University, Baltimore, MD
Area: Auditory system

We are testing a new system for linking grants to scientists.

The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Mounya Elhilali is the likely recipient of the following grants (an illustrative sketch of one way such a matching score might be computed appears after the table).

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2009 — 2015 | Elhilali, Mounya | N/A (no activity code was retrieved) |
CAREER: Cognitive Auditory Systems For Processing of Complex Acoustic Scenes @ Johns Hopkins University Performance of hearing systems and speech technologies can benefit greatly from a deeper appreciation and knowledge of how the brain processes and perceives sounds. While most current systems invoke operations akin to the peripheral auditory system, they stop shy of incorporating promising capabilities of the central auditory system, most importantly its ability to adapt to the demands of an ever-changing acoustic environment. Recent physiological findings are amending existing dogmas of processing in auditory cortex, replacing conventional views of "static" processing in sensory cortex with a more "active" and malleable mapping that rapidly adapts to behavioral tasks and listening conditions. Hence, a new architecture for sound processing based on cognitive and adaptive processes promises to open a revolutionary frontier for hearing and speech technologies. |
1 |
2010 | Elhilali, Mounya | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies) |
Overcoming the Cocktail Party Problem: a Multi-Scale Perspective On the Neurobio @ Johns Hopkins University DESCRIPTION (provided by applicant): A Multi-scale Perspective on the Neurobiology of Auditory Scene Analysis A. Aims and Significance Despite the enormous advances in computing technology over the last decades, there are still many tasks that are easy for a child, yet difficult for advanced computer systems. A particular challenge to most existing systems is dealing with complex acoustic environments, background noises and competing talkers: A challenge often experienced in cocktail parties (Cherry, 1953) and formally referred to as auditory scene analysis (Bregman, 1990). Progress in this field has tremendous implications and long-term benefits covering the medical, industrial, military and robotics domains; as well as improving communication aids (hearing aids, cochlear implants, speech-based human-computer interfaces) for the sensory-impaired and aging brains. Despite its importance for both engineering and perceptual sciences, the study of the neural underpinnings of auditory scene analysis remains in its infancy. This field is particularly challenged by the lack of integrative theories which incorporate our knowledge of the perceptual bases of scene analysis with the neural mechanisms along various stages of the auditory pathway. Because of the nature of the problem, the neural circuitry at play is intricate and multi-scale by design. The objective of the proposed research is to provide a systems view to modeling scene analysis which integrates mechanisms at the single neuron level, population level and across area interactions. The intellectual merit of the proposed theory is to elucidate the specific mechanisms and computational rules at play; facilitate its integration in engineering systems and enable generating novel testable predictions. The proposal investigates the key hypothesis that attention to a feature of a complex sound instantiates all elements that are coherent with this feature, thus binding them together as one perceptual "object" or stream. This "binding hypothesis" requires three scales of analyses: a micro-level mapping of complex sounds into a multidimensional cortical feature representation; a meso-level coherence analysis correlating activity in populations of cortical neurons; and macro-level feedback processes of attention and expectations that mediate auditory object formation. We shall formulate this hypothesis within a multi-scale computational framework that provides a unified theory for the neural underpinnings of auditory scene analysis. The three core research aims of this project explore all facets of this model employing computational and physiological approaches: Aim I. A multi-scale coherence model: The main goal is to formulate the "binding hypothesis" as a unified biologically plausible theory of auditory streaming, integrating multi-scale sensory with cognitive cortical mechanisms. This computational effort will incorporate findings from experiments in Aims II and III, generate testable predictions, as well as provide effective algorithmic implementations to tackle the "cocktail party problem" in biomedical applications; Aim II.
Physiological investigations of the multi-scale coherence theory: Our aim is to use an animal model to record single-unit (micro-level, meso-level) and across area (macro-level) physiological activity in both primary auditory and prefrontal cortex, while presenting sufficiently complex acoustic environments so as to test and refine the computational model; Aim III. Refinement of the coherence theory with physiological and perceptual testing in humans: The objective is to directly test predictions from the model in human subjects, using magnetoencephalography (MEG) and psychoacoustic experiments. We shall particularly focus on the role of cortical mechanisms in scene analysis in normal and aging brains. The proposed research draws upon the expertise of a cross-disciplinary team integrating neurobiology and engineering. It is unique in that it is the first effort to postulate a role for coherence in the scene analysis problem, and to investigate the "binding hypothesis" integrating cortical and attention mechanisms in auditory streaming experiments. In addition, by testing the theory directly on human subjects and comparing normal and aging brains (known to face perceptual difficulties in cocktail party settings), we hope to better understand the neural underpinnings of scene analysis under their normal and malfunctioning states, hence enhancing the translational potential of the model. The broader impact of this effort is to provide versatile and tractable models of auditory stream segregation, significantly facilitating the integration of such capabilities in engineering systems. PUBLIC HEALTH RELEVANCE: A Multi-scale Perspective on the Neurobiology of Auditory Scene Analysis Project Relevance: The question of how complex acoustic scenes are parsed by the auditory system into auditory objects and streams is one of the most fundamental questions in perceptual science. Despite its importance, the study of its underlying neural mechanisms remains in its infancy. We believe that significant progress in this area can be achieved by combining sophisticated computational modeling and psychophysical techniques with recently available methods for neural recording from awake behaving animals in interdisciplinary efforts, such as the one described in this proposal. In addition, by testing the theory directly on human subjects and comparing normal and aging brains (known to face perceptual difficulties in cocktail party settings), we hope to better understand the neural underpinnings of scene analysis under their normal and malfunctioning states, hence enhancing the translational potential of the model. The broader impact of this effort is to provide versatile and tractable models of auditory stream segregation, significantly facilitating the integration of such capabilities in engineering systems; as well as improving communication aids (hearing aids, cochlear implants, speech-based human-computer interfaces) for the sensory-impaired and aging brains. |
1 |
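
The "binding hypothesis" in the grant above centers on a meso-level coherence analysis that correlates activity across populations of neural channels. Purely as an illustrative sketch, not the model proposed in the grant, the Python below computes pairwise temporal coherence between simulated channel envelopes and groups channels whose activity co-varies; the correlation measure, the greedy grouping, and the 0.7 threshold are all assumptions chosen for demonstration.

```python
import numpy as np

def coherence_matrix(envelopes):
    """Pairwise correlation of channel envelopes over time.

    envelopes: (n_channels, n_samples) array of slowly varying
    activity traces (e.g. band-passed neural or acoustic envelopes).
    """
    z = envelopes - envelopes.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-12
    return (z @ z.T) / envelopes.shape[1]

def group_coherent_channels(coh, threshold=0.7):
    """Greedily cluster channels whose pairwise coherence exceeds a
    threshold -- a toy stand-in for binding features into streams."""
    unassigned, groups = set(range(coh.shape[0])), []
    while unassigned:
        seed = unassigned.pop()
        group = {seed} | {j for j in unassigned if coh[seed, j] > threshold}
        unassigned -= group
        groups.append(sorted(group))
    return groups

# Toy demo: two channels share a slow envelope, a third is independent.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)
shared = np.abs(np.sin(2 * np.pi * 4 * t))
env = np.stack([shared + 0.1 * rng.standard_normal(t.size),
                shared + 0.1 * rng.standard_normal(t.size),
                rng.random(t.size)])
print(group_coherent_channels(coherence_matrix(env)))  # e.g. [[0, 1], [2]]
```

In the actual proposal, coherence is hypothesized over multidimensional cortical feature maps and gated by attention; this toy version captures only the correlate-and-group step.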
2011 — 2014 | Elhilali, Mounya | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies) |
Cocktail Party Problem: Perspective On Neurobiology of Auditory Scene Analysis @ Johns Hopkins University DESCRIPTION (provided by applicant): A Multi-scale Perspective on the Neurobiology of Auditory Scene Analysis A. Aims and Significance Despite the enormous advances in computing technology over the last decades, there are still many tasks that are easy for a child, yet difficult for advanced computer systems. A particular challenge to most existing systems is dealing with complex acoustic environments, background noises and competing talkers: A challenge often experienced in cocktail parties (Cherry, 1953) and formally referred to as auditory scene analysis (Bregman, 1990). Progress in this field has tremendous implications and long-term benefits covering the medical, industrial, military and robotics domains; as well as improving communication aids (hearing aids, cochlear implants, speech-based human-computer interfaces) for the sensory-impaired and aging brains. Despite its importance for both engineering and perceptual sciences, the study of the neural underpinnings of auditory scene analysis remains in its infancy. This field is particularly challenged by the lack of integrative theories which incorporate our knowledge of the perceptual bases of scene analysis with the neural mechanisms along various stages of the auditory pathway. Because of the nature of the problem, the neural circuitry at play is intricate and multi-scale by design. The objective of the proposed research is to provide a systems view to modeling scene analysis which integrates mechanisms at the single neuron level, population level and across area interactions. The intellectual merit of the proposed theory is to elucidate the specific mechanisms and computational rules at play; facilitate its integration in engineering systems and enable generating novel testable predictions. The proposal investigates the key hypothesis that attention to a feature of a complex sound instantiates all elements that are coherent with this feature, thus binding them together as one perceptual object or stream. This binding hypothesis requires three scales of analyses: a micro-level mapping of complex sounds into a multidimensional cortical feature representation; a meso-level coherence analysis correlating activity in populations of cortical neurons; and macro-level feedback processes of attention and expectations that mediate auditory object formation. We shall formulate this hypothesis within a multi-scale computational framework that provides a unified theory for the neural underpinnings of auditory scene analysis. The three core research aims of this project explore all facets of this model employing computational and physiological approaches: Aim I. A multi-scale coherence model: The main goal is to formulate the binding hypothesis as a unified biologically plausible theory of auditory streaming, integrating multi-scale sensory with cognitive cortical mechanisms. This computational effort will incorporate findings from experiments in Aims II and III, generate testable predictions, as well as provide effective algorithmic implementations to tackle the cocktail party problem in biomedical applications; Aim II.
Physiological investigations of the multi-scale coherence theory: Our aim is to use an animal model to record single-unit (micro-level, meso-level) and across area (macro-level) physiological activity in both primary auditory and prefrontal cortex, while presenting sufficiently complex acoustic environments so as to test and refine the computational model; Aim III. Refinement of the coherence theory with physiological and perceptual testing in humans: The objective is to directly test predictions from the model in human subjects, using magnetoencephalography (MEG) and psychoacoustic experiments. We shall particularly focus on the role of cortical mechanisms in scene analysis in normal and aging brains. The proposed research draws upon the expertise of a cross-disciplinary team integrating neurobiology and engineering. It is unique in that it is the first effort to postulate a role for coherence in the scene analysis problem, and to investigate the binding hypothesis integrating cortical and attention mechanisms in auditory streaming experiments. In addition, by testing the theory directly on human subjects and comparing normal and aging brains (known to face perceptual difficulties in cocktail party settings), we hope to better understand the neural underpinnings of scene analysis under their normal and malfunctioning states, hence enhancing the translational potential of the model. The broader impact of this effort is to provide versatile and tractable models of auditory stream segregation, significantly facilitating the integration of such capabilities in engineering systems. |
1 |
2016 — 2019 | Elhilali, Mounya | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies) |
Smart Stethoscope For Monitoring and Diagnosis of Lung Diseases @ Johns Hopkins University Project summary The use of chest auscultations to "listen" to and diagnose lung infections has been in practice since the invention of the stethoscope in the early 1800s. While it is a versatile tool that is universally used to complement clinical observation and other diagnosis methods (e.g. chest palpation, X-rays), it remains an outdated technology that has not evolved much beyond its early design. Its use is limited by subjectivity and inconsistency in interpreting chest sounds, inter-listener variability, the need for advanced medical expertise, as well as vulnerability to ambient noise that masks the presence of sound patterns of interest. In the current project, we propose to design a novel smart stethoscope to automate diagnosis of chest auscultations; especially for pediatric use. Over 2 million children die every year of acute lower respiratory infections (ALRI), the leading cause of childhood mortality worldwide. Our hypothesis is that if lung sounds are robustly acquired and analyzed, they are sufficiently informative to result in quantifiable improvements in detection accuracy of lung pathologies. By improving diagnosis capability using a low-cost technology, the proposed smart stethoscope will enhance resource and case management of ALRI, especially in impoverished settings that lack alternative diagnosis tools such as X-rays. This proposal focuses on two key components for improving efficacy of lung auscultation diagnosis: Aim 1: Designing the smart stethoscope technology. This effort takes a different engineering direction than devices currently on the market by employing novel transducer and microphone arrays in a layout that mitigates issues with ambient noise and signal stability. The expected outcome is to provide medical practitioners with a low-cost device that offers noise-control, signal amplification and stable recordings. We will test this technology at the Johns Hopkins Pediatric Emergency Hospital. Aim 2: Augmenting the smart stethoscope with computer-aided diagnosis. We propose adaptive signal processing methods for analyzing lung signals to enable differentiating normal from pathological cases. The expected outcome is to improve the specificity and sensitivity of lung diagnosis using computer-aided analyses, and help inform clinical decisions and ALRI case management. The efficacy of the algorithm is directly evaluated in a study at a children's hospital in Peru, using radiographic pneumonia as a benchmark. The site is chosen as representative of the applicability of the proposed device in a low-resource setting. The proposal is a multidisciplinary effort that draws upon the expertise of engineers and medical experts, with close interaction and ongoing validation in patient populations. Its overarching goal is to improve sensitivity and specificity of pulmonary diagnosis using auscultations. The overall outcome of this effort is a point-of-care technology that is effective, low-cost, and deployable for pulmonary-health monitoring in hospitals, clinics, low resource community centers, and potentially home-based monitoring. |
1 |
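
Aim 2 of the entry above hinges on signal-processing features that separate normal from pathological lung sounds. As a minimal, hedged sketch (not the device's actual algorithm): wheezes are classically described as sustained narrowband tones roughly in the 100–1000 Hz range, so a crude detector can flag analysis frames whose in-band energy is concentrated in a single spectral peak. Every threshold below is illustrative, not clinically validated.

```python
import numpy as np

def wheeze_frames(signal, fs, frame_ms=64, peak_ratio=0.4,
                  band=(100.0, 1000.0)):
    """Flag frames whose energy in `band` is dominated by one narrow
    spectral peak -- a crude proxy for wheezing (illustrative only)."""
    n = int(fs * frame_ms / 1000)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    flags = []
    for start in range(0, len(signal) - n, n):
        frame = signal[start:start + n] * np.hanning(n)
        spec = np.abs(np.fft.rfft(frame)) ** 2
        band_energy = spec[in_band].sum() + 1e-12
        # Tonal frames concentrate energy in the single largest bin.
        flags.append(spec[in_band].max() / band_energy > peak_ratio)
    return np.array(flags)

# Toy demo: a 400 Hz "wheeze" appears halfway through broadband noise.
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
sig = 0.3 * rng.standard_normal(fs)
sig[fs // 2:] += np.sin(2 * np.pi * 400 * t[fs // 2:])
print(wheeze_frames(sig, fs).astype(int))  # later frames flag as 1
```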
2017 — 2021 | Mittal, Rajat; Elhilali, Mounya; Moss, Cynthia; Sterbing-D'angelo, Susanne |
N/A (no activity code was retrieved) |
NCS-FO: Active Listening and Attention in 3D Natural Scenes @ Johns Hopkins University As humans and other animals move around, the distance and direction between their bodies and objects in their environment are constantly changing. Judging the position of objects, and readjusting body movements to steer around the objects, requires a constantly updated map of three-dimensional space in the brain. Generating this map, and keeping it updated during movement, requires dynamic interaction between visual or auditory cues, attention, and behavioral output. An understanding of how spatial perception is generated in the brain comes from decades of research using visual or auditory stimuli under restricted conditions. Far less is known about the dynamics of how natural scenes are represented in freely moving animals. This project will bridge this gap by studying how freely flying bats navigate through their environment using echolocation. Specifically, a team of engineers and neuroscientists will investigate how the bat brain processes information associated with flight navigation. The project team will provide education and training in engineering and science to public school, undergraduate and graduate students, and to postdoctoral researchers. This research will also contribute to a rich library of materials, including videos and a website, which will be available to educators and scientists working in both the private and public sectors. |
1 |
2018 — 2020 | Elhilali, Mounya | U01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies) |
Multiscale Modeling of the Cocktail Party Problem @ Johns Hopkins University Project summary At every instant of our lives, a cacophony of sounds impinges on our ears and challenges our brain to make sense of the complex acoustic environment in which we live, a phenomenon referred to as the cocktail party problem (CPP). Until now, efforts to understand this phenomenon focused on the role of acoustic cues in shaping sensory encoding of auditory objects in the brain. Yet, listening is not the same as hearing. It engages both sensory and cognitive processes to enable the brain to adapt its computational primitives and neural encoding to the changing soundscape and shifting demands to attend to various sounds in the scene. The current proposal puts forth an adaptive theory of auditory perception which integrates the role of both sensory mechanisms and cognitive control in a unified multiscale theory that combines neural processes at the level of single neurons, neural populations and across brain areas. Central to this hypothesis is the role of rapid neural plasticity that reshapes brain responses to acoustic stimuli according to the statistical structure of the soundscape, guided by feedback mechanisms from memory and attention. The research plan translates this hypothesis into a unified multiscale model employing a distributed inference architecture (Aim 1). This scheme employs hierarchical dynamical systems that track the statistical structure of the stimulus at different resolutions and time-scales, and adapt their responses based on both memory and attentional priors. This architecture is used as a springboard to predict the interaction between sensory and cognitive mechanisms at play during the CPP. It also affords a general solution to the scene analysis problem that will be interfaced with existing sound technologies (e.g. speech recognition, medical diagnosis, target tracking and surveillance). This computational effort is informed and validated with empirical data (Aim 2) from experiments in human subjects, using psychoacoustics and EEG; as well as single-unit electrophysiology in behaving ferrets. The experiments shed light on neural processes underlying the CPP using rich stimuli that manipulate the statistical structure as well as attentional focus of subjects (humans/animals). The final integrated theory is refined in perceptual studies in young and aging adults whose perception is highly challenged by complex listening soundscapes (Aim 3). This effort generates testable predictions about failures in auditory perception in multisource environments especially in aging adults and pinpoints possible malfunctions due to sensory or cognitive factors. By shedding light on the functional principles and neural underpinnings underlying the sensory and cognitive interaction during the CPP, the research will have a big impact on our understanding of auditory perception in cluttered scenes. In addition, it has direct relevance to health and wellbeing, particularly for improving communication aids for the sensory impaired and aging populations; as well as affording adaptive processing to sound technologies (e.g. speech recognition, audio analytics) which remain for the most part static and hard-wired. |
1 |
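
The U01 abstract above describes "hierarchical dynamical systems that track the statistical structure of the stimulus at different resolutions and time-scales." A minimal caricature of that idea, assuming nothing about the proposal's actual model, is a bank of leaky integrators with different time constants plus a "surprise" signal computed against the slowest estimate:

```python
import numpy as np

class MultiscaleTracker:
    """Bank of exponential-moving-average estimators, one per time
    scale -- a caricature of hierarchical statistical tracking.
    The time constants in `taus` (in samples) are illustrative."""

    def __init__(self, taus=(10, 100, 1000)):
        self.alphas = [1.0 / tau for tau in taus]
        self.means = [0.0] * len(taus)

    def step(self, x):
        """Update every scale with one sample; return the per-scale
        estimates and the deviation from the slowest scale."""
        for i, a in enumerate(self.alphas):
            self.means[i] += a * (x - self.means[i])
        surprise = abs(x - self.means[-1])
        return list(self.means), surprise

# Toy demo: the input statistics change halfway through the stream.
rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(0, 1, 2000), rng.normal(3, 1, 2000)])
tracker = MultiscaleTracker()
for sample in stream:
    means, surprise = tracker.step(sample)
print([round(m, 2) for m in means])  # fast scales reach ~3 before slow ones
```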
2019 | Busch-Vishniac, Ilene Joy; Elhilali, Mounya |
R43 (Activity Code Description: To support projects, limited in time and amount, to establish the technical merit and feasibility of R&D ideas which may ultimately lead to a commercial product(s) or service(s)) |
Feelix@Home: a Smart Stethoscope to Improve Pediatric Asthma Management For Urban Minority Families @ Sonavi Labs, Inc. Project Summary The goal of this project is to further develop an existing smart stethoscope so that it can monitor pediatric patients at home who suffer from asthma. Lung diseases impose a serious burden on healthcare systems, individuals and governments. WHO recognizes asthma as the leading chronic disease in children and estimates that 235 million people suffer from the disease worldwide, with over 380,000 deaths from the disease in 2015. In the United States, asthma prevalence and disease burden disproportionately affect Blacks or African-Americans compared with White Americans. From 2008 to 2010, the annual US asthma prevalence (11.9% for Black Americans versus 8.1% in Caucasians), mortality rate (0.23 versus 0.13 per 1000 patients per year), and emergency department visits (18.4 versus 6.1 visits per 100 patients per year) were all worse among Blacks or African-Americans. The fundamental causes of health disparities in relation to asthma are well understood (urban air pollution, housing, poor diet, poverty, and social and/or geographical isolation) but remain very difficult to solve. Early technological and mobile applications for remote management have attempted to address these problems, with somewhat positive results, but require patient self-assessment and do not include objective monitoring of lung status. A small number of wheeze detectors and pulmonary monitors have been approved for marketing by the FDA, but face several technological limitations and are not commercially available in the US. We reasoned that a long-term monitoring solution that can be used in the home by untrained patients, or family members of patients, could detect and monitor severity of airway inflammation in patients, provide insight into reasons for worsening or improved symptoms, provide tailored educational content and direct patients to medical follow-up before the situation becomes acute, thus reducing trips to emergency departments and readmission rates to hospitals. We find that several challenges exist when considering long term auscultatory monitoring solutions in non-traditional clinical settings: (1) unpredictable ambient noise, (2) the need for medical expertise to interpret lung sounds, (3) subjectivity in the analysis, and (4) difficulty using and placing the stethoscope. In order to overcome many of these challenges, the research team developed a smart stethoscope that was originally intended for use in low-resource countries by community health workers to differentiate between pediatric patients with crackles and wheezes. This smart stethoscope addresses all the challenges above by including (1) adaptive noise suppression that has been objectively and subjectively proven to be superior in all types of noise environments to traditional and other electronic stethoscopes, (2) on-board analysis algorithms that can detect crackles and wheezes in pediatric patients with an accuracy that matches that of a specialist, and (3) a uniform pickup surface that removes the requirement for exact placement of the device to get an accurate recording. In this project, we will validate that the device can be correctly used by parents of children with asthma through daily recordings over a 6-week period following an ED visit.
We then plan to confirm that our existing detection algorithms can be used or modified to track changes in lung sound severity, followed by correlating these algorithm outputs with patient-reported outcomes and environmental data. Simultaneously, we will be using patient feedback to iterate on the device design to create a version that minority and underserved patients are comfortable using in their home. |
0.912 |
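
The Feelix@Home entry credits the device with adaptive noise suppression whose actual design is not public. Purely as a generic illustration of the idea, classic spectral subtraction estimates the ambient noise spectrum from an initial signal-free interval and subtracts it from each frame's magnitude spectrum; every parameter below is an assumption.

```python
import numpy as np

def spectral_subtraction(noisy, fs, noise_ms=250, frame=512, floor=0.05):
    """Generic spectral subtraction -- an illustration of adaptive noise
    suppression, not Sonavi Labs' algorithm. Assumes the first
    `noise_ms` milliseconds of the recording contain noise only."""
    hop = frame // 2
    window = np.hanning(frame)
    frames = [noisy[i:i + frame] * window
              for i in range(0, len(noisy) - frame, hop)]
    specs = [np.fft.rfft(f) for f in frames]
    n_noise = max(1, int(fs * noise_ms / 1000) // hop)
    noise_mag = np.mean([np.abs(s) for s in specs[:n_noise]], axis=0)
    out = np.zeros(len(noisy))
    for k, s in enumerate(specs):
        # Subtract the noise magnitude, keep a small spectral floor,
        # and resynthesize with the noisy phase (overlap-add).
        mag = np.maximum(np.abs(s) - noise_mag, floor * np.abs(s))
        out[k * hop:k * hop + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(s)), frame)
    return out

# Toy demo: a 200 Hz tone buried in noise after a noise-only lead-in.
fs = 4000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(3)
noisy = 0.5 * rng.standard_normal(t.size)
noisy[fs:] += np.sin(2 * np.pi * 200 * t[fs:])
clean = spectral_subtraction(noisy, fs)
print(round(np.std(clean[:fs]) / np.std(noisy[:fs]), 2))  # noise attenuated
```

A deployed device would adapt its noise estimate continuously rather than fixing it from the opening frames; that simplification is for brevity only.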
2020 | Elhilali, Mounya | R13 (Activity Code Description: To support recipient sponsored and directed international, national or regional meetings, conferences and workshops) |
CogHear: Cognitive Hearing Workshop Series @ Johns Hopkins University Abstract The proposal outlines a plan to organize a series of workshops on cognitively-controlled listening devices. The meetings aim to advance scientific inquiry and computational tools for decoding brain activity and assessing cognitive functions in order to control assistive listening devices used in everyday environments. The field pushes past current limitations of technologies such as hearing aids, which take a bottom-up, hearing-centric approach. Instead, we explore scientific and engineering solutions to leverage both hearing and cognitive functions to deliver improved communication capabilities. The workshops follow a hands-on format in which interdisciplinary teams of senior researchers, younger trainees, and students work together over four days to pilot new ideas, exchange approaches and compare methodologies around themes relevant to auditory cognition. The workshops, to be held every 18 months, aim to foster a nascent community of researchers spanning hearing scientists, cognitive and brain scientists, engineers and computer scientists. Benefits from the cross-training and large-scale collaborations across these fields will facilitate translating readouts of the mental state of listeners into effective feedback mechanisms to control hearing prosthetics and shape the listening experience. The workshop aims to engage a wider community and particularly cultivate participation of women and under-represented minorities, as well as give a voice to junior researchers. Beyond the scientific and engineering scholarly contributions, the proposed workshops will provide know-how to develop systems able to implement cognition-aware listening devices and as such can expand the industry of cognitive robotics and assistive devices. A close partnership between the research community and industry presents valuable opportunities for translational impact that benefit the field in general. |
1 |
2020 | Busch-Vishniac, Ilene Joy; Elhilali, Mounya |
R42 (Activity Code Description: To support in-depth development of cooperative R&D projects between small business concerns and research institutions, limited in time and amount, whose feasibility has been established in Phase I and that have potential for commercialization. Awards are made to small business concerns only.) |
@ Sonavi Labs, Inc. Project Summary The goal of this project is to further develop an existing smart stethoscope so that it can monitor pediatric patients at home who suffer from asthma, as well as adults with COPD. Lung diseases impose a serious burden on healthcare systems, individuals and governments. The World Health Organization (WHO) found that chronic obstructive pulmonary disease (COPD) and lower respiratory infections (LRIs) ranked third and fourth as the leading causes of death in 2016, each claiming 3 million lives annually. LRIs accounted for 14.9% of pediatric deaths, making them the leading cause of infant mortality after pre-term birth. Asthma, a condition for which, like COPD, there is no cure, is also the leading chronic disease in children; an estimated 235 million people suffer from the disease worldwide, with over 380,000 deaths from the disease in 2015. Asthma and COPD cost the United States approximately $56 billion and $72 billion last year, respectively. The burden of these diseases and the health disparities across populations are only slated to get worse in the coming decade, as respiratory diseases are expected to increase by 155% due to an aging population and increased pollution, while a large shortage of pulmonary specialists is expected, with a projected 7% decline by 2030. We reasoned that a long-term monitoring solution that can be used in the home by untrained patients or family members of patients, and that could detect and monitor severity of airway inflammation, provide insight into reasons for worsening or improved symptoms, push tailored educational content, and direct patients to medical follow-up before the situation becomes acute, would empower patients with chronic conditions while also reducing trips to emergency departments and readmission rates to hospitals. We find that several challenges exist when considering long term auscultatory monitoring solutions in non-traditional clinical settings: (1) unpredictable ambient noise, (2) the need for medical expertise to interpret lung sounds, (3) subjectivity in the analysis, and (4) difficulty using and placing the stethoscope. The research team developed a smart stethoscope, originally intended for use in low-resource countries by community health workers to differentiate between pediatric patients with crackles and wheezes, that overcomes many of these challenges. This smart stethoscope addresses all the challenges above by including (1) adaptive noise suppression that has been objectively and subjectively proven to be superior in all types of noise environments to traditional and other electronic stethoscopes, (2) on-board analysis algorithms that can detect crackles and wheezes in pediatric patients with an accuracy that matches that of a specialist, and (3) a uniform pickup surface that removes the requirement for exact placement of the device to get an accurate recording. In this project, we will validate that the device can be correctly used by parents of children with asthma and that accurate recordings can be taken that are similar in quality to those that would be taken by a medical professional. Simultaneously, we will be using patient feedback to iterate on the device and mobile app design to create a version that patients are comfortable using in their home.
Once the device and app have been validated in Phase I, we plan to move directly into Phase II, where the device will enter a second phase of investigation that will include a first longitudinal study in which parents of pediatric patients take daily recordings in their home. This data will then be used for the development of algorithms to determine lung sound severities with metrics that can be tracked and predicted over time. In parallel with this clinical study and algorithm development, recordings will be taken of adult patients with COPD to expand the usability of the device beyond pediatrics. |
0.912 |
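
As noted above the table, the matching scores in the right-hand column (1, 0.912) come from an undisclosed algorithm. One plausible, entirely assumed construction of such a grant-to-researcher score in [0, 1] combines fuzzy similarity of the investigator name and the affiliation; the 0.8 name weighting below is a made-up parameter.

```python
from difflib import SequenceMatcher

def match_score(grant_pi, grant_inst, researcher, institution,
                name_weight=0.8):
    """Toy grant-to-researcher matching score in [0, 1]. The real
    site's algorithm is not public; method and weights are assumed."""
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return (name_weight * sim(grant_pi, researcher)
            + (1 - name_weight) * sim(grant_inst, institution))

print(round(match_score("Elhilali, Mounya", "Johns Hopkins University",
                        "Mounya Elhilali", "Johns Hopkins University"), 3))
```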