2002 — 2004 |
Tong, Frank |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Mechanisms of Human Visual Perception
DESCRIPTION (provided by applicant): Human visual perception relies on both selective and constructive neural mechanisms to organize and to interpret visual information. The selectivity of perception can be seen in binocular rivalry, during which each eye views a different monocular pattern. Under these conditions, perception selectively alternates between one monocular image and the other every few seconds. Conversely, the constructive nature of perception may be best exemplified by perceptual filling-in of the blind spot, in which vivid impressions occur in a visual location that lacks any input. Exactly how the brain mediates these complementary processes of perceptual selection and construction remains poorly understood. This project will use functional magnetic resonance imaging (fMRI) to investigate the neural basis of binocular rivalry and perceptual filling-in within human visual cortex. Our central hypothesis is that selective perception during rivalry and constructive perception during filling-in involve separate neural mechanisms that operate at different levels of the visual system. To investigate these issues, we have developed special behavioral and fMRI techniques to localize the cortical representation of the blind spot quickly and reliably. In previous studies, we have shown that fMRI activity in the monocular V1 representation of the blind spot is tightly linked to perceptual awareness during rivalry, suggesting that rivalry results from early competition between monocular V1 neurons. In contrast, when we stimulate the retinal region immediately surrounding the blind spot, we find evidence of a "hole" in visual activity in V1 but not in V2, perhaps suggesting that perceptual filling-in occurs in higher visual areas such as V2. The proposed research will characterize the neural mechanisms and visual areas responsible for rivalry and filling-in.
More important, it will address scientific debates regarding whether: i) binocular rivalry arises from interocular competition versus pattern competition, and ii) perceptual filling-in arises from active neural completion versus passive remapping of visual inputs. This project will advance our knowledge of the neural organization of selective and constructive mechanisms in human visual perception. Such research is important given that vision serves as a primary sense for acquiring information from the environment to guide judgments and actions. The proposed studies will not only address the neural basis of human visual perception but will also inform research on visual dysfunctions and neurological disorders, including strabismus, amblyopia (suppressed vision in one eye), and the neural consequences of visual-field loss resulting from retinal or cortical injury.
2007 — 2011 |
Tong, Frank |
R01 |
Neural Representation of Features in the Human Visual Cortex
DESCRIPTION (provided by applicant): The primate visual system analyzes incoming visual information according to a sparse set of fundamental features. Feature-selective neurons provide a detailed analysis of the local features in visual scenes, such as information about stimulus orientation, motion direction, and so forth. The neural representation of visual features has been studied extensively in non-human primates but has proven challenging to study in humans due to the limited spatial resolution of noninvasive neuroimaging methods. My lab has developed novel analysis techniques to measure the feature tuning properties of the human visual system using functional magnetic resonance imaging (fMRI). Our preliminary studies show that different stimulus orientations and motion directions evoke distinct patterns of ensemble fMRI activity in the human visual cortex that can be reliably classified by statistical algorithms. This project will apply this novel pattern analysis approach to investigate the neural representations of orientation and motion direction in the human cortex and the role of visual attention in feature perception. The proposed studies will evaluate whether activity in early human visual areas corresponds with visual feature perception across changes in the surface properties of the stimulus. Specific Aim 1 will investigate the orientation-selective properties of early visual areas, and determine whether these areas show evidence of cue-invariant orientation selectivity that can effectively generalize across changes in stimulus form. Specific Aim 2 will assess direction selectivity in visual areas V1 through V4 and MT+, and test for cue-invariant direction selectivity and sensitivity to perceived global motion. Specific Aim 3 will explore the role of visual attention in selecting and stabilizing the representations of visual features. 
The results from this project will provide important new insights into the human neural bases of visual feature perception, and help provide a bridge between animal and human studies. The ability to measure the feature-selective properties of an individual's brain may also have high clinical significance. This approach may lead to effective new tools to investigate, characterize, or diagnose impairments in cortical visual function resulting from disease or injury, or methods to evaluate the cortical effects of medical treatment or recovery of function.
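The pattern-classification idea described above can be illustrated with a minimal sketch. The voxel responses below are synthetic stand-ins (real analyses would use preprocessed fMRI response estimates), and the nearest-centroid decoder is a simplified proxy for the statistical classifiers the project employs; none of the numbers reflect actual data.

```python
import numpy as np

# Hypothetical illustration of multivoxel pattern classification:
# each simulated voxel carries a weak but reliable bias toward one of
# two stimulus orientations, buried in trial-to-trial noise.

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 50
bias = rng.normal(0, 0.5, n_voxels)          # per-voxel orientation preference

def simulate(orientation_sign):
    """Simulated voxel patterns (trials x voxels) for one orientation."""
    return orientation_sign * bias + rng.normal(0, 2.0, (n_trials, n_voxels))

train_a, train_b = simulate(+1), simulate(-1)
test_a,  test_b  = simulate(+1), simulate(-1)

# Nearest-centroid decoder: classify each test pattern by the closer
# mean training pattern (a minimal stand-in for a linear classifier).
ca, cb = train_a.mean(0), train_b.mean(0)

def decode(patterns):
    da = np.linalg.norm(patterns - ca, axis=1)
    db = np.linalg.norm(patterns - cb, axis=1)
    return np.where(da < db, +1, -1)

accuracy = np.mean(np.concatenate([decode(test_a) == +1,
                                   decode(test_b) == -1]))
print(f"decoding accuracy: {accuracy:.2f}")   # well above the 0.5 chance level
```

Even though each voxel's bias is far weaker than the trial-to-trial noise, pooling across many voxels yields reliable decoding, which is the logic behind ensemble pattern analysis.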
2007 — 2012 |
Tong, Frank |
N/A Activity Code Description: No activity code was retrieved; click on the grant title for more information |
Neural Representations of Objects Across the Human Visual Pathway
People excel at recognizing objects across changes in position, size, viewpoint, lighting and general form, whereas computer recognition systems perform poorly when faced with such variable, unpredictable situations. Exactly how the human brain solves the computational challenges of object recognition is not well understood. To recognize an object, one must integrate the features, contours, and parts of an object into an organized whole, and then match these representations of an object's shape to items stored in memory. Somehow, the brain can solve this computational problem by extracting the stable, invariant properties of objects while disregarding superficial variations in the retinal image. With support from the National Science Foundation, Dr. Frank Tong and his colleagues at Vanderbilt University will investigate the neural bases of object recognition using functional magnetic resonance imaging (fMRI) and novel pattern classification methods adapted from machine learning. These studies will determine what types of information about objects are represented by cortical activity patterns across the human visual pathway, ranging from low-level visual areas that respond best to basic features, to high-level areas that respond best to complex objects. Rather than focusing exclusively on the high-level object-selective areas, this project emphasizes a different approach to understand how invariant representations of objects are formed. Studies will characterize the neural representation of objects at each stage of the visual pathway, from the primary visual cortex to anterior inferotemporal areas, to determine how object representations are transformed from one processing stage to the next.
This research will help reveal how the brain solves the problem of object recognition, by transforming the raw retinal input into increasingly more flexible representations of the object through a process spanning many successive levels of the visual pathway. Results from these studies will inform current theories of object recognition. Understanding these neural bases is necessary to comprehend what can go wrong with object recognition in cases of learning disability, developmental disorder, or brain injury (e.g., developmental or acquired dyslexia). These findings may also help improve computer algorithms for object recognition.
2012 — 2016 |
Tong, Frank |
N/A |
Cortical Representations of Visually Specific Information in Working Memory
Visual working memory is a core cognitive function that allows people to actively maintain and manipulate information about stimuli that are no longer in view. With funding from the National Science Foundation, Dr. Frank Tong of Vanderbilt University is investigating how the human visual system actively maintains information about visual features and objects over delay periods of many seconds. This project is evaluating the hypothesis that visually specific content is maintained in working memory as a result of top-down feedback to early visual areas of the brain as well as to higher-level object-selective areas of the brain. The question is to what extent early visual brain areas can retain visually precise information about an actively remembered object. Dr. Tong is developing advanced methods to decode patterns of human brain activity for the purposes of reading out information about what item a person is maintaining in working memory. Experiments are investigating how multiple visual areas represent information about simple visual patterns and complex objects. Additional experiments will test whether these working memory representations are robust to visual interference, and whether their contents can be dynamically manipulated and modified based on the goals of the participant. The results are expected to provide new insights into the neural bases of visual working memory, its robustness to interference, and its capacity for flexible manipulation of remembered visual content.
The visual working memory system provides an essential link between immediate perception and higher-level cognitive processes, and is important for mental imagery, vision-based learning, visuospatial planning, and the maintenance of an updated representation of the objects in one's environment. Understanding the neural bases of visual working memory is important for advancing knowledge of human brain function and is also relevant to developing better methods to improve learning in educational settings. Research on visual working memory is also a prerequisite to developing better methods to diagnose individuals with impairments in visual working memory, and it may eventually lead to treatment interventions. The methodological advances from this project are also highly relevant to research on brain-computer interfaces, and they constitute an advance in the ability to decode specific mental content from patterns of human brain activity.
2017 — 2018 |
Kunda, Maithilee; Warren, Zachary; Tong, Frank; Stassun, Keivan; Sarkar, Nilanjan (co-PI) |
N/A |
Convergence HTF: A Workshop Shaping Research on Human-Technology Partnerships to Enhance STEM Workforce Engagement
The landscape of jobs and work is changing rapidly, driven by the development of new technologies. Intelligent, automated machines and services are a growing part of jobs and the workplace. New technologies are enabling new forms of learning, skills assessments, and job training. The potential benefits of these technologies include increased productivity and satisfaction, and more job opportunities. The workshop supported by this award aims to harness these innovations to enhance the science, technology, engineering, and mathematics (STEM) job opportunities and workforce engagement of individuals with autism spectrum disorder (ASD) and related developmental disabilities. The workshop will promote the convergence of psychology, data science, computer science, engineering, learning science, special education, organizational behavior, and business to define key challenges and research imperatives at the nexus of humans, technology, and work. This convergence workshop will employ deep integration of knowledge, theories, methods, and data from multiple fields to form new and expanded frameworks for addressing scientific and societal challenges and opportunities. The results of the workshop will include the identification and sharing of new research directions and tools to enhance STEM workforce engagement of individuals with ASD and related developmental disabilities. This convergence workshop addresses the future of work at the human-technology frontier.
The workshop will explore tools and approaches to enhance retention, engagement, and productivity in STEM jobs, and specifically to harness the unique capabilities and accommodate the individual needs of individuals with ASD. The workshop will develop a convergence research agenda around four topics, including 1) human-technology partnerships to support success in K-12 STEM education, 2) tools for characterizing individual capabilities and affinities and mapping these to STEM workforce needs, 3) artificial-intelligence and visual-cognition tools for human interaction with data, and 4) technologies to accommodate unique needs and capabilities in the workplace. These topics will integrate previously disparate disciplines and research approaches, with speakers encompassing a wide range of subject matter expertise: from engineers and technologists who are developing human-technology interfaces and devices, to psychologists who are harnessing human-technology partnerships to better understand unique human capabilities for STEM, to computer scientists who are studying and developing novel data-visualization approaches patterned on autistic visual thinking, to organizational scientists developing innovative employment models for the creation of STEM sector employment spaces and technologies that leverage and support autistic individuals in the workforce. The conclusions and recommendations from the workshop will be disseminated via a white paper, and will be used to design a research agenda to help leverage human-technology advances to maximize workforce opportunities and productivity.
2018 — 2021 |
Tong, Frank |
R01 |
Perceptual Functions of the Human Lateral Geniculate Nucleus
The lateral geniculate nucleus (LGN) is classically portrayed as a relay station that simply serves to transfer signals from the retina to the primary visual cortex. According to this account, the LGN passively provides the necessary feedforward input to the visual cortex, but has no direct involvement in more complex perceptual processes. However, such an account fails to explain why the LGN receives far more afferents from the visual cortex than from the retina; moreover, it ignores the possibility that top-down feedback signals from the visual cortex to the LGN could have an important role in perceptual coding and in shaping the complex topography of responses that arise from the early visual system. According to neural theories of predictive coding, neurons in higher visual areas with large receptive fields can process more global information and send predictions about the input they receive to the lower visual area providing input. Any local errors in these globally informed predictions are then computed as residual error signals in the lower area. According to this account, portions of a visual scene that appear irregular or less expected, such as a figure that differs in featural content from its surround, may be highlighted at this lower site by additional residual processing. A far-reaching implication of this theory is that these top-down predictions may propagate to the lowest possible site of the visual hierarchy, modulating the response of the LGN to figural regions that differ in appearance from the adjacent background. This project will provide a novel evaluation of the functional role of the LGN in figure-ground processing, characterizing the impact of feedback modulation at the earliest possible site of the human visual pathway. We will use high-resolution fMRI at 7 Tesla to investigate multiple aspects of figure-ground processing in the LGN and V1. 
In Specific Aim 1, we will determine whether figure-selective enhancement in the early visual system depends on automatic perceptual processes or a mechanism of spatial attentional feedback. In Specific Aim 2, we will apply population coding models and multivariate regression techniques to characterize the spatial profile of figure-ground processes in the LGN and V1, and test for distinct mechanisms of boundary detection and figure enhancement. In Specific Aim 3, we will evaluate whether modulatory figure-ground effects in the LGN can be attributed to top-down feedback from binocularly sensitive visual cortex, and provide fine-grained characterization of the tuning profile of this feedback modulation. The results of this project will provide new insights into the perceptual functions of the human LGN, which are poorly understood, and yield critical new data to inform current models of predictive coding and figure-ground processing. The development of high-resolution fMRI methods to characterize and reconstruct LGN and V1 responses in image space is also of considerable health relevance. Future applications of this approach could be used to construct detailed visual-field maps of LGN and V1 responses associated with damage to the peripheral retina, impairments of central visual processing such as amblyopia, as well as the impact of clinical interventions.
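As a toy illustration of the predictive-coding account above (not a model of actual LGN responses), the sketch below treats the higher area's prediction as a coarse average over a large surround, so the residual error carried by the lower area is elevated for a figure region that differs from its background; all parameters are arbitrary.

```python
import numpy as np

# Toy predictive-coding residual: the "higher area" predicts each
# location from its broad surround (a large receptive field), and the
# "lower area" carries the prediction error.  A figure patch differing
# from the background therefore produces elevated residual activity.

rng = np.random.default_rng(1)
image = rng.normal(0, 0.05, (32, 32))          # background with mild noise
image[12:20, 12:20] += 1.0                     # "figure" differing from ground

def surround_prediction(x, radius=4):
    """Crude large-receptive-field prediction: local neighborhood mean."""
    padded = np.pad(x, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

residual = np.abs(image - surround_prediction(image))
figure_err = residual[12:20, 12:20].mean()     # error over the figure region
ground_err = residual[:8, :8].mean()           # error over uniform background
print(f"residual error: figure {figure_err:.3f} vs ground {ground_err:.3f}")
```

Note that the residual is largest at the figure's boundary, which parallels the distinction drawn in Specific Aim 2 between boundary detection and figure enhancement.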
2019 — 2024 |
Wallace, Mark; Sarkar, Nilanjan (co-PI); Stassun, Keivan (co-PI); Tong, Frank; Kunda, Maithilee |
N/A |
NRT-FW-HTF: Neurodiversity Inspired Science and Engineering (NISE)
Neurodiversity is an emerging concept through which certain neurological differences - Autism, Attention Deficit Hyperactivity Disorder, Dyslexia, and others - are considered a natural part of human neurocognitive variation, associated not only with impairments but also with unique strengths. Indeed, many neurodiverse people have capabilities that are in high demand across many sectors, yet their potential remains vastly underutilized. This National Science Foundation Research Traineeship (NRT) award to Vanderbilt University will address this potential by training graduate students in a new interdisciplinary field of Neurodiversity Inspired Science and Engineering (NISE), which links human-technology frontiers (HTF) research and education across STEM disciplines through a cohesive focus on autism. The project anticipates providing a unique and comprehensive training opportunity for one hundred fifty (150) MS and PhD students, including forty-five (45) funded trainees, from computer science, mechanical engineering, data science, psychology, organizational science, and neuroscience. Students will engage in research that has as its goals: (i) understanding the unique capabilities associated with autism and learning to match these capabilities to 21st-century workforce needs, (ii) prototyping assistive technologies to enable employment and workplace success, and (iii) exploring organizational practices that help leverage the talents of autistic individuals and enhance organizational innovation.
The NISE NRT project seeks to train a new type of engineer and scientist, one who can devise innovations that support workforce engagement of individuals with autism and/or that are inspired by autistic capabilities. Building on the strengths of Vanderbilt's new Frist Center for Autism & Innovation, this NRT project will engage trainees in the development, deployment, and commercialization of HTF approaches and devices, providing broadly applicable skills in artificial intelligence, data science, robotics, virtual reality, and inclusive design. Collaboration with practitioners in clinical psychology, special education, and business will ensure relevance of trainees' projects to the clinical, educational, and/or commercial domains. Trainees will participate in the Vanderbilt NSF I-Corps program as well as in invention disclosure and patenting. Research projects will also impact K-12 students and teachers in the communities where NRT trainees conduct their work. A central part of the program's plan to recruit, mentor, and advance women, underrepresented minorities, and persons with disabilities is the Fisk-Vanderbilt Masters-to-PhD Bridge Program, a national exemplar in STEM graduate diversity. Trainees will undertake, in addition to their regular graduate program requirements, a common core of three new NISE courses, summer school, workshops, and internships, culminating in a graduate certificate in NISE.
The NSF Research Traineeship (NRT) Program is designed to encourage the development and implementation of bold, new potentially transformative models for STEM graduate education training. The program is dedicated to effective training of STEM graduate students in high priority interdisciplinary or convergent research areas through comprehensive traineeship models that are innovative, evidence-based, and aligned with changing workforce and research needs.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2021 |
Tong, Frank |
P30 Activity Code Description: To support shared resources and facilities for categorical research by a number of investigators from different disciplines who provide a multidisciplinary approach to a joint research effort or from the same discipline who focus on a common research problem. The core grant is integrated with the center's component projects or program projects, though funded independently from them. This support, by providing more accessible resources, is expected to assure a greater productivity than from the separate projects and program projects. |
Computation Core @ Vanderbilt University Medical Center
PROJECT SUMMARY: COMPUTATION SERVICE MODULE The Vanderbilt Vision Research Center (VVRC) includes faculty investigators with strong interest in high-level imaging of visual perception and visual cognition in humans, neurophysiology, visual behavior in awake-behaving non-human primates, and computational modeling of human and non-human primate vision. These cognitive- and systems-level investigations require expertise in computer systems administration, data collection, analysis, and storage, and web-based applications for experimentation. In addition, other VVRC investigators require access to hardware and software maintenance and specialty programming, including web-based content. The purpose of the VVRC Computation Module is to provide a comprehensive service for computer hardware and software that supports the wide range of empirical studies our investigators conduct. The Computation Module provides computer technology support for research needed to solve more complex challenges that face computer-dependent laboratory science. This module is a VVRC-intrinsic core and is not part of a VUMC institutional facility; therefore, the service is provided to VVRC members by request and not through the VUMC Office of Research scholarship platform. In the current funding period, the computation module contributed resources in support of 12 investigators, with 140 publications resulting from use of the service, excluding website maintenance. These are indicated as such in our Progress Report Core Publications by Investigator document. Availability of this module during the current period saved VVRC investigators $409,910 in programmer and administrator costs. A survey of researcher plans indicates that the use of this service will increase, with moderate to extensive use by 19 of 36 VVRC investigators.
The computation module, housed in approximately 500 sq ft of office, server, and storage space in Wilson Hall proximal to VVRC investigators, is directed by VVRC Investigator Thomas Palmeri, PhD. Using this space and personnel supported by this Core mechanism, the VVRC Computation Module will: (1) provide hardware and software support of VVRC investigations, (2) provide data pipeline, archiving, and storage solutions, (3) provide custom programming solutions, and (4) facilitate web-based content and interfacing. These services and resources will enhance the scope of experimentation NEI-funded VVRC investigators conduct, promote innovation through the provision of custom hardware and software resources, and enhance collaboration by providing computation support to those who otherwise would not have such capabilities, including early-career vision scientists, clinician-scientists competing for extramural funding for their laboratories, and VVRC investigators without access to computer expertise beyond basic internet technology services.
2020 — 2021 |
Tong, Frank |
P30 |
In Vivo Imaging Core @ Vanderbilt University Medical Center
PROJECT SUMMARY: IN VIVO IMAGING MODULE The Vanderbilt Vision Research Center (VVRC) includes faculty investigators with a strong interest in discerning structure-function and function-cognition relationships in the visual pathways in awake animals, including non-human primates, and human subjects. The purpose of the VVRC In Vivo Imaging Module is to provide a comprehensive resource for all non-invasive imaging research that utilizes animals, including non-human primates, or human subjects. This module gives investigators and their staff access to state-of-the-art live animal and human imaging facilities, offline analysis, and technical expertise through subsidized scholarship use of the Vanderbilt University Institute of Imaging Science (VUIIS). The VUIIS has a core program of research related to developing new imaging technology based on advances in physics, engineering, and computer science. In addition to high-field MRI and MR spectroscopy, ultrasound, optical and other modalities in human subjects, the VUIIS offers state-of-the-art options for small animal and non-human primate imaging in all modalities. These resources are provided through the Center for Human Imaging (directed by Module Director Seth Smith) and the Center for Small Animal Imaging (CSAI). The scholarship system is implemented by the VUMC Office of Research and is utilized instead of a discount or co-pay via the VUMC iLab accounting system. In the current funding cycle, the In Vivo Imaging Module was used by 11 investigators who authored 89 publications using the service, and saved our investigators $2,229,325 through issuance and utilization of 35 scholarships. In the next cycle, we expect moderate to extensive use by 17 of 36 investigators. The In Vivo Imaging Module, housed in centralized locations sufficient for numerous independent non-invasive imaging platforms, is directed by Associate Professor Seth Smith, PhD.
Using resources and personnel supported in part by this Core mechanism, the VVRC In Vivo Imaging Module will (1) develop and deploy advanced, multi-modal imaging technologies for the study of human vision, its processing, and coupled neuroscience, (2) develop and deploy a multi-modal set of imaging tools for the study of vision in animal models, and (3) develop infrastructure for imaging informatics, artificial intelligence, and machine learning for state-of-the-art analysis of "big vision data". These services and resources will enhance the scope of experimentation NEI-funded VVRC investigators conduct, expand the training of students and fellows involved in vision science, and promote collaboration by providing sophisticated, high-resolution and diverse imaging platforms to those who otherwise would not have such capabilities, including early-career vision scientists and clinician-scientists competing for extramural funding for their laboratories.
2020 — 2022 |
Biswas, Gautam (co-PI); Stassun, Keivan (co-PI); Tong, Frank; Kunda, Maithilee; Vogus, Timothy |
N/A |
NSF2026: EAGER: Collaborative Research: Enhancing Employment for Neurodiverse Individuals Through Next-Generation, AI-Enabled Assessments of Visuospatial Cognition
Each year in the United States, approximately 70,000 new adults on the autism spectrum will seek employment. At the same time, employers in technology, finance, healthcare, and many other critical job sectors seek highly skilled and highly trained individuals to fill specialized positions. With support from the DRK-12 Program in the Division of Research on Learning and the NSF 2026 Fund Program in the Office of Integrated Activities, this research will investigate new tools and methods for matching individual job-seekers on the autism spectrum to employment opportunities that leverage their unique cognitive skills, with a focus on visuospatial cognitive skills. Numerous jobs require strong visuospatial cognitive skills, such as visual inspection and quality control, process monitoring, document review, surveillance, software testing, and data visualization, to name a few. Many people on the autism spectrum show strengths in visuospatial cognitive skills, but these strengths are not fully understood, including how they differ from person to person and how they map onto workplace-relevant capabilities. Understanding visuospatial cognitive skills in individuals on the autism spectrum or other neurodiverse conditions has high potential impact for enhancing the neurodiversity of the workforce by enabling more effective programs for the recruitment, selection, and retention of such candidates in the public and private sectors.
This NSF2026 EAGER project enriches the NSF2026 Idea Machine winning entry Harnessing the Human Diversity of Mind. It seeks to develop and evaluate integrated, AI-enabled technologies for measuring a person's visuospatial cognitive skills in new ways and then using these measurements to predict performance on workplace-relevant tasks. The research conducted during this two-year project will include a large pilot study with individuals on the autism spectrum and neurotypical individuals, in which participants will be given several visuospatial tests, and detailed data about their actions will be recorded using sensors such as eye trackers and cameras. Then, data mining and machine learning techniques will be used to extract meaningful patterns from these rich streams of behavioral data, and analyses will be conducted to examine how these patterns in foundational behaviors map onto individual skills and interests in realistic, workplace-relevant activities. This research will also gather and analyze detailed feedback from industry partners to identify specific job types and sectors that would benefit from recruiting employees who are strong in visuospatial cognitive skills. In addition, this project will involve neurodiverse students and staff in many of its activities, in particular by involving graduate trainees supported by the NSF Research Traineeship in Neurodiversity Inspired Science & Engineering (NISE) and by leveraging the skills of neurodiverse interns at the Frist Center for Autism & Innovation at Vanderbilt University's School of Engineering.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2021 |
Tong, Frank |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Learning the Visual and Cognitive Bases of Lung Nodule Detection
Project Summary/Abstract
Lung cancer is the most frequent cause of cancer death in the United States among both men and women. If lung nodules can be detected with greater reliability at an early stage, significant improvements in survival rate would be achievable. Chest radiographs are among the most common diagnostic tools used in radiology, and can reveal unexpected incidences of lung cancer. However, even expert radiologists may fail to detect the presence of a subtle low-contrast pulmonary nodule against the high-contrast anatomical background of a chest X-ray, with estimated rates of missed detection of 20-30%. What are the perceptual mechanisms, cognitive mechanisms, and critical learning experiences that determine how well a person can perform this challenging task of lung nodule detection? The PI and Co-Investigator have formed a synergistic collaboration that brings together expertise in human vision, computational modeling and neuroscience (Dr. Tong) in concert with thoracic imaging and biomedical engineering (Dr. Donnelly) to address this longstanding problem with high clinical relevance. This project will develop a validated computational approach for generating a diverse set of visually realistic simulated nodules to achieve three goals: 1) to characterize radiologist performance on an image-by-image basis in an ecologically valid manner, 2) to develop a novel image-computable model that accounts for expert performance, and 3) to develop a novel learning-based paradigm to characterize the perceptual and cognitive mechanisms of nodule detection, initially in non-expert participants, with the long-term goal of developing a protocol to enhance clinical training. The project will incorporate sophisticated 2D image-based computational methods as well as data from 3D CT segmented nodules to generate a diverse set of simulated nodule examples, each placed in a unique chest X-ray. Success will be evaluated by the following outcome measures.
First, radiologists should find it very difficult to distinguish real nodules from simulated ones. Moreover, their performance accuracy at detecting/localizing simulated nodules should be predictive of their accuracy for real nodules. Second, if the simulated nodules suitably capture the variations of real nodule appearance, then non-expert participants who receive multiple sessions of training with simulated nodules should show improved performance for both simulated and real nodules. This learning-based paradigm will allow for characterization of the perceptual, cognitive, and learning-based factors that govern nodule detection performance. Third, development and refinement of this learning-based paradigm should have the potential to improve nodule detection performance in radiology residents. Finally, the behavioral data gathered from radiologists and other top-performing participants will be used to develop an image-computable model of nodule detection performance. As a whole, this project will lead to a more rigorous understanding of the perceptual and cognitive bases of lung nodule detection, and spur the development of a new learning-based protocol to enhance the training of radiology residents and other medical professionals.
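The outcome measures above hinge on quantifying detection accuracy for nodule-present versus nodule-absent images. As a hedged sketch (the project's actual analysis is not specified here), observer sensitivity in such tasks is commonly summarized with the signal-detection index d', computed from hit and false-alarm rates; the correction convention below is one common choice, not necessarily the project's:

```python
# Illustrative sketch: sensitivity index d' from a detection experiment's
# hit/miss and false-alarm/correct-rejection counts. The log-linear
# (add 0.5 to each cell) correction avoids infinite z-scores when a
# rate is exactly 0 or 1; this convention is an assumption, not taken
# from the project description.

from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), log-linear corrected."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

Comparing d' for real versus simulated nodules, per observer, is one direct way to test whether simulated-nodule performance predicts real-nodule performance.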
|
1 |