2005 — 2006
Zhang, Zhao
N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information.
SGER: Fast and Scalable Simulation For Multicore and Multithreaded Processors by Using Commodity FPGA Boards
Intellectual Merit
We propose an exploratory project to build a fast and scalable architectural simulation platform using clusters of commodity FPGA boards. The platform is inherently parallel: an FPGA board simulates a processor core, a group of FPGA boards simulates a multicore processor chip, and a cluster of such groups simulates a full computing cluster. Careful analysis using performance data from Xilinx FPGA boards shows that the slowdown will be less than two orders of magnitude, which makes full-scale evaluation feasible.
By comparison, the slowdown of a software simulator is at least five orders of magnitude. The system is scalable in that one can increase the simulated system size by adding more FPGA boards. The platform is also affordable, flexible, and easy to replicate, and can be widely used by industrial and academic researchers.
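The feasibility argument above rests on simple arithmetic: at a fixed slowdown factor, the wall-clock time needed to simulate a given amount of target-machine execution scales linearly. The sketch below illustrates the two-versus-five orders-of-magnitude comparison; the one-minute workload is a hypothetical example chosen for illustration, not a figure from the proposal.

```python
# Back-of-the-envelope comparison of simulation turnaround times.
# The workload size is hypothetical; the slowdown factors are the
# orders-of-magnitude estimates quoted in the abstract.

TARGET_SECONDS = 60  # one minute of simulated target-machine execution (assumed)

def wall_clock_hours(slowdown):
    """Wall-clock hours needed to simulate TARGET_SECONDS at a given slowdown."""
    return TARGET_SECONDS * slowdown / 3600

fpga_hours = wall_clock_hours(10**2)  # FPGA platform: ~2 orders of magnitude
sw_hours = wall_clock_hours(10**5)    # software simulator: ~5 orders of magnitude

print(f"FPGA platform: {fpga_hours:.1f} hours")     # ~1.7 hours
print(f"Software simulator: {sw_hours:.1f} hours")  # ~1666.7 hours (~69 days)
```

At these rates, a workload that an FPGA cluster finishes over lunch would occupy a software simulator for more than two months, which is the sense in which full-scale evaluation becomes feasible only on the proposed platform.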
Broader Impact
High-performance computing applications have a deep impact on our daily lives, and their computing demands will never be fully satisfied. However, without a fast simulation platform, multicore and multithreaded processors cannot be evaluated with large-scale parallel programs until they have been manufactured. The proposed research removes this limit. It will prompt early consideration and evaluation of system-level methods for improving processor efficiency, providing important input to computer architects.
Thus, it may help reduce expensive trial and error that would otherwise delay the improvement of high-performance computing systems. The proposed experimental support will benefit researchers in both the computer architecture and high-performance computing communities, and help them see each other's needs before the real machines are available.
2006 — 2009
Zhang, Zhao
Collaborative Research: Memory Access Throttling For Highly Multi-Threaded Processors
Multicore and multithreaded processors are replacing single-threaded processors in high-performance computer clusters, servers, workstations and high-end personal computers. The memory demand of these processors increases proportionally with the degree of multithreading. The proposed research aims to alleviate memory bandwidth pressure through new methods of memory access scheduling and by adjusting multithreaded execution in accordance with that pressure. A set of interdependent and complementary approaches is proposed: urgency and confidence levels of memory accesses guide memory access scheduling; a memory load index controls the progress of multithreaded execution; memory accesses from multiple threads are smoothed out to avoid bandwidth congestion; and several prediction-based techniques reduce cache miss penalties in deep cache hierarchies. Altogether, both the processors' computing power and the memory bandwidth will be better utilized, and higher system throughput can be achieved. The proposed research will help high-performance computing applications benefit from the emergence of highly multithreaded processors by alleviating the crucial bottleneck of off-chip memory bandwidth. It will also deepen the understanding of complex interactions between highly multithreaded processors and their memory subsystems, which will complement education in computer architecture and parallel computing.
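To make the scheduling idea concrete, the sketch below models "urgency and confidence levels" as simple numeric scores and issues requests in priority order. This is an illustrative toy only: the abstract does not specify the actual policy, so the scoring function and the example values are assumptions.

```python
# Toy sketch of urgency/confidence-guided memory access scheduling.
# The priority function (urgency * confidence) is a hypothetical
# illustration, not the policy proposed in the project.

class MemRequest:
    def __init__(self, addr, urgency, confidence):
        self.addr = addr
        self.urgency = urgency        # e.g., how soon a core stalls without the data
        self.confidence = confidence  # e.g., how likely a prefetch is to be useful

    @property
    def priority(self):
        return self.urgency * self.confidence

def schedule(requests):
    """Issue requests in descending priority order."""
    return sorted(requests, key=lambda r: r.priority, reverse=True)

reqs = [MemRequest(0x10, urgency=1, confidence=0.9),
        MemRequest(0x20, urgency=5, confidence=0.5),
        MemRequest(0x30, urgency=3, confidence=1.0)]
order = [hex(r.addr) for r in schedule(reqs)]
print(order)  # ['0x30', '0x20', '0x10']
```

A demand miss that is about to stall a core (high urgency, full confidence) is served before a speculative prefetch, which is the qualitative behavior the abstract describes.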
2007 — 2008
Zhang, Zhao
Collaborative Research: CSR-SMA: Thermal Modeling, Simulation and Management of Memory Subsystems For Multi-Core Systems
With the demand for high memory performance from multi-core processors, the memory subsystem has become a new thermal concern after the processor and the hard drive. Smooth and efficient memory thermal management schemes must be developed to meet this challenge, yet the public domain lacks memory thermal models and simulation tools for research and education. This project proposes thermal models and thermal simulators for DRAM memory subsystems as well as efficient DTM (dynamic thermal management) methods. The investigators develop a simple and accurate dynamic thermal model based on fully buffered DIMMs, and an accurate, fast two-level simulator that estimates the thermal behavior of a memory subsystem. They also study several new, system-level DTM schemes that coordinate DRAM thermal management with processor performance throttling. The first method, Adaptive Core Gating, adjusts the number of active cores according to the memory thermal status. The second, Coordinated DVFS (dynamic voltage and frequency scaling), proactively scales down the processor frequency and voltage upon a memory thermal emergency, reducing both DRAM heat generation and processor power consumption. Furthermore, thermal-aware OS job scheduling smooths memory traffic and DRAM heat generation by appropriately mixing jobs with different memory demands. The thermal model is validated against execution on hardware platforms, and the proposed methods are evaluated on real systems.
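The coordination idea behind Adaptive Core Gating and Coordinated DVFS can be sketched as a simple control loop that maps the DRAM thermal status to a throttling action. The thresholds and action names below are hypothetical placeholders for illustration; they are not values from the project.

```python
# Illustrative DTM decision sketch: pick a processor throttling action
# from the DRAM temperature. Thresholds (80/85 C) are assumed, not
# taken from the proposal.

def dtm_action(dram_temp_c, warm=80.0, hot=85.0):
    """Map DRAM temperature to a throttling decision."""
    if dram_temp_c >= hot:
        return "coordinated_dvfs"      # scale down processor frequency/voltage
    if dram_temp_c >= warm:
        return "adaptive_core_gating"  # reduce the number of active cores
    return "run_all_cores"             # no thermal pressure

for t in (70.0, 82.0, 90.0):
    print(t, dtm_action(t))
```

The point of the coordination is visible in the ordering: core gating trims memory traffic first, and the stronger DVFS response is reserved for a genuine thermal emergency.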
2008 — 2012
Zhang, Zhao
Collaborative Research: CSR-PSCE, SM: Memory Thermal Management For Multi-Core Systems
With the increasing demand on memory performance by multi-core processors, the memory subsystem has become a new thermal concern along with the processor and the hard disk drive. To address this emerging issue, this project develops several new, system-level Dynamic Thermal Management (DTM) schemes that coordinate DRAM thermal management with processor performance throttling, such as dynamically adjusting the number of active processor cores or scaling the processor's frequency and voltage level based on the memory thermal status. The project also studies coordinated thermal management schemes that consider the thermal requirements of both the processor and the memory subsystem. Thermal-aware OS job scheduling is further considered to smooth memory traffic and DRAM heat generation over time by appropriately mixing jobs with different memory demands. In addition, thermal-aware page allocation is proposed to avoid unbalanced overheating of some memory chips by considering the location of each chip and the memory access demand of each application. These schemes will first be evaluated using simulation, then implemented in OS kernels and evaluated on real systems. To support the memory thermal studies, a simple and accurate thermal model is proposed to estimate the dynamic temperature changes of DRAM memory subsystems, and a two-level simulator will be developed to emulate the thermal behavior of memory subsystems. Successfully addressing the thermal concern of memory subsystems will not only ensure safe system operation but also improve overall system performance, reduce system manufacturing cost, and improve system power efficiency.
2008 — 2012
Zhang, Zhao
Collaborative Research: CSR-PSCE, TM: Effective Resource Sharing and Coordination Inside Multicore Processors For High Throughput Computing
One of the main challenges in multi-core processor resource management is that existing operating systems (whether conventional single-core OSes or SMP OSes) cannot effectively handle the new complexities of multi-core processors. To address this challenge, the collaborators will conduct three closely related projects. (1) A hybrid design and implementation for OS-based cache partitioning will provide efficient software management of shared caches with minimal hardware complexity, and will clearly define the hardware/software interface of shared cache management. (2) The collaborators will design and implement scheduling algorithms in OS kernels to effectively allocate CPU, cache, and memory bandwidth resources to multiprogrammed jobs on multi-core processors. (3) A data-object locality-aware cache partitioning design and implementation will distinguish the locality strengths of objects and make effective cache allocation decisions. The intellectual challenges of this project are threefold: (1) hybrid system design involves complex interactions between hardware and the underlying operating system, and demands both an insightful understanding of existing system structures and innovation to enhance the architecture and the OS kernels; (2) OS-based scheduling on multi-core processors is a fundamental and complex problem in systems research; (3) system implementation of the proposed scheduling and object-coordination algorithms demands creative ideas for their seamless integration into the kernels. The broader impact of this project is expected to be significant. Solutions that deliver significant performance improvement on multi-core processors are urgently needed in many application areas, and the research training provided to both undergraduate and graduate students will address the shortage of strong systems professionals in the IT and computer industries.
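OS-based cache partitioning is commonly realized in software through page coloring: physical pages that map to disjoint groups of cache sets are handed to different applications. The sketch below shows that mechanism under an assumed cache geometry; the sizes and the two-application split are hypothetical illustrations, not parameters from the project.

```python
# Sketch of OS-based shared-cache partitioning via page coloring.
# Cache geometry (2 MiB, 8-way, 64 B lines, 4 KiB pages) is assumed
# purely for illustration.

PAGE_SIZE = 4096
CACHE_SIZE = 2 * 1024 * 1024  # 2 MiB shared last-level cache (assumed)
WAYS = 8
LINE = 64

sets = CACHE_SIZE // (WAYS * LINE)   # 4096 sets
colors = (sets * LINE) // PAGE_SIZE  # distinct page colors: 64

def color_of(phys_page_number):
    """The cache color (set group) a physical page maps to."""
    return phys_page_number % colors

def pages_for_app(free_pages, app_colors):
    """Allocate only pages whose color belongs to this app's partition."""
    return [p for p in free_pages if color_of(p) in app_colors]

# Give a hypothetical app the first two colors (1/32 of the cache):
print(pages_for_app(list(range(128)), {0, 1}))  # [0, 1, 64, 65]
```

Because the OS controls which physical pages an application receives, the shared cache is partitioned with no extra hardware, which is why page coloring pairs naturally with the minimal-hardware hybrid design the abstract describes.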
2008 — 2011
Shelley, Mack (co-PI); Rover, Diane; Zhang, Zhao; Nguyen, Tien
Improving Embedded System Education With Software Engineering Methodologies
Engineering - Electrical (55)
The project is exploring a unique education paradigm that systematically integrates software engineering practice into a series of embedded systems courses. It addresses the need to educate students in using software engineering methods in complex embedded software projects, a topic that is not addressed adequately in either software engineering or embedded systems courses. The project involves introductory-, intermediate- and advanced-level embedded systems courses. The most important software engineering methods appropriate to each level are introduced in the corresponding course, and software engineering practices are integrated into the course laboratories and projects. Commonly used software engineering tools will be introduced along with embedded systems development environments. The project will improve student learning and teaching effectiveness in both areas. Furthermore, a short course abstracted from the materials is benefiting other engineering disciplines that develop domain-specific embedded systems. The teaching materials are being designed so that they can be reorganized to serve students and engineers from other disciplines who need training in software engineering. The evaluation effort, under the guidance of an expert from the campus's institute for studies of education, is using validated sample survey instruments, institutional data on achievement and growth, focus groups, and individual interviews to monitor the project's effectiveness, impact, and unexpected outcomes. The investigators are disseminating their methods and results through their website, journal and conference venues, and specialty outlets, including the Shared Software Infrastructure Program and the Field-tested Learning Assessment Guide. Broader impacts include the dissemination of the materials, including the assessment tools, and outreach to other engineering disciplines.
2011 — 2016
Zhang, Zhao
CSR: Small: Software Cache Memory Management With Reconfigurable Hardware Emulators
As multi-core systems become the major computing platform, efficient cache management is even more crucial to system performance and power efficiency than before. An effective approach is to use software cache management (SCM) with hardware support to manage the shared last-level cache, because sophisticated SCM can adapt to the complex cache-usage scenarios of multi-core processors.
A critical and unsolved issue in SCM is the lack of rich, relevant information for software to reason about cache performance under different configurations. The project investigates the use of lightweight, reconfigurable hardware cache emulators (RCEs) to extend the capability of SCM. With this new hardware support, sophisticated SCM algorithms that constantly monitor cache usage through RCEs are developed. These SCM algorithms aim to improve cache power efficiency by turning off unused cache portions, to optimize cache partitioning for multi-core processors, and to improve software-controlled cache mapping for better cache utilization.
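One concrete form the "rich information" from an RCE could take is a per-core miss-rate curve (misses as a function of allocated cache ways), which an SCM policy can then search to pick a partition. The sketch below is illustrative only: the miss counts are made up, and the exhaustive search over way splits is a generic technique, not necessarily the algorithm the project develops.

```python
# Illustrative sketch: choose a split of W cache ways between two cores
# by minimizing combined misses over per-core miss-rate curves (the kind
# of data an RCE might provide). The curves below are hypothetical.

def best_split(mrc_a, mrc_b, ways):
    """Pick the way split minimizing total misses.
    mrc_x[w] = misses for core x when granted w ways."""
    best = min(range(ways + 1), key=lambda w: mrc_a[w] + mrc_b[ways - w])
    return best, ways - best

# Hypothetical miss counts for 0..8 ways:
mrc_a = [100, 60, 40, 30, 25, 22, 20, 19, 18]  # cache-sensitive core
mrc_b = [90, 85, 82, 80, 79, 78, 77, 77, 77]   # cache-insensitive core
print(best_split(mrc_a, mrc_b, 8))  # (5, 3)
```

The cache-sensitive core receives the larger share, while the insensitive one, whose curve is nearly flat, is not starved of the ways it cannot use anyway. That asymmetry is exactly what a static even split would miss.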
The research will improve system performance, power efficiency, and performance predictability for laptop, desktop and server computers that use multi-core processors with large shared caches. It may influence industry processor designs to include lightweight RCEs, as well as enrich SCM algorithms. It will also introduce new educational materials for students to study multicore cache management through hands-on experiments.
2015 — 2016
Zhang, Zhao
SHF: Medium: Collaborative Research: Architectural and System Support For Building Versatile Memory Systems
Computer memory design is moving into a new era with emerging NVM (non-volatile memory) technologies for increasingly data-intensive applications. This project will investigate a novel memory architecture, called a versatile memory system, that hosts heterogeneous memory technologies to provide a powerful in-memory computing engine, addressing the increasing demands for memory performance, energy efficiency, and reliability from data-intensive applications. It is based on a holistic approach, from low-layer hardware design to high-layer OS management, to address the challenges of complexity and efficiency arising from the integration of heterogeneous NVM technologies. It allows the memory system to be self-adaptive to meet varying application demands on performance, energy efficiency, and reliability. The project will study the framework, critical hardware support, and feasible and meaningful functionalities of the versatile memory system, aiming at improving the performance, energy efficiency, reliability, and manageability of computing systems from mobile to server platforms. It is expected that the outcome of the project will have broader impact on the design of modern computer memory systems in both academia and industry.
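A common building block for self-adaptive heterogeneous memory is hot/cold page placement: frequently accessed pages go to the fast tier (DRAM) and the rest to the capacity tier (NVM). The sketch below shows that idea in its simplest form; the two-tier model, the access counts, and the capacity limit are hypothetical illustrations, not the project's actual mechanism.

```python
# Illustrative hot/cold placement for a DRAM + NVM system: rank pages
# by access count and fill the (assumed) DRAM capacity with the hottest.

def place_pages(access_counts, dram_capacity):
    """Assign the hottest pages to DRAM, the rest to NVM."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    dram = set(ranked[:dram_capacity])
    nvm = set(ranked[dram_capacity:])
    return dram, nvm

# Hypothetical per-page access counts from one monitoring interval:
counts = {"p0": 50, "p1": 3, "p2": 120, "p3": 8}
dram, nvm = place_pages(counts, dram_capacity=2)
print(sorted(dram))  # ['p0', 'p2']
print(sorted(nvm))   # ['p1', 'p3']
```

Rerunning the placement each monitoring interval is what makes the system "self-adaptive": as application demands shift, pages migrate between tiers without application changes.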
2016 — 2019
Zhang, Zhao
SHF: Medium: Collaborative Research: Architectural and System Support For Building Versatile Memory Systems @ University of Illinois At Chicago
Computer memory design is moving into a new era with emerging NVM (non-volatile memory) technologies for increasingly data-intensive applications. This project will investigate a novel memory architecture called versatile memory system that hosts heterogeneous memory technologies to provide a powerful in-memory computing engine, addressing the increasing demands for memory performance, energy efficiency, and reliability from data-intensive applications. It is based on a holistic approach, from low-layer hardware design to high-layer OS management, to address the challenges of complexity and efficiency arising from the integration of heterogeneous NVM technologies. It allows the memory system to be self-adaptive to meet varying application demands on performance, energy efficiency, and reliability. The project will study the framework, critical hardware support, and feasible and meaningful functionalities of the versatile memory system, aiming at improving the performance, energy efficiency, reliability, and manageability of computing systems from mobile to server platforms. It is expected that the outcome of the project will have broader impact on the design of modern computer memory systems both in academia and industry.
2016 — 2019
Somani, Arun; Zhang, Zhao
SHF: Small: Enhancing Memory System Dependability by Integrity Checking
Memory system dependability is an increasing concern as memory cell density and total capacity continue to grow. Recent field studies have shown that memory error rates are rising and that memory errors exhibit correlated patterns. Given these two trends, current memory error protection schemes are no longer sufficient for server computers. This project explores a unique error protection scheme called MemGuard, based on memory integrity checking, to enhance memory error protection for server computers and to provide a cost- and energy-efficient solution for personal and mobile computers. The research may significantly improve the dependability of computer systems without incurring high cost or energy overhead. The education and outreach activities will encourage minority and women students to get involved in the research, and will include interactions with middle and high school students and teachers.
The MemGuard scheme checks the consistency between memory reads and memory writes using hash-based signatures to detect memory errors. It can detect memory cell errors with a negligible rate of false negatives. Compared with SECDED (single error correction, double error detection) ECC and SDDC (single data device correction) schemes, it is much stronger in multi-bit error detection and incurs negligible cost and energy overhead. It does not correct errors immediately as the other two schemes do; instead, it may rely on OS checkpointing or program restarting for error recovery. The project will fully investigate the design of MemGuard, evaluate its strength under realistic DRAM error modes, extend it to multiprocessors and I/O-rich environments, develop a similar integrity-based scheme for protecting processor/memory communication, and combine MemGuard with existing error protection schemes. The project will also optimize the design and implementation of MemGuard's hash functions, combine MemGuard with selective error protection, and explore efficient checkpointing strategies for improved efficiency.
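The core read/write consistency idea can be sketched as follows: keep a signature over what was written and a signature over what is later read back, and flag an error when they disagree. This toy uses an order-sensitive SHA-256 digest purely for illustration; the real MemGuard design (incremental hashes over the access stream, interaction with checkpointing) is more sophisticated than this sketch.

```python
import hashlib

# Toy model of hash-based read/write consistency checking. The signature
# construction here is a hypothetical stand-in for MemGuard's actual
# incremental hashing scheme.

def signature(events):
    """Order-sensitive hash over a sequence of (address, value) pairs."""
    h = hashlib.sha256()
    for addr, value in events:
        h.update(addr.to_bytes(8, "little"))
        h.update(value.to_bytes(8, "little"))
    return h.hexdigest()

writes = [(0x1000, 7), (0x1008, 9)]
reads_ok = [(0x1000, 7), (0x1008, 9)]
reads_flipped = [(0x1000, 7), (0x1008, 8)]  # a bit flip corrupted one word

print(signature(writes) == signature(reads_ok))       # True: no error
print(signature(writes) == signature(reads_flipped))  # False: error detected
```

Note what the mismatch does and does not give you: it detects that *some* word in the region was corrupted (with multi-bit flips caught just as easily as single-bit ones), but it does not localize or correct the error, which is why the abstract pairs MemGuard with checkpointing or restart for recovery.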
2019 — 2022
Barth, William; Zhang, Zhao
Collaborative Research: Frameworks: Designing Next-Generation MPI Libraries For Emerging Dense GPU Systems @ University of Texas At Austin
The extremely high compute and communication capabilities offered by modern Graphics Processing Units (GPUs) and high-performance interconnects have led to the creation of High-Performance Computing (HPC) platforms with multiple GPUs and high-performance interconnects per node. Unfortunately, state-of-the-art production quality implementations of the popular Message Passing Interface (MPI) programming model do not have the appropriate support to deliver the best performance and scalability for applications on such dense GPU systems. These developments in High-End Computing (HEC) technologies and associated middleware issues lead to the following broad challenge: How can existing production quality MPI middleware be enhanced to take advantage of emerging networking technologies to deliver the best possible scale-up and scale-out for HPC and Deep Learning (DL) applications on emerging dense GPU systems? A synergistic and comprehensive research plan, involving computer scientists from The Ohio State University (OSU) and Ohio Supercomputer Center (OSC) and computational scientists from the Texas Advanced Computing Center (TACC), and San Diego Supercomputer Center (SDSC) and University of California San Diego (UCSD), is proposed to address the above broad challenges with innovative solutions. The proposed framework will be made available to collaborators and the broader scientific community to understand the impact of the proposed innovations on next-generation HPC and DL frameworks and applications in various science domains. Multiple graduate and undergraduate students will be trained under this project as future scientists and engineers in HPC. The proposed work will enable curriculum advancements via research in pedagogy for key courses in the new Data Science programs at OSU, SDSC and TACC. The established national-scale training and outreach programs at TACC, SDSC and OSC will be used to disseminate the results of this research to XSEDE users. 
Tutorials and workshops will be organized at PEARC, SC, and other conferences to share the research results and experience with the community. The project is aligned with the National Strategic Computing Initiative (NSCI) to advance US leadership in HPC, and with the recent US Government initiative to maintain leadership in Artificial Intelligence (AI).
The proposed innovations include: 1) designing high-performance, scalable point-to-point and collective communication operations that fully utilize multiple network adapters and advanced in-network computing features for GPU and CPU buffers within and across nodes; 2) designing novel datatype processing and unified memory management to improve application performance; 3) designing a CUDA-aware I/O subsystem to accelerate MPI I/O and checkpoint-restart for HPC and DL applications; 4) designing support for containerized environments to enable easy deployment of the proposed solutions on modern cloud environments; and 5) carrying out integrated development and evaluation to ensure proper integration of the proposed designs with the driving applications. The proposed designs will be integrated into the widely used MVAPICH2 library and made publicly available. The project team will work closely with internal and external collaborators to facilitate wide deployment and adoption of the released software. The proposed solutions target scale-up and scale-out of the driving science domains (molecular dynamics, lattice QCD, seismology, image classification, and fusion research) on emerging dense GPU platforms. The transformative impact of the proposed development effort is to achieve scalability, performance, and portability for HPC and DL frameworks and applications on emerging dense GPU platforms, leading to significant advancements in science and engineering.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2022
Zhang, Zhao; Huang, Lei
Collaborative Research: OAC Core: Small: Efficient and Policy-Driven Burst Buffer Sharing @ University of Texas At Austin
Modern scientific research relies heavily on supercomputers. Supercomputing applications, such as traditional numerical simulations (HPC), data-intensive applications (Big Data), and, most recently, deep learning (DL) applications, are increasingly run on supercomputers to obtain timely results and to explore new research methods that combine multiple application types. However, an I/O bottleneck in their design limits the potential performance of modern supercomputers. This project, bbThemis, addresses this problem by enabling efficient and policy-driven sharing of an intermediate storage layer known as a "burst buffer", so that more scientists and applications can leverage state-of-the-art storage techniques to significantly reduce their runtime and enhance the productivity of their research. This project will deliver substantial gains to almost every research area that uses HPC resources, leading to improved science and engineering methods and products in all fields. This research will have an immediate and significant impact on existing scientific applications and on deriving guidelines for next-generation HPC system design, deployment, and utilization. The project will also contribute to educational outcomes. In addition to students working directly on project goals, results developed in the project will be used in tutorial and training sessions at the Texas Advanced Computing Center's summer institute in deep learning and other major conferences, and in University of Illinois Urbana-Champaign student projects. The project is aligned with the National Strategic Computing Initiative (NSCI) to advance US leadership in HPC.
This project, bbThemis (https://github.com/bbThemis), leverages a suite of technologies, such as disassociation of I/O processing from control logic, time-sliced intra I/O node sharing, function interception for low overhead POSIX I/O, and metadata and data placement for optimal individual application performance. It is investigating how to best apply these technologies, by: 1) Identifying optimal burst buffer configurations for a suite of representative supercomputing applications; 2) Proposing, prototyping, and verifying different design options to address intra-node and inter-node I/O performance sharing; and 3) Designing and evaluating a set of sharing policies, such as fair sharing and priority sharing, with real applications and I/O traces. This project will dramatically increase the sharing capacity of existing burst buffers and enhance domain scientists' productivity at a large scale. It explores various sharing policies that permit efficient sharing of I/O resources and that meet the requirements of computing centers. The results will enable the provisioning of I/O resources, where users can request specific IOPS or bandwidth for a period of time. The prototype burst buffer sharing framework will immediately increase the capacity of existing supercomputers with enhanced I/O performance. The lessons learned will guide next-generation I/O system design for large scale systems. The general improvement of HPC, Big Data, and DL applications will also increase the coherence of the hardware and software used for data analytics computing and modeling and simulation.
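The "request specific IOPS for a period of time" provisioning model described above is commonly enforced with a token bucket per application. The sketch below shows that generic mechanism; the rate and burst parameters are illustrative, and this is not claimed to be bbThemis's actual policy engine.

```python
# Sketch of per-application IOPS provisioning with a token bucket,
# a standard rate-limiting mechanism. Parameters are hypothetical.

class TokenBucket:
    def __init__(self, rate_iops, burst):
        self.rate = rate_iops  # tokens refilled per second
        self.burst = burst     # bucket capacity (max burst size)
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        """Admit one I/O at time `now` if a token is available."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(rate_iops=2, burst=2)  # this app provisioned 2 IOPS
admitted = [tb.allow(t) for t in (0.0, 0.0, 0.0, 1.0)]
print(admitted)  # [True, True, False, True]
```

A burst of three simultaneous I/Os gets two through and defers the third, while the request arriving a second later is admitted from refilled tokens; running one bucket per application is one simple way to realize the fair-sharing and priority-sharing policies the project evaluates.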
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2024
Zhang, Zhao
Collaborative Research: OAC Core: ScaDL: New Approaches to Scaling Deep Learning For Science Applications On Supercomputers @ University of Texas At Austin
Today's deep learning (DL) revolution is enabled by efficient deep neural network (DNN) training methods that capture important patterns within large quantities of data in compact, easily usable DNN models. DL methods are applied routinely to tasks like natural language translation and image labeling, and, in science and engineering, to problems as diverse as drug design, environmental monitoring, and fusion energy. Yet as data sizes increase and DL methods grow in sophistication, the time required to train new models often emerges as a major challenge. The Scalable Deep Learning (ScaDL) project will address this challenge by making it possible to use specialized high-performance computing (HPC) systems to train bigger models more rapidly. Efficient use of the thousands of powerful processors in modern HPC systems for DNN training has previously been stymied by communication costs that grow rapidly with the number of processors used. ScaDL will overcome this obstacle by developing new DNN training methods that reduce communication requirements by performing additional computation, by validating the effectiveness of these new methods in a range of scientific applications that use DL in different ways, and by integrating the new methods into scalable DL software for use by domain scientists, computer scientists, and engineers supporting DL applications in HPC centers. By permitting the use of powerful HPC systems to train DNN models thousands of times faster than on a single computer, ScaDL will enable advances in many areas of science and engineering. The project will also contribute to educational outcomes by engaging PhD students in project goals, by using ScaDL tools in a new DL systems engineering class at the University of Chicago, and by enlisting participants in summer schools at the Texas Advanced Computing Center (TACC) and the University of Chicago, both of which target recruitment of students from underserved communities at the graduate, undergraduate, and high-school levels, to apply the tools to scientific problems. ScaDL's focus on science applications and education aligns the project with NSF's mission of promoting the progress of science.
The ScaDL project contributes to science in two ways. First, it explores new techniques for enhancing the speed and scalability of commonly used optimization methods without losing model performance, by: 1) exploiting scalable algorithms for second-order information approximation; 2) developing methods for adapting to different computer hardware by tuning computation and communication to maximize training speed; 3) exploring compression techniques to reduce communication overheads; 4) using well-known benchmark applications to evaluate the convergence of ScaDL; and 5) applying its new algorithms and systems to science applications. Second, it will release an open-source implementation of the proposed algorithms and system. The implementation will be available on a variety of hardware platforms and capable of choosing the ratio of computation and communication required to make efficient use of the computation and communication hardware on a particular HPC system. The resulting algorithms and system will help disseminate ScaDL research results to a wide spectrum of research domains and users, and promote the adoption of the new methods in practical settings.
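Of the communication-reduction directions listed above, gradient compression is the easiest to illustrate. The sketch below shows standard top-k sparsification: each worker sends only the k largest-magnitude gradient entries as (index, value) pairs. This is a well-known generic technique used here for illustration; it is not claimed to be the specific method ScaDL develops, and the gradient values are made up.

```python
# Illustrative top-k gradient compression: transmit only the k
# largest-magnitude entries as (index, value) pairs, reducing
# communication volume at the cost of a sparse update.

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries of a gradient vector."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return [(i, grad[i]) for i in sorted(idx)]

def decompress(pairs, n):
    """Rebuild a dense length-n vector, zero-filling the dropped entries."""
    out = [0.0] * n
    for i, v in pairs:
        out[i] = v
    return out

g = [0.1, -2.0, 0.05, 3.0, -0.2]   # hypothetical gradient
pairs = topk_compress(g, 2)
print(pairs)                 # [(1, -2.0), (3, 3.0)]
print(decompress(pairs, 5))  # [0.0, -2.0, 0.0, 3.0, 0.0]
```

With k much smaller than the model size, the bytes exchanged per step shrink proportionally, which is exactly the communication-for-computation trade the abstract describes; practical systems typically add error feedback so the dropped entries are not lost permanently.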
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021
Zhang, Zhao
R21. Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Quantitative Characterization of Neuronal Trans-SNARE Complexes Using DNA Origami @ University of Wisconsin-Madison
Project Summary/Abstract
A key step in neurotransmission is the fusion of the synaptic vesicle (SV) membrane with the neuronal plasma membrane (PM) to release neurotransmitters into the synaptic cleft, where they bind and activate postsynaptic receptors. A protein complex called SNARE is believed to play a central role, since its assembly can generate enough energy to drive fusion. The current hypothesis describing SNARE-mediated fusion is referred to as 'SNARE zippering': a v-SNARE protein on the SV binds to a t-SNARE protein heterodimer on the PM in a zipper-like fashion, forming a trans-SNARE complex (i.e., the v- and t-SNARE transmembrane domains are embedded in separate membranes); the released energy eventually overcomes the repulsive forces between the SV and PM and pulls the two membranes together, where trans-SNARE complexes transform into cis-SNARE complexes (i.e., the v- and t-SNAREs reside on a single membrane). At present, most of what is known about neuronal SNARE structure and dynamics stems from analysis of cis-SNARE complexes, but the 'real hero', the trans-SNARE complex that provides the driving force for membrane fusion, remains elusive. A main technical challenge is to capture the partially assembled trans-SNARE complexes that form during the fast process of exocytosis (<1 ms). In this proposal, we offer a solution by combining the nanoscale programmability of DNA nanotechnology with the ability of nanodiscs (NDs) to restrict fusion pore expansion. A V-shaped DNA origami structure hosts two binding moieties: one comprises v-SNAREs reconstituted in NDs, while the other comprises NDs with the cognate t-SNAREs.
Our platform significantly improves on previous methods in revealing the true behavior of neuronal trans-SNARE assembly by studying: (1) full-length SNARE proteins rather than truncations or mutations, since the disruption of zippering arises solely from distance control; and (2) SNAREs in lipid bilayers, which represent their native environment. In Specific Aim 1, a set of partially assembled neuronal trans-SNARE complexes residing in bilayers is produced, mimicking the progressive quaternary core of the synaptic fusion machinery. Various clostridial neurotoxins (CNTs) are then added to this complex set, so that the relation between SNARE assembly completeness and the CNTs' proteolytic activity can be systematically examined. In Specific Aim 2, a modified V-origami functions as a force spectrometer to investigate the energy landscape of neuronal trans-SNARE assembly in the context of bilayers. Importantly, we will examine the effect of disease-associated SNARE mutations on trans-complex assembly energy, which would help elucidate their impact on psychiatric disorders. In brief, we strive to build a novel and powerful platform to revisit one of the central yet elusive machines in neuroscience: the neuronal trans-SNARE complex. Important knowledge concerning widely used CNTs and disease-relevant mutants is expected to be acquired in this study, potentially benefiting both basic research and clinical practice. Such DNA-based technology may also be used to study other membrane proteins in vitro.