1974 — 1976 |
Berger, Toby |
Channels With Compound Error Mechanisms |
1 |
1977 — 1979 |
Berger, Toby |
Data Compression Studies |
1 |
1980 — 1983 |
Berger, Toby |
Studies in Multiterminal Communication, Information and Decision Theory |
1 |
1983 — 1987 |
Berger, Toby |
Information Theory and Random Fields |
1 |
1986 — 1990 |
Berger, Toby |
Multisite Information Theory |
1 |
1989 — 1993 |
Berger, Toby |
Information Fields and Communication Topologies
Classical information theory is concerned primarily with random sequences. Recently, researchers have directed attention to random fields over rectangular grids in two and higher dimensions. This project involves a study of information fields over other types of regular structures, in particular over binary trees. Coding theorems for tree fields are likely to find application in data structure theory in computer science and in genetics. Another part of the project is a study of some aspects of communication topologies. One aspect is the design of communication networks using hierarchical techniques for optimally agglomerating individual nodes via concentrators, and for interconnecting those concentrators robustly so that the network can sustain the failure of any single link. Another aspect concerns the optimal embedding of a graph into a rectangular grid, with applications to the design of efficient VLSI planar and multilayer layouts, such as switched processor networks or programmable gate arrays.
|
1 |
1990 — 1997 |
Berger, Toby; Kiefer, Nicholas [⬀] |
Coordination of Distributed Information and Decisions
Recognition of the importance of multiagent coordination has mounted over recent years, to the degree that it has now assumed a genuine sense of urgency. In the engineering literature, attempts have been made to address this issue in areas as varied as multiuser information theory, the theory of distributed estimation and detection, and decentralized control. Economists have also had a long-standing concern with these issues. Coordination of distributed means, desires and information using market mechanisms is a central area of economic research. Computer scientists working on distributed computing also have begun to address coordination issues over the past decade. A glance at any popular science magazine will reveal the increasingly distributed nature of information in today's world and the timeliness of addressing this trend intelligently. In particular, it is of considerable importance to combine ideas from such diverse areas as engineering, economics and computational sciences, and fashion them into a coherent theory of coordination. The proposed work on multiagent coordination theory can be roughly classified as (i) establishing fundamental bounds on the costs of enabling coordination, (ii) understanding the optimal mechanisms of coordination, and (iii) investigating aggregation methods for situations where large numbers of agents are involved in the coordination task.
|
1 |
1993 — 1996 |
Berger, Toby |
Lossy Data Compression
This research addresses practical and theoretical issues encountered in the lossy compression of information. The problem of successive refinement, or progressive transmission, concerns efficient approaches to incremental specification of data produced by an information source in order to render it in more detail at each of several encoding stages. A prime theoretical issue is the determination of conditions under which this can be done in such a way that the total data rate required is no greater than that needed by someone tasked with producing only the final rendering in the sequence, without the requirement of generating the intermediate versions. Practical algorithms inspired by theoretical advances in universal lossless data compression, especially the Lempel-Ziv and arithmetic coding formalisms, have profoundly affected the way text and data files are compressed for transmission and storage. Analogous theoretical issues and practical techniques are sought for universal lossy data compression based on searching the imprecisely described past for an approximate rather than an exact match of the next segment of the yet-to-be-encoded data. The results should significantly impact the way actual data sources, especially images and video, get encoded in practice.
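The abstract's core idea — replacing the exact-match search of lossless Lempel-Ziv coding with an approximate-match search under a distortion constraint — can be illustrated with a minimal sketch. This is not the project's algorithm; the function name, the toy data, and the per-symbol absolute-error criterion are all illustrative assumptions.

```python
def approx_match(window, segment, max_dist):
    """Find the longest prefix of `segment` that approximately matches a
    substring of `window`, allowing per-symbol absolute error <= max_dist.
    Returns (offset, length) of the best match found (length may be 0)."""
    best_off, best_len = 0, 0
    for off in range(len(window)):
        length = 0
        while (off + length < len(window)
               and length < len(segment)
               and abs(window[off + length] - segment[length]) <= max_dist):
            length += 1
        if length > best_len:
            best_off, best_len = off, length
    return best_off, best_len

# With max_dist = 0 this reduces to the exact-match search of LZ77-style
# coding; max_dist > 0 trades fidelity for longer matches (fewer phrases).
past = [10, 12, 11, 30, 31, 29]   # previously encoded (and distorted) data
nxt  = [11, 11, 30, 30]           # next segment awaiting encoding
print(approx_match(past, nxt, max_dist=1))   # → (1, 4)
```

With a distortion budget of 1 the whole four-symbol segment is covered by a single match; with a budget of 0 only a one-symbol exact match exists, showing how tolerating imprecision lengthens matches.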
|
1 |
2000 — 2003 |
Veeravalli, Venugopal; Berger, Toby |
Design Principles For Wideband Wireless Communications
Abstract
Broadband wireless communications has been identified as a national imperative; it is deemed essential for maintaining the country's premier position in information technology and accelerating the advance of the information age. This research is aimed at enhancing the performance of wideband wireless multiaccess systems by optimizing tradeoffs between coding and spreading, capitalizing on advantages afforded by spatial diversity, and developing techniques for accommodating multirate users.
A goal of this research is to establish, through a careful modeling of time dispersion and angular dispersion effects in wideband channels, that signals spread continuously over the available time interval and frequency band can send data with high reliability. When spatial macrodiversity also is available, the investigators contend that signals should be coded over spatial and spectral dimensions and their energy spread "continuously" over time, frequency and space.
Linear signal space separation schemes have received considerable attention as approaches for providing good tradeoffs between throughput and system complexity in single-cell multiuser environments. The efficacy of linear signal space separation schemes diminishes substantially, however, when one generalizes to multiple-cell environments. The goal in such realistic multicell interference channels is to maximize a certain extension of classical spectral efficiency herein called area-spectral efficiency. This research investigates the validity of the contention that signals widely dispersed in time, frequency and space (TFS-wide signals) are nearly optimum in such environments and do not require the use of linear signal separation at the receiver.
The investigators study various techniques for creating TFS-wide signals in the wideband wireless environment, with an emphasis on multicarrier signaling approaches. Provided the forward channel does not fade quickly relative to the capacity of the channel available for feeding back information from the receiver(s) to the transmitter(s), DMT (discrete multitone) with dynamic bit loading should prove to be a viable data transmission technique. Combined with spreading that enables multiple access, DMT can be made to provide both user separation and Shannon-optimal power distribution relative to the short-term spectrum of the wideband channel with fading and interference seen by each user. For cases where dynamic bit loading proves infeasible, OFDM (i.e., DMT with an equal number of bits in each subchannel) is being explored as a candidate modulation scheme for broadband wireless signaling.
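The dynamic bit loading described above — assigning more bits to strong subchannels and fewer (or none) to faded ones — is commonly sketched with the SNR-gap approximation. The gap value and per-tone SNRs below are illustrative assumptions, not figures from the award.

```python
import math

def bit_load(subchannel_snrs, gamma_db=9.8):
    """Per-subchannel bit allocation under the SNR-gap approximation:
    b_i = floor(log2(1 + SNR_i / Gamma)), where gamma_db is the gap
    (coding/margin) in dB. Values here are illustrative only."""
    gamma = 10 ** (gamma_db / 10)
    return [int(math.log2(1 + snr / gamma)) for snr in subchannel_snrs]

# Strong subchannels carry many bits, faded ones few or none -- the
# essence of DMT with dynamic bit loading. OFDM, as the abstract notes,
# would instead assign the same number of bits to every subchannel.
snrs = [1000.0, 100.0, 10.0, 1.0]   # hypothetical per-tone SNRs
print(bit_load(snrs))               # → [6, 3, 1, 0]
```

The contrast between the two outputs of this allocation (unequal per-tone rates) and a flat OFDM allocation is exactly the feasibility question raised at the end of the abstract.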
|
1 |
2003 — 2006 |
Wicker, Stephen (co-PI) [⬀]; Tong, Lang (co-PI) [⬀]; Servetto, Sergio [⬀]; Berger, Toby |
Sensors: the Reachback Channel in Wireless Sensor Networks
Abstract (Award 0330059; Sergio Servetto, Cornell University)
The most pressing need after a major urban disaster (earthquakes, fires, accidental or induced infrastructure collapse) is for information needed to deal with the situation. Such information includes, for example, the presence of toxic chemicals, the location and condition of survivors trapped by debris, or stress forces acting on an unstable damaged structure that may lead to further collapse. A similar need arises when the goal is to prevent such disasters: in this case the information needed includes, for example, the time evolution of the per-unit-area concentration of pollutants/toxic agents, of the temperatures in a forest, or of damage to the ozone layer.
In all the examples in which we envision the use of sensors to measure and then act upon the state of some physical process, there are two well-defined stages. In an initial stage there is a distributed signal acquisition problem, in which each sensor gets to observe a spatially localized function of the process. Each one of these sensors will have autonomous computation, communication, and power generation capabilities. In a second stage, after processing the raw sensor measurements, some representation of all the data collected needs to be relayed to a collection point, where this data is to be analyzed and acted upon. (These actions could consist of, for example, directing rescue efforts to appropriate places, evacuating unstable structures, or directing fire fighters to appropriate forest locations at the onset of a fire.) These collection points can take the form of fixed and mobile receivers placed outside of the sensor coverage (and thus out of the way of potential harm). Collection points may be permanently dedicated facilities, as with monitoring points for the critical infrastructure. They may also be mobile, as with aircraft overflying the site of a disaster. This research program focuses on the engineering and, primarily, on the information-theoretic issues of the *sensor reachback* problem, i.e., the problem of establishing reliable communication between the sensor array and the collection point.
|
1 |
2011 — 2014 |
Berger, Toby; Wilson, Stephen [⬀] |
Cif: Small: Efficient Satellite Relaying @ University of Virginia Main Campus
Data networking via satellite relays remains an important means of linking globally distributed network terminals for both commercial and governmental applications, especially in remote regions. Modern applications demand both spectrum and power efficiency in the network. The archetypal model in such networks is one in which two terminals wish to exchange data via a single satellite transponder. Relative to traditional time-sharing or frequency-sharing for the bidirectional communication paths, information theory reveals that spectrum efficiency gains of up to 100% can be obtained for a given set of link power resources. These gains are possible when non-orthogonal transmission methods are adopted and the decoders exploit side-knowledge of previously transmitted information.
The project codifies various protocols appropriate to this two-terminal data exchange model, including amplify-forward, as well as protocols that involve satellite decoding/re-encoding. The possible gains depend on link resources as well as the desired bidirectional rate targets. Existing research for this problem presumes perfect synchronization and side-information at both terminals, but practical issues of large round-trip delay, carrier phase/frequency synchronization, and symbol synchronization are important obstacles to achieving the promise of information theory. The investigators therefore develop realistic synchronization protocol designs that approach the ideal information-theoretic limits. In addition, the project studies a new decode-and-forward relaying protocol based on nested LDPC coding on downlinks that is flexible in terms of rate asymmetry.
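The "up to 100%" spectral-efficiency gain cited above can be seen in an idealized symmetric-link calculation: four-phase time sharing gives each direction a quarter of the channel uses, while a two-phase exchange (both terminals transmit together, the satellite broadcasts, and each terminal cancels its own known signal — the "side-knowledge" in the abstract) gives each direction half. The SNR value and the assumption of ideal decoding in the two-phase scheme are illustrative, not claims from the award.

```python
import math

def c(snr):
    """AWGN capacity in bits per channel use."""
    return math.log2(1 + snr)

snr = 10.0   # hypothetical symmetric link SNR on every hop

# Time sharing: four orthogonal phases (A->sat, sat->B, B->sat, sat->A),
# so each direction gets 1/4 of the channel uses.
rate_ts = 2 * 0.25 * c(snr)   # sum rate over both directions

# Two-phase exchange under idealized decoding: uplinks share one phase,
# a single network-coded downlink broadcast serves both terminals.
rate_2p = 2 * 0.5 * c(snr)    # sum rate over both directions

print(rate_ts, rate_2p, rate_2p / rate_ts)   # ratio is exactly 2.0
```

Under these idealizations the two-phase scheme doubles the sum rate — the 100% spectrum-efficiency ceiling; real non-orthogonal schemes approach rather than attain it.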
|
0.957 |
2012 — 2017 |
Levy, William (co-PI) [⬀]; Berger, Toby |
Cif: Medium: Energy-Efficient Encoding of Information @ University of Virginia Main Campus
The wireless revolution in communication and computing has imbued electrical engineers with a steadily deepening appreciation for the importance of energy-efficient design. On the other hand, energy efficiency has for eons been a paramount consideration of living systems.
Statistical thermodynamics says that kT ln 2 joules is a lower bound on the energy that must be expended either to encode/transmit/decode or to write/read/erase one bit of information in an environment at temperature T kelvins, k being Boltzmann's constant. Biological encoding and decoding that perform close to this thermodynamic limit use continuous-time differential pulse position modulation (CTDPPM), arguably the most energy-efficient way to encode and transmit information. The co-PIs are applying their interdisciplinary expertise in neuroscience and information theory to address research topics such as: (i) performance trade-offs in their bits/joule-maximizing exact solution of a 1915 model by Schrödinger now known to be applicable to thresholding neurons; (ii) extension of the Schrödinger-style optimization to the short-term dynamics of bits/joule-maximizing CTDPPM systems, since the present solution provides statistically optimum behavior only in the long term; (iii) steps toward extension to networks containing multi-compartment elements, emphasizing hierarchical architectures with feedback governing the short-term distribution of energy resources; and (iv) intriguing connections with the burgeoning research discipline of network coding.
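The kT ln 2 bound above is easy to evaluate numerically; at room temperature it works out to roughly 3 zeptojoules per bit. The choice of 300 K is an illustrative assumption.

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K (exact in the 2019 SI)
T = 300.0          # assumed room temperature, kelvins

e_bit = k * T * math.log(2)   # minimum joules per bit at temperature T
print(e_bit)                  # ~2.87e-21 J, about 3 zeptojoules

# Equivalently, the thermodynamic ceiling on energy efficiency
# that the project's bits/joule optimizations are measured against:
bits_per_joule = 1 / e_bit    # ~3.5e20 bits per joule at 300 K
print(bits_per_joule)
```

The enormous gap between this ceiling and the energy per bit of practical electronics is what makes near-limit biological signaling, such as CTDPPM, striking.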
|
0.957 |