Several applications in wireless networking and broadcasting involve receivers that may already know part of the messages being transmitted. Such prior knowledge, or ‘side information’, may be obtained by overhearing previous wireless transmissions, by pre-downloading partial content to a local cache at the receivers, or through parallel/orthogonal communication channels. Advanced coding techniques, known as ‘index coding’, can exploit receiver side information to achieve higher transmission rates in these communication scenarios than is naively possible. The available results and literature on index coding are rich, but they view the problem from a network-layer perspective in which the communication links are assumed to be noise-free; tools from algebraic network coding, graph theory, and matroid theory have traditionally been used to approach this network-layer problem. Practical channels, however, have inherent uncertainties, such as noise and signal fading, that a coding scheme must additionally contend with. Existing communication schemes for these scenarios combine a network-layer index code with a separate PHY-layer channel code, which is information-theoretically suboptimal. The goal of this tutorial is to show how algebraic techniques can be used to construct efficient coding schemes that exploit receiver side information at the PHY layer. Such techniques allow us to use side information not only to combat channel noise but also to increase transmission rates. We deliver a tutorial introduction to the powerful mathematical concepts of rings and lattices. We then discuss the coding-theoretic aspects of designing index codes for a noisy channel and show that tools from ring and lattice theory can be used to construct modulation and coding schemes that transform receiver side information into additional coding gains.
The emphasis of the tutorial will be on constellations for wireless channels that have beautiful algebraic and geometric properties. No prior mathematical background is assumed.
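As a minimal illustration of the basic idea (a toy sketch, not a construction from the tutorial): with two receivers, each wanting the message the other already holds as side information, a single XOR broadcast replaces two separate transmissions.

```python
# Toy index-coding example: receiver 1 wants m1 and knows m2;
# receiver 2 wants m2 and knows m1. The transmitter broadcasts
# m1 XOR m2; each receiver XORs the broadcast with its side
# information to recover the message it wants. One transmission
# serves both receivers instead of two.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"\x0b\xad"  # wanted by receiver 1, known to receiver 2
m2 = b"\xca\xfe"  # wanted by receiver 2, known to receiver 1

broadcast = xor_bytes(m1, m2)

# Receiver 1 uses side information m2 to recover m1:
assert xor_bytes(broadcast, m2) == m1
# Receiver 2 uses side information m1 to recover m2:
assert xor_bytes(broadcast, m1) == m2
```

The tutorial's point is that this kind of gain, handled here at the network layer over a noise-free link, can instead be realized at the PHY layer with ring- and lattice-based modulation.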
Prof. Viterbo is Professor in the ECSE Department and Associate Dean Research Training of the Faculty of Engineering at Monash University, Melbourne, Australia. In 1997-98 he was a research fellow in the Information Sciences Research Center of AT&T Research, Florham Park, NJ, USA. He became first Assistant Professor (1998) and then Associate Professor (2005) in the Dipartimento di Elettronica at Politecnico di Torino, Turin, Italy. In 2006 he became Full Professor in DEIS at the University of Calabria, Italy. Prof. Emanuele Viterbo is a Fellow of the IEEE, an ISI Highly Cited Researcher, and a Member of the Board of Governors of the IEEE Information Theory Society (2011-2013 and 2014-2016). He served as Associate Editor of the IEEE Transactions on Information Theory, European Transactions on Telecommunications, and the Journal of Communications and Networks; as Guest Editor for the IEEE Journal of Selected Topics in Signal Processing Special Issue on Managing Complexity in Multiuser MIMO Systems; and as Editor of Foundations and Trends in Communications and Information Theory. Prof. Viterbo was awarded a NATO Advanced Fellowship in 1997 from the Italian National Research Council, the 2012-13 Australia-India Fellowship from the Australian Academy of Science, and the 2013 Invitation Fellowship for Research in Japan from the Japan Society for the Promotion of Science.
Optical fibers, thin strands of ultra-transparent silica glass not much thicker than a human hair and spanning many thousands of kilometers, carry the bulk of the world’s telecommunications. At the high power densities created by the lasers used in fiber-optic transmission, the Kerr effect, which causes changes in refractive index in response to the electric field of a propagating wave, becomes significant. In optical fibers, this phenomenon, along with chromatic dispersion, is described by a partial differential equation known as the nonlinear Schrödinger (NLS) equation, giving rise to a channel model that challenges information-theoretic analysis. This lecture will serve as an introduction to this area, and provide some current information-theoretic perspectives and open problems.
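For reference, a standard scalar form of the NLS equation for the complex field envelope propagating in fiber is shown below (attenuation omitted; sign and normalization conventions vary, and the lecture may use a different one):

```latex
% Scalar nonlinear Schroedinger equation for the field envelope A(z,t):
%   beta_2 : chromatic (group-velocity) dispersion coefficient
%   gamma  : Kerr nonlinearity coefficient
\[
  \frac{\partial A}{\partial z}
  = -\,i\,\frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2}
  \;+\; i\,\gamma\,|A|^2 A
\]
```

The dispersive term scrambles the signal linearly in a way that is easily inverted, while the cubic Kerr term couples signal and noise nonlinearly; it is this interaction that makes the channel's capacity analysis hard.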
Prof. Kschischang is the Distinguished Professor of Digital Communication in the Department of Electrical and Computer Engineering at the University of Toronto. During 1997-98, he was a visiting scientist at MIT, Cambridge, MA; in 2005 he was a visiting professor at ETH Zurich; and in 2011 and again in 2012-13 he was a visiting Hans Fischer Senior Fellow at the Institute for Advanced Study at the Technical University of Munich. In 1999 he was a recipient of the Ontario Premier’s Excellence Research Award, and in 2001 (renewed in 2008) he was awarded the Tier I Canada Research Chair in Communication Algorithms at the University of Toronto. In 2010 he was awarded the Killam Research Fellowship by the Canada Council for the Arts. Jointly with Ralf Koetter, he received the 2010 Communications Society and Information Theory Society Joint Paper Award. He is a recipient of the 2012 Canadian Award in Telecommunications Research. He is a Fellow of the IEEE and currently serves as the Editor-in-Chief of the IEEE Transactions on Information Theory. He also served as technical program co-chair for the 2004 IEEE International Symposium on Information Theory (ISIT), Chicago, and as general co-chair for ISIT 2008, Toronto. He served as the 2010 President of the IEEE Information Theory Society. He was an IEEE Information Theory Society Distinguished Lecturer for 2012-2013.
Error-correcting codes for distributed storage face a new challenge, namely the recovery of one or more individual symbols of a codeword, as opposed to the customary recovery of the entire codeword. This is because, in a storage system, different code symbols are stored on different nodes of the storage network, and each node is a storage unit that is prone to failure or may otherwise be unavailable when access to its contents is desired.
Two classes of codes have sprung up to meet this challenge, namely regenerating codes and codes with locality. This talk will provide an overview of these two classes of codes, from their early beginnings to current developments. Quite apart from their application to storage, these two classes of codes have also served to enrich coding theory by giving rise to new classes of codes possessing additional and desirable properties.
Prof. Kumar obtained his PhD from the University of Southern California (USC). Between 1983 and 2003, he was a full-time faculty member in the EE department of USC. Since 2003, he has been a Professor in the ECE department of the Indian Institute of Science (IISc), Bangalore, where he is currently Department Chairman and the Tata Chem Chaired Professor. He also holds the position of Adjunct Research Professor at USC. He is an ISI highly cited author, an IEEE Fellow, and a Fellow of the Indian National Academy of Engineering. He is also a co-recipient of the 1995 IEEE Information Theory Society Prize-Paper Award, a Best-Paper Award at the DCOSS 2008 conference on sensor networks, and the IEEE Data Storage Best-Paper Award of 2011/2012. A pseudorandom sequence family designed in a 1996 paper co-authored by him now forms the short scrambling code of the 3G WCDMA cellular standard. He received the USC School of Engineering’s Senior Research Award in 1994 and the Rustum Choksi Award for Excellence in Research in Engineering at IISc in 2013. He has been on the Board of Governors of the IEEE Information Theory Society since 2013, was a plenary speaker at ISIT 2014, and was TPC Co-Chair of ISIT 2015.
Many of the recent advances in coding theory have, at their root, ideas that originated between 1954 and 1960. In this lecture, we give a tutorial description of these ideas. We begin by introducing factor graphs and belief propagation (BP) as tools for understanding large systems of dependent random variables. Then, we describe briefly how, over the past 20 years, these techniques have revolutionized error-correcting codes, compressed sensing, and random satisfiability. Next, we use the example of low-density parity-check (LDPC) codes on the binary erasure channel (BEC) to introduce the density-evolution analysis of BP decoding. A key result is that BP decoding of LDPC codes has a noise threshold below which decoding succeeds with high probability. The first part of the lecture concludes by discussing how extrinsic-information transfer (EXIT) functions can be used to connect the performance of BP decoding and optimal decoding.
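For a (d_v, d_r)-regular LDPC ensemble on the binary erasure channel with erasure probability ε, the density-evolution recursion is simple enough to sketch directly: starting from x_0 = ε, iterate x_{t+1} = ε(1 − (1 − x_t)^{d_r−1})^{d_v−1}, and the BP threshold is the largest ε for which x_t → 0. A minimal illustration (not code from the lecture), locating the threshold by bisection:

```python
# Density evolution for a (dv, dr)-regular LDPC ensemble on the BEC.
# x_t is the erasure probability of a variable-to-check message at iteration t.

def de_converges(eps: float, dv: int, dr: int, iters: int = 10000) -> bool:
    """True if the density-evolution recursion drives the erasure prob to 0."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dr - 1)) ** (dv - 1)
        if x < 1e-12:          # effectively zero: BP decoding succeeds
            return True
    return False

def bp_threshold(dv: int, dr: int, tol: float = 1e-4) -> float:
    """Bisection for the largest eps at which density evolution converges."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if de_converges(mid, dv, dr):
            lo = mid
        else:
            hi = mid
    return lo

print(f"{bp_threshold(3, 6):.3f}")   # (3,6)-regular, rate 1/2 -> prints 0.429
```

The computed value, about 0.429, is well below the Shannon limit of 0.5 for a rate-1/2 code on the BEC; closing that gap is exactly what spatial coupling achieves.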
In the second part of the lecture, we introduce spatially-coupled (SC) codes. In 2010, it was shown that a regular LDPC ensemble can be spatially coupled so that the BP noise threshold of the coupled ensemble increases up to the optimal noise threshold of the uncoupled ensemble. Based on this observation, we discuss a few communication scenarios where this property allows SC-LDPC codes to achieve optimal performance in cases where optimized irregular LDPC codes cannot. Finally, we describe some recent results that bring us back to 1954. We conclude by describing some interesting open problems.
Henry D. Pfister received his Ph.D. in electrical engineering in 2003 from the University of California, San Diego, and he is currently an associate professor in the electrical and computer engineering department of Duke University. Prior to that, he was a professor at Texas A&M University (2006-2014), a post-doctoral fellow at the École Polytechnique Fédérale de Lausanne (2005-2006), and a senior engineer at Qualcomm Corporate R&D in San Diego (2003-2004).
He received the NSF Career Award in 2008, the Texas A&M ECE Department Outstanding Professor Award in 2010, and was a coauthor of the 2007 IEEE COMSOC best paper in Signal Processing and Coding for Data Storage. He is currently an associate editor in coding theory for the IEEE Transactions on Information Theory (2013-2016) and a Distinguished Lecturer of the IEEE Information Theory Society (2015-2016).
His current research interests include information theory, communications, probabilistic graphical models, and machine learning.
The goal of the talk is to teach information theory through two problems: wiretap channels and biometric (or physical) identifiers. For wiretap channels, we focus on a security measure called “effective” secrecy, which includes the concept of stealth in addition to secrecy. The security portion of the coding theorem is established by a simple proof whose key step applies Jensen’s inequality to the logarithm. The converse follows by a short proof that uses a telescoping identity. An operational meaning for stealth is developed using binary hypothesis-testing concepts. Examples illustrate the impact of the stealth requirement. For identifier channels, a model is studied in which the randomness source is hidden from the encoder during enrollment. The capacity region is derived using standard tools, and the impact of having the source hidden is discussed via several examples.
The talk is based on joint work with Jie Hou, Onur Günlü, and Matthieu Bloch.
Prof. Kramer is Alexander von Humboldt Professor and Head of the Institute for Communications Engineering at the Technische Universität München (TUM). He received the Dr. sc. techn. (Doktor der technischen Wissenschaften) degree from the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, in 1998. From 1998 to 2000, he was with Endora Tech AG, Basel, Switzerland, as a communications engineering consultant. From 2000 to 2008 he was with the Math Center, Bell Laboratories, Alcatel-Lucent, Murray Hill, NJ, as a Member of Technical Staff. He joined USC in 2009 and TUM in 2010. Prof. Kramer is an IEEE Fellow. He received several awards for his work, including an ETH Medal for his doctoral dissertation in 1999, a Bell Labs President’s Gold Award in 2003, the IEEE Communications Society Stephen O. Rice Prize Paper Award in 2005, an Alexander von Humboldt Professorship in 2010, the Vodafone Innovations Prize in 2011, a Thomas Alva Edison Patent Award in 2012, and a EURASIP Best Paper Award in 2014. He was elected a Full Member of the Bavarian Academy of Sciences and Humanities in 2015. He is an IEEE Information Theory Society Distinguished Lecturer for 2015-2016.
Traditional results on the fundamental limits of data compression and data transmission through noisy channels apply to the asymptotic regime in which delay (or blocklength) goes to infinity. Motivated by modern practical applications in which limited delay is a key design constraint, there is considerable current interest in non-asymptotics in information theory. (For example, state-of-the-art 4G wireless standards employ error-correcting codes with blocklengths ranging from about 100 to 20,000.)
This tutorial covers the elements of current approaches to the analysis of the fundamental limits as a function of blocklength. Going beyond traditional refinements of the asymptotic information-theoretic limits, we investigate the penalties incurred by coding at a given blocklength: the backoff from capacity in channel coding, and the overhead over the entropy in lossless compression and over the rate-distortion function in lossy source coding.
The tutorial covers the single-shot approach to information theory, in which computable upper and lower bounds are obtained without imposing assumptions such as memorylessness or stationarity, and which hinges neither on typicality nor on combinatorial methods. The achievability and converse bounds are tight enough to reduce the uncertainty about the non-asymptotic fundamental limit to a level that is negligible compared to the gap to the long-blocklength asymptotics. The bounds are also powerful enough to prove the traditional coding theorems in the widest known generality. A prototypical result: over the binary symmetric channel (BSC) with crossover probability 0.11, the (yet to be discovered) best possible code of blocklength 500 can transmit between 190 and 193 information bits decodable with error probability 10^-3.
In addition, the tutorial shows how to obtain analytical approximations to the bounds which are accurate even for short blocklengths and which, beyond Shannon’s fundamental limits, depend only on the dispersion of sources and channels, a key performance measure that determines the time required for the source or channel to behave typically.
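To make the dispersion concrete, the standard normal approximation log2 M*(n, ε) ≈ nC − sqrt(nV) Q⁻¹(ε) + (1/2) log2 n can be evaluated for a BSC with crossover probability 0.11, blocklength 500, and error probability 10^-3. This is a sketch of the approximation only, not the exact finite-blocklength bounds developed in the tutorial:

```python
# Normal approximation for the maximal number of information bits
# transmissible over a BSC(p) at blocklength n and error probability eps:
#   log2 M*(n, eps) ~= n*C - sqrt(n*V)*Qinv(eps) + 0.5*log2(n)
from math import log2, sqrt
from statistics import NormalDist

def h2(p: float) -> float:
    """Binary entropy function (in bits)."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_normal_approx(p: float, n: int, eps: float) -> float:
    C = 1 - h2(p)                              # capacity (bits/channel use)
    V = p * (1 - p) * log2((1 - p) / p) ** 2   # channel dispersion
    Qinv = -NormalDist().inv_cdf(eps)          # inverse Gaussian Q-function
    return n * C - sqrt(n * V) * Qinv + 0.5 * log2(n)

k = bsc_normal_approx(p=0.11, n=500, eps=1e-3)
print(f"~{k:.0f} information bits")   # prints ~189, near the 190-193 range
```

Note how far this is from the asymptotic promise nC ≈ 250 bits: at blocklength 500, the dispersion term alone accounts for a backoff of roughly 65 bits.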
Prof. Polyanskiy is an Associate Professor of Electrical Engineering and Computer Science in the Department of EECS at MIT and a member of LIDS. He received the MSc degree in applied mathematics and physics from the Moscow Institute of Physics and Technology, Moscow, Russia, in 2005 and the PhD degree in electrical engineering from Princeton University, Princeton, NJ, in 2010. From 2000 to 2005 he led the development of embedded software in the Department of Surface Oilfield Equipment, Borets Company LLC (Moscow). Prof. Polyanskiy won the 2013 NSF CAREER Award and the 2011 IEEE Information Theory Society Paper Award.