Lectures


Joint State Sensing and Communication: Theory and Applications

Mari Kobayashi (CentraleSupélec)

Presentation slides

Abstract

We consider a communication setup in which transmitters wish to simultaneously sense network states and convey messages to intended receivers. The scenario is motivated by joint radar and vehicular communications, where the radar and data applications share the same bandwidth. In the first part of the talk, I review well-known results from information theory, including channels with state and feedback. I then present recent results on the tradeoff between capacity and distortion for the case of a discrete memoryless channel with i.i.d. state sequences. We demonstrate through examples the benefits of joint sensing and communication over a resource-sharing approach. In the second part of the talk, I provide application examples of joint radar and communication using multi-carrier transmission (OFDM and OTFS). I conclude the tutorial with some open challenges on the road towards joint radar and vehicular communications.
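For context, the capacity-distortion tradeoff in this memoryless setting is commonly written in the following form (a hedged sketch based on the standard formulation of joint sensing and communication with i.i.d. states; the precise constraint set and estimator are as defined in the talk, not reproduced here):

```latex
% Sketch of the capacity-distortion function C(D) for a discrete memoryless
% channel with i.i.d. states (standard formulation; details as in the talk):
\[
  C(D) \;=\; \max_{P_X \,:\; \mathbb{E}[d(S,\hat{S})] \,\le\, D} \; I(X; Y \mid S),
\]
% where the transmitter forms the state estimate \hat{S} from its input X and
% the feedback observation, and d(\cdot,\cdot) is the distortion measure.
```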

Biography

Mari Kobayashi received the B.E. degree in electrical engineering from Keio University, Yokohama, Japan, in 1999, and the M.S. degree in mobile radio and the Ph.D. degree from École Nationale Supérieure des Télécommunications, Paris, France, in 2000 and 2005, respectively. From November 2005 to March 2007, she was a postdoctoral researcher at the Centre Tecnològic de Telecomunicacions de Catalunya, Barcelona, Spain. In May 2007, she joined the Telecommunications department at CentraleSupélec, Gif-sur-Yvette, France, where she is now a professor. She is the recipient of the Newcom++ Best Paper Award in 2010, and the Joint Information Theory/Communications Society Best Paper Award in 2011. Since September 2017, she has been on sabbatical leave at the Technical University of Munich (TUM) as an Alexander von Humboldt Experienced Research Fellow.

A tutorial on polar codes

Ido Tal (Technion)

Lecture notes

Abstract

In this short tutorial on polar codes, we will first introduce the polar transform as a general operation applied to a process (X_i,Y_i) to form a new process (F_i,G_i). We will explain why the new process is "polarized". We will then show how the polarization phenomenon can be used in many important settings, the foremost of which is channel coding. Specifically, we will first describe how to code for a memoryless symmetric channel, then for a memoryless channel that is not symmetric, and finally for a channel that is neither memoryless nor symmetric.
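The polarization phenomenon is easiest to see on the binary erasure channel, where the Bhattacharyya parameter evolves in closed form under the polar transform. The following sketch is an illustration, not material from the lecture notes; only the standard BEC recursion z → {2z − z², z²} is assumed:

```python
# Illustration of channel polarization on the binary erasure channel (BEC).
# One polar transform step turns a BEC with erasure probability z into a
# "worse" synthetic channel (2z - z^2) and a "better" one (z^2).

def polarize(z, n):
    """Return the 2**n Bhattacharyya parameters after n recursion levels."""
    params = [z]
    for _ in range(n):
        params = [v for z_i in params for v in (2*z_i - z_i*z_i, z_i*z_i)]
    return params

params = polarize(0.5, 10)               # 1024 synthetic channels from BEC(0.5)
good = sum(v < 1e-3 for v in params)     # nearly noiseless channels
bad = sum(v > 1 - 1e-3 for v in params)  # nearly useless channels
# As n grows, the fraction of good channels approaches the capacity 1 - z.
```

Note that the average of the parameters is preserved at every step, so the nearly noiseless fraction can never exceed the channel capacity.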

Biography

Ido Tal was born in Haifa, Israel, in 1975. He received the B.Sc., M.Sc., and Ph.D. degrees in computer science from Technion-Israel Institute of Technology, Haifa, Israel, in 1998, 2003 and 2009, respectively. During 2010–2012 he was a postdoctoral scholar at the University of California at San Diego. In 2012 he joined the Electrical Engineering Department at Technion. His research interests include constrained coding and error-control coding. He received the IEEE Joint Communications Society/Information Theory Society Paper Award (jointly with Alexander Vardy) for the year 2017.

Information-Theoretic Security: From Information-Theoretic Primitives to Algorithms

Matthieu Bloch (Georgia Institute of Technology)

Abstract

While the foundations of information-theoretic security can be traced back to the works of Shannon (1949), Wyner (1975), Maurer (1993), and Ahlswede and Csiszár (1993), the past decade of research on the topic has enabled conceptual simplifications and generalizations, spanning both information and coding theory. This three-part tutorial will leverage these recent advances to provide a modern perspective on the topic. In the first part, we will develop four information-theoretic primitives that will prove central to the study of information-theoretic security. In the second part, we will introduce canonical information-theoretic security models and combine the primitives to establish their capacity. In the last part, we will show how to translate the insights from information theory into actual codes and algorithms with manageable complexity.

Biography

Matthieu R. Bloch is an Associate Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the Engineering degree from Supélec, Gif-sur-Yvette, France, the M.S. degree in Electrical Engineering from the Georgia Institute of Technology, Atlanta, in 2003, the Ph.D. degree in Engineering Science from the Université de Franche-Comté, Besançon, France, in 2006, and the Ph.D. degree in Electrical Engineering from the Georgia Institute of Technology in 2008. In 2008-2009, he was a postdoctoral research associate at the University of Notre Dame, South Bend, IN. Since July 2009, Dr. Bloch has been on the faculty of the School of Electrical and Computer Engineering, and from 2009 to 2013 he was based at Georgia Tech Lorraine. His research interests are in the areas of information theory, error-control coding, wireless communications, and cryptography. Dr. Bloch has served on the organizing committee of several international conferences; he was the chair of the Online Committee of the IEEE Information Theory Society from 2011 to 2014, and he has been on the Board of Governors of the IEEE Information Theory Society and an Associate Editor for the IEEE Transactions on Information Theory since 2016. He is the co-recipient of the IEEE Communications Society and IEEE Information Theory Society 2011 Joint Paper Award and the co-author of the textbook Physical-Layer Security: From Information Theory to Security Engineering, published by Cambridge University Press.

Joint Source and Channel Coding: Fundamental Bounds and Connections to Machine Learning

Deniz Gündüz (Imperial College London)

Abstract

Machine learning and communications are intrinsically connected. The fundamental problem of communications, as stated by Shannon, "is that of reproducing at one point either exactly or approximately a message selected at another point," can be viewed as a classification problem. With this connection in mind, I will focus on the fundamental joint source-channel coding (JSCC) problem, starting from the basic information-theoretic performance bounds. I will then move to the practical implementation of JSCC for wireless image transmission. I will first show some "surprising" performance results with uncoded "analog" transmission, which will then be used to motivate unsupervised learning for JSCC. I will present a "deep joint source-channel encoder" architecture, which behaves similarly to analog transmission, and not only improves upon state-of-the-art digital transmission schemes but also degrades gracefully with channel quality.
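The graceful degradation of uncoded analog transmission can be illustrated with a toy simulation. This is a sketch under simplifying assumptions (a unit-variance Gaussian source sent directly over AWGN, estimated with the linear MMSE rule), not the architecture from the talk:

```python
import numpy as np

# Toy "analog" JSCC: send Gaussian source samples directly as channel
# symbols over AWGN and estimate them with the linear MMSE rule.
rng = np.random.default_rng(0)
source = rng.normal(size=10_000)      # unit-variance Gaussian source

mses = []
for snr_db in (0, 10, 20):
    sigma = 10 ** (-snr_db / 20)      # noise std for the given SNR
    y = source + sigma * rng.normal(size=source.size)
    estimate = y / (1 + sigma**2)     # linear MMSE estimator
    mses.append(np.mean((source - estimate) ** 2))
# Distortion shrinks smoothly with SNR (no "cliff effect"):
# theory predicts MSE = sigma^2 / (1 + sigma^2) at each SNR point.
```

A digital scheme tuned to one SNR would instead collapse abruptly below its design point; the smooth MSE curve here is exactly the graceful degradation mentioned above.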

In the second part of the lecture, I will talk about another connection between information theory and machine learning, focusing on distributed machine learning, particularly targeting wireless edge networks, and show how ideas from coding theory can help improve the performance of distributed learning across unreliable servers. I will introduce both coded and uncoded distributed stochastic gradient descent algorithms, and study the trade-off between their average computation time and the communication load. Finally, I will introduce the novel concept of "over-the-air stochastic gradient descent" for wireless edge learning, and show that it significantly improves the efficiency of machine learning across bandwidth and power limited wireless devices compared to the standard digital approach that separates computation and communication. This will close the circle, illustrating how JSCC can be used to speed up machine learning at the wireless edge.
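The over-the-air aggregation idea can be sketched in a few lines. This is a simplified model with perfect synchronization and unit channel gains (assumptions made here for illustration): when workers transmit their gradients simultaneously over a Gaussian multiple-access channel, superposition delivers their sum, so the server obtains a noisy average without decoding individual messages.

```python
import numpy as np

# Sketch of over-the-air gradient aggregation on a Gaussian MAC.
rng = np.random.default_rng(0)
K, d, noise_std = 10, 100, 0.1             # workers, model size, channel noise

grads = rng.normal(size=(K, d))            # local gradients at the K workers
rx = grads.sum(axis=0) + noise_std * rng.normal(size=d)  # superposed signal
ota_avg = rx / K                           # noisy estimate of the average
true_avg = grads.mean(axis=0)

rel_err = np.linalg.norm(ota_avg - true_avg) / np.linalg.norm(true_avg)
```

The channel uses are independent of the number of workers K, whereas a digital scheme would need K separate (or orthogonalized) transmissions.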

Biography

Deniz Gündüz received the B.S. degree from METU, Turkey in 2002, and the M.S. and Ph.D. degrees from NYU Polytechnic School of Engineering (formerly Polytechnic University) in 2004 and 2007, respectively. After his PhD, he served as a postdoctoral research associate at Princeton University, and as a consulting assistant professor at Stanford University. He was a research associate at CTTC in Barcelona, Spain until September 2012, when he joined the Electrical and Electronic Engineering Department of Imperial College London, UK, where he is currently a Reader (Associate Professor) in information theory and communications, and leads the Information Processing and Communications Laboratory (IPC-Lab).

His research interests lie in the areas of communications and information theory, machine learning, and privacy. Dr. Gündüz is an Editor of the IEEE Transactions on Green Communications and Networking, and a Guest Editor of the IEEE Journal on Selected Areas in Communications Special Issue on Machine Learning in Wireless Communication. He served as an Editor of the IEEE Transactions on Communications from 2013 until 2018. He is the recipient of the IEEE Communications Society - Communication Theory Technical Committee (CTTC) Early Achievement Award in 2017, a Starting Grant of the European Research Council (ERC) in 2016, the IEEE Communications Society Best Young Researcher Award for the Europe, Middle East, and Africa Region in 2014, the Best Paper Award at the 2016 IEEE WCNC, and the Best Student Paper Awards at the 2018 IEEE WCNC and the 2007 ISIT. He has served as General Co-chair of the 2019 London Symposium on Information Theory, the 2016 IEEE Information Theory Workshop, and the 2012 European School of Information Theory.

Sequential Acquisition of Information: From Active Hypothesis Testing to Active Learning to Empirical Function Optimization

Tara Javidi (University of California, San Diego)

Presentation slides

Abstract

This lecture explores an often-overlooked connection between the problem of communication with feedback in information theory and that of active hypothesis testing in statistics. This connection, we argue, has significant implications for next-generation machine learning algorithms where data is collected actively and/or by cooperative yet local agents.

Consider a decision maker who is responsible for actively and dynamically collecting data/samples so as to enhance the information about an underlying phenomenon of interest, while accounting for the cost of communication, sensing, or data collection. The decision maker must rely on the current information state to constantly (re-)evaluate the trade-off between the precision and the cost of various actions. An important example of this problem is channel coding with feedback, whose solution, in terms of Extrinsic Jensen-Shannon divergence and posterior matching, provides critical insights for the design of next-generation machine learning algorithms.

In the first part of the talk, we discuss the history of the problem and seminal contributions by Blackwell, Chernoff, DeGroot, and Stein. In the second part of the talk, we discuss the information-theoretic notions of acquisition rate and reliability (and their fundamental trade-off), Extrinsic Jensen-Shannon divergence, and posterior matching. We will use these notions and insights for three important problems: 1) active Bayesian learning, 2) empirical function optimization, and 3) fully decentralized federated learning.
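A minimal instance of these ideas is probabilistic bisection in the spirit of Horstein's scheme, which underlies posterior matching: to locate a target in [0, 1] from noisy binary responses, query the posterior median and reweight by Bayes' rule. The sketch below is illustrative (the target, reliability, and grid are assumptions, not taken from the talk):

```python
import numpy as np

# Probabilistic bisection: locate x_star in [0, 1] from noisy comparisons.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 1001)
post = np.full(grid.size, 1.0 / grid.size)   # uniform prior over the grid
x_star, p_correct = 0.73, 0.9                # target, response reliability

for _ in range(200):
    median = grid[np.searchsorted(np.cumsum(post), 0.5)]
    truth = x_star > median                  # ideal answer to the query
    resp = truth if rng.random() < p_correct else not truth
    # Bayes update: points consistent with the response gain weight.
    lik = np.where(grid > median,
                   p_correct if resp else 1 - p_correct,
                   1 - p_correct if resp else p_correct)
    post *= lik
    post /= post.sum()

estimate = grid[np.argmax(post)]
```

Querying the median is the greedy information-maximizing action here, which is the same principle that makes posterior matching optimal for channel coding with feedback.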

Biography

Tara Javidi received her BS in electrical engineering at Sharif University of Technology, Tehran, Iran. She received her MS degrees in electrical engineering (systems) and in applied mathematics (stochastic analysis) from the University of Michigan, Ann Arbor, in 1998 and 1999, respectively. She received her Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, in 2002.

From 2002 to 2004, Tara Javidi was an assistant professor in the Electrical Engineering Department, University of Washington, Seattle. In 2005, she joined the University of California, San Diego, where she is currently a professor of electrical and computer engineering. In 2013-2014, she spent her sabbatical at Stanford University as visiting faculty. At the University of California, San Diego, Tara Javidi is a founding co-director of the Center for Machine-Integrated Computing and Security, the principal investigator of the DetecDrone Project, and a faculty member of the Centers for Information Theory and Applications (ITA), Wireless Communications (CWC), the Contextual Robotics Institute (CRI), and Networked Systems (CNS). She is also a founding faculty member of the Halicioglu Data Science Institute (HDSI) at UCSD. At UCSD, she is an affiliate faculty member in the departments of Computer Science and Engineering as well as Ethnic Studies. She is also a member of the Board of Governors of the IEEE Information Theory Society (2017/18/19).

 


Short Lectures:

 

Information Rates for Phase Noise Channels

Luca Barletta (Politecnico di Milano)

Presentation slides

Abstract

As transmission bandwidths, oscillator frequencies, and constellation densities increase to chase the ever-growing demand for data rates, phase noise invariably emerges as a crucial performance-limiting factor in many communication systems. In this talk we discuss models that capture the effect of the memory introduced by phase noise, and show how to build discrete-time models starting from continuous-time waveforms. Finally, we introduce some bounding techniques for obtaining capacity results for channel models with memory.
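A common discrete-time model with memory is the Wiener phase noise channel, y_k = x_k e^{jθ_k} + n_k, where θ_k performs a random walk. A minimal simulation sketch follows (the parameter values are illustrative assumptions, not figures from the talk):

```python
import numpy as np

# Wiener phase noise channel: y_k = x_k * exp(j*theta_k) + n_k,
# where theta_k is a Gaussian random walk (memory in the channel).
rng = np.random.default_rng(0)
n, sigma_phase, sigma_noise = 1000, 0.05, 0.1

symbols = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)  # unit-energy QPSK
x = rng.choice(symbols, size=n)
theta = np.cumsum(sigma_phase * rng.normal(size=n))          # phase random walk
awgn = sigma_noise * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
y = x * np.exp(1j * theta) + awgn
```

Because theta accumulates over time, successive outputs are statistically dependent even for i.i.d. inputs, which is what makes capacity analysis for this channel hard.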

Biography

Luca Barletta received the M.S. degree (with honors) in telecommunications engineering and the Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2007 and 2011, respectively. He is currently an Assistant Professor at the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano. From 2011 to 2012 he was a Post-doctoral Researcher at Politecnico di Milano, and in 2012 he was a Visiting Researcher at Bell Labs, Alcatel-Lucent, Holmdel, NJ, USA. From 2012 to 2015 he was a Senior Researcher at the Institute for Communications Engineering and at the Institute for Advanced Study, Technische Universität München, Munich, Germany. Luca Barletta's research interests are mainly in information and communications theory, with applications to wireless and fiber-optic communications and random access protocols.

Design of Energy-Efficient LDPC Codes and Decoders

Elsa Dupraz (IMT Atlantique)

Presentation slides

Abstract

A key aspect in the design of future nano-electronic devices will be their energy consumption. Lowering this energy consumption is fundamental both to increase battery life and to reduce the environmental footprint. The energy consumption of electronic chips can be reduced by aggressively lowering their power supply, which may introduce faults in the computations these chips perform. In this lecture, we discuss the construction of energy-efficient Low-Density Parity-Check (LDPC) codes and decoders. We first review the basics of LDPC code design. We then present a theoretical analysis of the performance of LDPC codes and decoders under hardware faults, and we show how to introduce energy constraints into the code design process. Finally, we discuss connections with the related problem of energy-efficient machine learning algorithms.
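The effect of hardware faults on decoding can be illustrated with a toy bit-flipping decoder. This is a sketch: the tiny parity-check matrix below is an illustrative example rather than a real LDPC code, and the fault model (check outputs flipped at random) is one simple way to capture unreliable hardware:

```python
import numpy as np

# Toy Gallager-style bit-flipping decoder with an optional per-check
# fault probability modelling unreliable hardware.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])   # illustrative parity-check matrix

def bit_flip_decode(H, y, max_iter=20, fault_p=0.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    y = y.copy()
    for _ in range(max_iter):
        synd = H @ y % 2                       # which checks are unsatisfied
        if fault_p > 0:                        # faulty check units flip outputs
            synd ^= (rng.random(synd.size) < fault_p).astype(int)
        if not synd.any():
            break
        counts = H.T @ synd                    # unsatisfied checks per bit
        y[np.argmax(counts)] ^= 1              # flip the most-suspect bit
    return y

received = np.array([0, 0, 1, 0, 0, 0])        # all-zero codeword, one error
decoded = bit_flip_decode(H, received)
```

Running the same decoder with fault_p > 0 shows how occasional wrong syndromes slow or derail convergence, which is the regime the theoretical analysis in the lecture addresses.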

Biography

Elsa Dupraz earned her Master of Science (M.Sc.) in Advanced Systems of Radiocommunications (SAR) in 2010 and graduated from ENS de Cachan and University Paris-Sud. In 2013, she received a Ph.D. in physics from University Paris-Sud, carried out at the Laboratoire des Signaux et Systèmes (LSS) with Michel Kieffer and Aline Roumy as advisors. From January 2014 to September 2015, she held post-doctoral positions at ETIS (ENSEA, University of Cergy-Pontoise, CNRS, France) and in the ECE department of the University of Arizona (United States). Since October 2015, she has been an Assistant Professor at IMT Atlantique.

Stability and Sensitivity of the Capacity in Continuous Channels

Malcolm Egan (INSA Lyon)

Presentation slides

Abstract

Many notions in information theory admit variational interpretations. One of both theoretical and practical importance is the Shannon capacity, which, via the noisy channel coding theorem, characterises the maximum achievable rate of reliable communication. In this talk, we study the capacity from an optimisation-theoretic point of view. In recent years, there have been a number of significant results establishing properties of the optimal input distribution. We briefly overview the key ideas behind this progress and then turn to studying the impact of channel parameters and constraints on the capacity. This approach emphasises perturbations of the optimisation problem, for which we present the main tools. Some applications of these results to non-Gaussian noise channels with non-standard constraints are given.
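This optimisation viewpoint can be made concrete with the classical Blahut-Arimoto algorithm for a discrete memoryless channel. It is included here as an illustrative sketch of capacity-as-optimisation, not as material from the talk:

```python
import numpy as np

# Blahut-Arimoto: compute the capacity (in nats) of a discrete memoryless
# channel given its transition matrix W[x, y] = P(y | x).
def blahut_arimoto(W, n_iter=200):
    m = W.shape[0]
    p = np.full(m, 1.0 / m)                    # start from the uniform input
    for _ in range(n_iter):
        q = p[:, None] * W                     # joint distribution p(x)W(y|x)
        q /= q.sum(axis=0, keepdims=True)      # posterior q(x|y)
        # Update: p(x) proportional to exp( sum_y W(y|x) log q(x|y) )
        logp = (W * np.log(q, where=q > 0, out=np.zeros_like(q))).sum(axis=1)
        p = np.exp(logp - logp.max())
        p /= p.sum()
    # Mutual information I(X; Y) at the final input distribution
    joint = p[:, None] * W
    py = joint.sum(axis=0)
    mask = joint > 0
    return (joint[mask] * np.log((W / py)[mask])).sum()

# Binary symmetric channel with crossover probability 0.1:
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
C = blahut_arimoto(W)   # equals ln 2 - H_b(0.1) in nats
```

Each iteration is a coordinate-ascent step on the variational form of mutual information, which is precisely the kind of optimisation problem whose perturbations the talk studies.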

Biography

Malcolm Egan received the Ph.D. in Electrical Engineering from the University of Sydney, Australia, in 2014. He is currently Assistant Professor at Département Telecommunications, INSA Lyon, France. Previously, he held postdoctoral positions at INRIA and INSA Lyon, the Laboratoire de Mathématiques, Université Blaise Pascal, France, and the Department of Computer Science, Czech Technical University in Prague, Czech Republic. His research interests are in the areas of information theory, statistical signal processing and mechanism design with applications in wireless and molecular communications, and intelligent transportation.