Tutorials

T1. Causal Inference: Past, Present and (maybe) Future

Anish Agarwal, Devavrat Shah and Dennis Shen

Data-driven decision-making fundamentally requires answering counterfactual questions of the form:

What will happen to Y if we do A?

Examples abound: What will happen to a data center’s latency if a new congestion control protocol is used? What will happen to the probability of a power outage if a new transmission line is introduced in the electric grid? What will happen to a patient’s health if they are given a new therapy?

Answering such counterfactual questions requires thinking carefully about the causal relationship between A and Y. The key challenge in doing so is tackling confounding, i.e., the hidden correlation between A and Y that might be present in observed data – colloquially, we refer to this challenge as “correlation does not imply causation”. In addition, modern datasets are inherently high-dimensional, noisy, and sparse, which adds to the challenge of building reliable causal models.

The purpose of this tutorial is to provide a formal introduction to causal inference and to survey the rapidly growing modern statistical toolkit for learning causal relationships from data. The goal is to connect ideas from two fields: (i) econometrics, which focuses on learning causal relationships from observational data by carefully correcting for confounding; and (ii) machine learning (ML) and high-dimensional statistics, which focus on designing estimators that deal with high-dimensionality, noise, and sparsity. In this tutorial, we will introduce some of the canonical frameworks used to define and learn causal relationships, such as the Neyman-Rubin potential outcomes model, randomized controlled trials, instrumental variables, synthetic controls, and regression discontinuity. We will then provide one unified modern lens through which to view these various frameworks – in particular, tensor completion. Open questions pertaining to the fundamental statistical & computational tradeoffs in causal inference, and possible connections to other modern developments in ML and statistics such as mixture modeling, learning dynamical systems, and stochastic block modeling, will be presented.
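
To give a flavor of how these frameworks translate into estimators, the sketch below illustrates the synthetic controls idea on simulated data: fit weights over control units in the pre-treatment period, then use them to predict the treated unit's counterfactual outcome. This is our illustrative example, not the presenters' code; all names, dimensions, and noise levels are made up.

```python
# A minimal synthetic-controls sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

T0, T1, n_controls = 40, 20, 10           # pre/post-treatment periods, control units
controls = rng.normal(size=(T0 + T1, n_controls)).cumsum(axis=0)  # control outcomes
true_w = rng.dirichlet(np.ones(n_controls))                       # hidden mixing weights
treated = controls @ true_w + 0.1 * rng.normal(size=T0 + T1)      # treated unit's outcome
treated[T0:] += 2.0                       # treatment effect kicks in after time T0

# Fit weights on pre-treatment data only (plain least squares here; real methods
# add constraints/regularization, or denoise via low-rank matrix/tensor completion).
w, *_ = np.linalg.lstsq(controls[:T0], treated[:T0], rcond=None)

counterfactual = controls[T0:] @ w        # what would have happened without treatment
effect = (treated[T0:] - counterfactual).mean()
print(f"estimated average treatment effect: {effect:.2f}")  # should be near 2.0
```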

Bios of Speakers

Anish Agarwal is currently a postdoctoral fellow at the Simons Institute at UC Berkeley. He will be starting his faculty position at Columbia University in 2023. He received his PhD from the Department of EECS at MIT, where he was advised by Alberto Abadie, Munther Dahleh, and Devavrat Shah. His research focuses on designing and analyzing methods for causal machine learning and applying them to critical problems in social and engineering systems. He currently serves as a technical consultant to TauRx Therapeutics and Uber Technologies on questions related to experiment design and causal inference. Prior to his PhD, he was a management consultant at Boston Consulting Group. He received his BSc and MSc from Caltech.

Devavrat Shah (SM’16) received his B.Tech. degree from IIT Bombay and his Ph.D. degree from Stanford University, both in Computer Science. He is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, where he has been teaching since 2005. He is currently the faculty director of the Deshpande Center for Technology Innovation at MIT. His research focuses on statistical inference and stochastic networks. He has been an associate editor for the IEEE Transactions on Information Theory, Operations Research, and Queueing Systems, as well as lead guest editor for the IEEE Journal on Selected Areas in Information Theory special issue on Estimation and Inference. He is a Kavli Fellow of the National Academy of Sciences, invited as a distinguished young scientist as part of the 2014 Indonesian-American Symposium. He has received paper awards from the INFORMS Applied Probability Society, INFORMS Management Science and Operations Management, NeurIPS, ACM Sigmetrics, and IEEE Infocom. He has received the Erlang Prize from the INFORMS Applied Probability Society and the Rising Star Award from ACM Sigmetrics. He has received multiple Test of Time paper awards from ACM Sigmetrics. He is a distinguished alumnus of his alma mater, IIT Bombay. In 2013, he co-founded the machine learning start-up Celect (part of Nike since August 2019), which helps retailers optimize inventory using accurate demand forecasting. In 2019, he co-founded Ikigai Labs with the mission of building a self-driving organization by empowering data business operators to make data-driven decisions with the ease of spreadsheets.

Dennis Shen is a FODSI postdoctoral fellow at UC Berkeley, where he is hosted by Professors Peng Ding, Jasjeet Sekhon, and Bin Yu. Previously, he completed his PhD in Electrical Engineering and Computer Science (EECS) at MIT, where he was advised by Professor Devavrat Shah. His graduate thesis received the George Sprowls Best Dissertation Prize from MIT, as well as an honorable mention for the George B. Dantzig Best Dissertation Prize from INFORMS. Prior to his PhD, he completed his B.S. in Electrical Engineering at UC San Diego. His current research is focused on problems at the interface of causal inference and machine learning.

T2. Advancement in Coding for Privacy and Security in Distributed Systems

Rawad Bitar and Sidharth Jaggi

Privacy and security of sensitive data are of paramount importance, but they face serious challenges mainly due to the distributed nature of current information systems. Researchers are extensively investigating coding techniques that allow data distribution (whether in storage, computation, or communication) while maintaining the privacy and security of the data at hand. Variants of those problems are known as coded computing, private and secure distributed storage, stealth communication, private information and private function retrieval (PIR and PFR), federated learning, and private coded caching. Information-theoretic privacy schemes designed for those problems generally rely on secret sharing. In terms of security, significant improvements over error-correcting codes can be made by leveraging potential weaknesses of the adversary. Thus, understanding the theory of secret sharing and of coding for security against weak adversaries is of extreme importance and would allow significant progress in research on private and secure distributed systems.
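
Since the schemes discussed above build on secret sharing, a minimal sketch of Shamir's (k, n) threshold scheme may help fix ideas: any k shares recover the secret, while any k-1 shares reveal nothing about it. The code is our illustration (toy prime field, function names ours), not material from the tutorial.

```python
# Shamir secret sharing over a prime field (illustrative sketch).
import random

P = 2**13 - 1  # a small Mersenne prime; real systems use a much larger field

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, -1, P) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(1234, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 1234
```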

In this tutorial, we cover the fundamentals of threshold and communication-efficient secret sharing. We then detail how to leverage this powerful tool to obtain fast, private, and secure distributed computing schemes in the master/worker setting. The designed schemes are deemed fast since they can tolerate a flexible number of stragglers. In fact, two of them are rateless schemes that allow the master to assign different numbers of tasks to the workers, depending on their average computing time. Security against malicious workers is then explained as an add-on to the introduced private schemes. In terms of communication, we consider the problem of reliable, information-theoretically private, and stealthy communication over multipath networks. We provide information-theoretically optimal rates and computationally efficient codes asymptotically achieving those rates for both reliable and private transmission against additive and overwrite non-causal adversaries, causal adversaries, and causal adversaries with passive feedback to the transmitter. Finally, we demonstrate schemes that, in addition to privacy and reliability, also provide “stealth”, which ensures that an eavesdropper who does not observe all the links being used is unable to infer whether the transmitter is communicating something innocent (say, humming a song) or something secret (say, state secrets).

In addition to the several open problems that arise from the topics discussed, we conclude with a new research direction: creating secret sharing schemes that maintain desirable properties of the input data. More precisely, secret sharing outputs dense codewords; in settings where the codewords are required to be sparse, maintaining information-theoretic privacy is impossible. We explain how to adapt the theory of secret sharing to construct codes with sparse codewords that retain the desired privacy guarantees for the input data.

Bios of Speakers

Rawad Bitar is currently doing his habilitation at the Technical University of Munich. From February 2020 until February 2022, he was a postdoctoral researcher at the same university. He obtained a PhD degree in Electrical and Computer Engineering from Rutgers University in 2020, and the Diploma and M.S. degrees in computer and communication engineering from the Lebanese University in 2013 and 2014, respectively. He has held short-term visiting researcher positions at Aalto University, TU Berlin, and the Chinese University of Hong Kong. His research interests are centered around coding for distributed systems, with an emphasis on privacy, security, and reliability. Additionally, he is interested in network coding and coding for insertions and deletions.

Sidharth (Sid) Jaggi received his B.Tech. from IIT Bombay in 2000, and his M.S. and Ph.D. degrees from Caltech in 2001 and 2006, respectively, all in electrical engineering. He spent 2006 as a Postdoctoral Associate at LIDS, MIT. He joined the Department of Information Engineering at the Chinese University of Hong Kong in 2007, and the School of Mathematics at the University of Bristol in 2020, where he is now an Associate Professor. His interests lie at the intersection of network information theory, coding theory, and algorithms. His research group thus (somewhat unwillingly) calls itself the CAN-DO-IT team (Codes, Algorithms, Networks: Design and Optimization for Information Theory). Topics he has dabbled in include sparse recovery/group testing, covert communication, network coding, and adversarial channels.

T3. Information-Theoretic Tools for Responsible Machine Learning

Shahab Asoodeh, Flavio P. Calmon, Mario Diaz and Haewon Jeong

This tutorial will present a survey of recent developments in fair and private machine learning (ML), describe open challenges in responsible data science, and serve as a call to action for the information theory community to engage with problems of social importance. Our tutorial is divided into two parts. First, we survey recent results at the intersection of differential privacy and information theory. Differential privacy has become the de facto standard for privacy adopted in both government and industry. Our tutorial will review key definitions, metrics, and mechanisms used in differential privacy and recent developments in the field. We aim to present these topics using mathematical tools familiar to information theorists. Our tutorial will show that differential privacy metrics can be cast in terms of f-divergences and Rényi divergences. We demonstrate how to apply the properties of these divergences to analyze differentially private algorithms deployed in ML applications (e.g., online learning and deep learning). We describe how the information-theoretic vantage point reveals fundamental operational limits of differentially private statistical analysis. We also discuss recent advances in differential privacy mechanism design for ML. The first half of the tutorial will conclude with a description of open problems in differential privacy that may benefit from information-theoretic techniques.
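
To make the divergence viewpoint concrete: writing M(D) for the output distribution of a mechanism M on dataset D, standard formulations bound a divergence between outputs on neighboring datasets D and D'. The notation below is one common rendering, not necessarily the presenters' own.

```latex
% (\epsilon, \delta)-DP as a bound on the hockey-stick divergence:
\delta \;\ge\; E_{e^{\epsilon}}\!\big(M(D)\,\|\,M(D')\big)
  \;=\; \sup_{S}\Big[\Pr\big(M(D)\in S\big) - e^{\epsilon}\,\Pr\big(M(D')\in S\big)\Big],
% and (\alpha, \epsilon)-Renyi DP as a bound on the Renyi divergence of order \alpha:
\epsilon \;\ge\; D_{\alpha}\!\big(M(D)\,\|\,M(D')\big)
  \;=\; \frac{1}{\alpha-1}\,\log\,\mathbb{E}_{x\sim M(D')}\!\left[\left(\frac{p_{M(D)}(x)}{p_{M(D')}(x)}\right)^{\!\alpha}\right].
```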

The second part of the tutorial will focus on fair ML. Our goal is to present recent developments in the field of fair ML through an information-theoretic lens. We start by overviewing metrics for evaluating fairness and discrimination, including individual fairness, group fairness, predictive multiplicity, and fair use. We formulate these metrics using easy-to-understand and unified notation based on error rates and divergences. We then present recent results on the limits of fair classification. These limits include trade-offs incurred when splitting classifiers across different demographic groups, and are proved using standard converse results familiar to information theorists. We also overview state-of-the-art fairness interventions, describing and contrasting several techniques developed over the past five years. Finally, we outline open problems in the field.
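
As an example of the error-rate notation mentioned above (our rendering, not necessarily the presenters' exact definitions), two widely used group-fairness criteria for a classifier with respect to a binary group attribute A can be written as rate gaps:

```latex
% Statistical (demographic) parity gap:
\Delta_{\mathrm{SP}} \;=\; \big|\Pr(\hat{Y}=1 \mid A=0) - \Pr(\hat{Y}=1 \mid A=1)\big|,
% Equalized-odds gap: worst case over the true label y, i.e., over the
% false-positive-rate and true-positive-rate differences between groups:
\Delta_{\mathrm{EO}} \;=\; \max_{y\in\{0,1\}} \big|\Pr(\hat{Y}=1 \mid Y=y, A=0) - \Pr(\hat{Y}=1 \mid Y=y, A=1)\big|.
```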

The tutorial will conclude with a hands-on demo of software packages for private and fair ML targeted toward student and post-doc attendees. No previous background in privacy or fair ML is required.

Bios of Speakers

Shahab Asoodeh is an Assistant Professor of Computer Science in the Department of Computing and Software at McMaster University. He is also an Academic Collaborator with Meta’s Statistics and Privacy Team. Before joining McMaster, he was a postdoctoral scholar at the John A. Paulson School of Engineering and Applied Sciences at Harvard University and in the Knowledge Lab at the University of Chicago. He received his Ph.D. in Applied Mathematics from Queen’s University. His main research interests are information theory, inference, and statistics, with applications to fairness, privacy, and machine learning.

Flavio P. Calmon is an Assistant Professor of Electrical Engineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences. Before joining Harvard, he was the inaugural Data Science for Social Good Post-Doctoral Fellow at IBM Research in Yorktown Heights, New York. He received his Ph.D. in Electrical Engineering and Computer Science at MIT. His research develops information-theoretic tools for responsible, reliable, and rigorous machine learning. Prof. Calmon has received the NSF CAREER award, faculty awards from Google, IBM, Oracle, and Amazon, the NSF-Amazon Fairness in AI award, and the Harvard Data Science Initiative Bias2 award, among other grants. He also received the inaugural Título de Honra ao Mérito (Honor to the Merit Title) given to alumni of the Universidade de Brasília, being the first awardee from engineering and computer science. In 2020, he received a commendation from Harvard’s Dean of Undergraduate Studies for “Extraordinary Teaching during Extraordinary Times” for his teaching during the COVID pandemic.

Mario Diaz is a Research Associate with the Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México. Prior to this, he was a Postdoctoral Scholar with Arizona State University, Centro de Investigación en Matemáticas, and Harvard University. He received a Ph.D. degree in mathematics and statistics from Queen’s University in 2017. His research interests include the mathematical and statistical foundations of information privacy, theoretical machine learning, and random matrix theory. Since joining IIMAS in 2019, Prof. Diaz has taught graduate courses on theoretical machine learning, information theory, and the statistical foundations of privacy.

Haewon Jeong is a postdoctoral fellow in Electrical Engineering at Harvard University. She received a Ph.D. in Electrical and Computer Engineering at Carnegie Mellon University, with a thesis on coding-theoretic approaches to reliable large-scale computing. Her research interests include information theory, distributed computing, and fairness in machine learning. She has taught introductory courses in Wireless Communication and Machine Learning as a TA and gave several guest lectures in advanced graduate-level courses, including Information Theory and Coding Theory. She also has extensive outreach experience in teaching math and science to K-12 students.

T4. Universal Decoding by Guessing Random Additive Noise Decoding (GRAND)

Muriel Medard and Ken R. Duffy

Forward error correction decoding has traditionally been a code-specific endeavor. An innovative recent alternative is noise-centric guessing random additive noise decoding (GRAND). Our approach uses modern developments in the analysis of guesswork to create a universal algorithm where the effect of noise is guessed from most likely to least likely. The noise effect is removed from the received signal and the codebook is used simply as a hash check to verify whether the result is in the codebook. The guessing continues until the hash check is correct or the algorithm declares an erasure. This approach provably provides a Maximum Likelihood (ML) decoding for any block code as long as the guesswork order matches the channel statistics.
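
The core loop is simple enough to sketch. The toy decoder below is our illustration (the [7,4] Hamming code, function names, and the weight cap are our choices, not the presenters' implementation); it guesses binary noise patterns in order of increasing Hamming weight, which is the ML order for a binary symmetric channel with crossover probability below 1/2.

```python
# Hard-detection GRAND for a binary linear code (illustrative sketch).
from itertools import combinations
import numpy as np

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def grand_decode(y, H, max_weight=3):
    n = H.shape[1]
    for w in range(max_weight + 1):                # most likely noise first
        for support in combinations(range(n), w):  # all weight-w patterns
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            c = (y + e) % 2                        # strip the guessed noise effect
            if not ((H @ c) % 2).any():            # codebook used as a hash check
                return c, e
    return None, None                              # declare an erasure

y = np.array([1, 1, 1, 0, 1, 0, 0])  # codeword [1,1,1,0,0,0,0] with bit 5 flipped
c, e = grand_decode(y, H)
print("decoded codeword:", c, "noise guess:", e)
```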

This tutorial will introduce the algorithmic basis of GRAND for universal hard and soft decoding. It will introduce the audience to the evaluation of GRAND’s performance in terms of common metrics (such as block error rate, bit error rate, and complexity) for a wide variety of codes, and it will present hardware architectures for GRAND, highlighting themes such as parallelism and pipelining.

This tutorial will introduce GRAND in three parts. The first will be an algorithmic discussion centered on GRAND for hard detection. The tutorial will consider both the theory behind GRAND, including its optimality, and the performance of GRAND, particularly showing that even random linear codes perform well when decoded with an accurate decoder. We show how any statistical information about the noise, such as knowing that it is correlated (for example because it originates as interference), can be used to aid noise guessing and possibly obviate the use of interleavers. This first part of the tutorial will be presented by Professor Médard and will take approximately 70 minutes.

The second part of the tutorial will consider the issue of soft information. We shall begin with an ML decoder that we call SGRAND, a development of the hard-detection ML decoder GRAND that fully avails of soft detection information by giving a tree structure to all possible error vectors. We then present approaches for simplifying SGRAND, based on the relative ordering of symbol errors, that approximate SGRAND well while keeping the complexity of generating the guessing order and of decoding very light. This part of the tutorial will also examine the architectural issues around implementing GRAND; key among these is the design of ordered noise-guessing sequences. This second part of the tutorial, led by Professor Duffy, will take approximately 70 minutes.

The end of the tutorial, taught by the two instructors, will be an overview of research questions related to GRAND, in the domains of theoretical properties, algorithmic development, and architectural aspects. This part of the tutorial will encourage discussion with the participants, with multiple possible entry points depending on the interests and backgrounds of the audience. This final part is scheduled for about 30 minutes. The total length of the tutorial is three hours, with 10 minutes of break, where informal conversations and networking can take place.

Bios of Speakers

Muriel Médard is the Cecil H. and Ida Green Professor in the Electrical Engineering and Computer Science (EECS) Department at MIT, where she leads the Network Coding and Reliable Communications Group in the Research Laboratory for Electronics at MIT. She obtained three Bachelor's degrees (EECS 1989, Mathematics 1989 and Humanities 1991), as well as her M.S. (1991) and Sc.D. (1995), all from MIT. She is a Member of the US National Academy of Engineering (elected 2020), a Fellow of the US National Academy of Inventors (elected 2018), and a Fellow of the Institute of Electrical and Electronics Engineers (elected 2008). Muriel was elected president of the IEEE Information Theory Society in 2012, and served on its board of governors for eleven years. She holds an Honorary Doctorate from the Technical University of Munich (2020).

She was co-winner of the MIT 2004 Harold E. Egerton Faculty Achievement Award and was named a Gilbreth Lecturer by the US National Academy of Engineering in 2007. She received the 2017 IEEE Communications Society Edwin Howard Armstrong Achievement Award and the 2016 IEEE Vehicular Technology James Evans Avant Garde Award. She received the 2019 Best Paper award for IEEE Transactions on Network Science and Engineering, the 2018 ACM SIGCOMM Test of Time Paper Award, the 2009 IEEE Communication Society and Information Theory Society Joint Paper Award, the 2009 William R. Bennett Prize in the Field of Communications Networking, the 2002 IEEE Leon K. Kirchmayer Prize Paper Award, as well as eight conference paper awards. Most of her prize papers are co-authored with students from her group.

She has served as technical program committee co-chair of ISIT (twice), CoNext, WiOpt, WCNC and of many workshops. She has chaired the IEEE Medals committee, and served as member and chair of many committees, including as inaugural chair of the Millie Dresselhaus Medal. She was Editor in Chief of the IEEE Journal on Selected Areas in Communications and has served as editor or guest editor of many IEEE publications, including the IEEE Transactions on Information Theory, the IEEE Journal of Lightwave Technology, and the IEEE Transactions on Information Forensics and Security. She was a member of the inaugural steering committees for the IEEE Transactions on Network Science and for the IEEE Journal on Selected Areas in Information Theory.

She has over fifty US and international patents awarded, the vast majority of which have been licensed or acquired. For technology transfer, she has co-founded three companies, CodeOn, for which she consults, Chocolate Cloud, on whose board she serves, and Steinwurf, for which she is Chief Scientist. Muriel has supervised over 40 master students, over 20 doctoral students and over 25 postdoctoral fellows.

Ken R. Duffy is a Professor of Applied Probability at Maynooth University and is the Director of the Hamilton Institute, the university’s interdisciplinary applied mathematics research institute. He obtained a B.A. (mod) in 1996 and a Ph.D. in 2000, both in mathematics and awarded by Trinity College Dublin. He is one of three co-Directors of the Science Foundation Ireland Centre for Research Training in Foundations of Data Science, which will train over 120 Ph.D. students from 2019 to 2026.

His research encompasses the application of probability and statistics to science and engineering. As a result of broad multidisciplinary interests, his work has been published in mathematics journals (e.g. Annals of Applied Probability, Journal of Applied Probability, Journal of Mathematical Biology), engineering journals (e.g. IEEE Transactions on Information Theory, IEEE Transactions on Network Science and Engineering, IEEE/ACM Transactions on Networking) and scientific journals (e.g. Cell, Nature Communications, Science).

He is a co-founder of the Royal Statistical Society’s Applied Probability Section (2011), co-authored a cover article of Trends in Cell Biology (September, 2012), is a winner of a best paper award at the IEEE International Conference on Communications (2015), and the Best Paper award from IEEE Transactions on Network Science and Engineering (2019). He has had extended invited visits at the Kavli Institute for Theoretical Physics at UC Santa Barbara, The Research Laboratory of Electronics at the Massachusetts Institute of Technology, and the Immunology Division at the Walter and Eliza Hall Institute of Medical Research.

T5. Information Inequalities: Facets of Entropy and Automated Reasoning by Optimization

Siu-Wai Ho, Chee-Wei Tan and Raymond W. Yeung

Information inequalities that involve only Shannon’s information measures are important in information theory. They are the “physical laws” that characterize fundamental limits in applications like communications and cryptography. Since the birth of information theory, the known information inequalities were those implied by the nonnegativity of Shannon’s information measures for discrete random variables; these are categorically known as Shannon-type inequalities. In the late 1990s, information inequalities that are not implied by the nonnegativity of Shannon’s information measures were discovered. The discovery of such inequalities, known as non-Shannon-type inequalities, started a quest for new information inequalities that remains largely uncharted territory. Distinguishing between Shannon-type and non-Shannon-type inequalities, as well as proving or disproving a given information inequality, is a nontrivial task in general. Recently, advances have made it possible for computers to automatically find the shortest proof or the smallest analytic counterexample. This tutorial will present an overview of information inequalities and these recent advances in automatically proving both Shannon-type and non-Shannon-type inequalities (both unconstrained and constrained), with a view towards the underlying theory, computational methodology, and applications. It also offers a comprehensive introduction to the topic of automated reasoning by convex optimization, as well as hands-on use of the AITIP Software-as-a-Service (https://aitip.org) to interactively explore proofs of a variety of problems in information theory, network coding, and machine learning. Finally, some open issues in information inequalities and automated reasoning in information theory will be discussed, along with open-source code and datasets to inspire attendees to make new discoveries using information inequalities.
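
For readers new to the topic, the first non-Shannon-type inequality discovered, due to Zhang and Yeung (1998), gives a sense of what is at stake: it holds for every joint distribution of four discrete random variables A, B, C, D, yet cannot be deduced from the basic inequalities I(X;Y|Z) >= 0 alone.

```latex
2\,I(C;D) \;\le\; I(A;B) + I(A;C,D) + 3\,I(C;D \mid A) + I(C;D \mid B).
```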

Bios of Speakers

Siu-Wai Ho is a Senior Research Fellow with the Teletraffic Research Centre, University of Adelaide, Adelaide, S.A., Australia. He received the B.Eng., M.Phil., and Ph.D. degrees in information engineering from The Chinese University of Hong Kong, Hong Kong, in 2000, 2003, and 2006, respectively. He served as the general chair of the first Australian Information Theory School.

Dr. Ho received the Croucher Foundation Fellowship from 2006 to 2008, the 2008 Young Scientist Award from the Hong Kong Institution of Science, the UniSA Research SA Fellowship from 2010 to 2013, and the Australian Research Council Australian Post-Doctoral Fellowship from 2010 to 2013. His project received the 2016 National iAward (consumer category) from the Australian Information Industry Association. He was a co-recipient of the Best Paper Award at the 2016 IEEE/IET International Symposium on Communication Systems, Networks and Digital Signal Processing and the Best Student Paper Award at the 2016 Australian Communication Theory Workshop. With his PhD student, he received an honorary mention in the 2015 ComSoc Student Competition “Communications Technology Changing the World”, organized by the IEEE Communications Society.

Chee-Wei Tan received the M.A. and Ph.D. degrees in electrical and computer engineering from Princeton University. His research interests are in wireless networks, artificial intelligence, optimization and distributed machine learning. He is an IEEE Communications Society Distinguished Lecturer and has been an Associate Editor of IEEE Transactions on Communications and IEEE/ACM Transactions on Networking.

Raymond W. Yeung received the BS, MEng and PhD degrees in electrical engineering from Cornell University. He was with AT&T Bell Laboratories from 1988 to 1991. He joined CUHK in 1991 and has been with the Department of Information Engineering since then, where he is currently the Choh-Ming Li Professor of Information Engineering and a Co-Director of the Institute of Network Coding. He is the author of the books A First Course in Information Theory (Kluwer Academic/Plenum Publishers, 2002) and Information Theory and Network Coding (Springer, 2008), which have been adopted by over 100 institutions around the world. In spring 2014, he gave the world’s first MOOC on information theory, which has since reached over 60,000 students.

He has received a number of awards for his research contributions, most recently the 2021 IEEE Richard W. Hamming Medal and the 2022 Claude E. Shannon Award. He is a Fellow of the IEEE, the Hong Kong Academy of Engineering Sciences, and the Hong Kong Institution of Engineers.

T6. Model-based Deep Learning

Yonina Eldar and Nir Shlezinger

Recent years have witnessed dramatically growing interest in machine learning (ML). These data-driven trainable structures have demonstrated unprecedented success in various applications, including computer vision and speech processing. The benefits of ML-driven techniques over traditional model-based approaches are twofold: first, ML methods are independent of the underlying stochastic model, and thus can operate efficiently in scenarios where this model is unknown or its parameters cannot be accurately estimated; second, when the underlying model is extremely complex, ML has the ability to extract meaningful information from the observed data. Nonetheless, not every problem should be solved using deep neural networks (DNNs). In scenarios for which model-based algorithms exist and are feasible, these analytical methods are typically preferable to ML schemes due to their performance guarantees and possible proven optimality. Among the notable areas where model-based schemes are typically preferable, and whose characteristics are fundamentally different from conventional deep learning applications, are communications, coding, and signal processing. In this tutorial, we present methods for combining DNNs with model-based algorithms. We will show hybrid model-based/data-driven implementations that arise from classical methods in communications and signal processing, and demonstrate how fundamental classical techniques can be implemented without knowledge of the underlying statistical model while achieving improved robustness to uncertainty.
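
As a small taste of the hybrid methodology, the sketch below unfolds the classical ISTA iterations for sparse recovery (y = Ax + noise) into a layered computation, the starting point of LISTA-style networks; in a trained variant, the per-layer matrices and thresholds become learnable parameters. This is our illustrative example with made-up dimensions, not code from the tutorial.

```python
# Deep unfolding of ISTA for sparse recovery (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20, 40, 3                        # measurements, signal dimension, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

def soft(z, t):                            # soft-thresholding: the "activation"
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
n_layers = 50
# In ISTA these are fixed by the model; in a LISTA-style network, W1, W2, and
# the per-layer thresholds theta would be trained from data.
W1 = A.T / L
W2 = np.eye(n) - (A.T @ A) / L
theta = np.full(n_layers, 0.1 / L)

x = np.zeros(n)
for t in range(n_layers):                  # one "layer" per unfolded iteration
    x = soft(W1 @ y + W2 @ x, theta[t])

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))
print("true support:     ", np.flatnonzero(x_true))
```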

Bios of Speakers

Yonina C. Eldar is a Professor with the Department of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel. She was previously a Professor with the Department of Electrical Engineering, Technion, where she held the Edwards Chair in Engineering. She is also a Visiting Professor at MIT, a Visiting Scientist at the Broad Institute, and an Adjunct Professor at Duke University, and was a Visiting Professor at Stanford. Her research interests are in the broad areas of statistical signal processing, sampling theory and compressed sensing, learning and optimization methods, and their applications to biology and optics.

She is a member of the Israel Academy of Sciences and Humanities (elected 2017) and a EURASIP Fellow. She has received numerous awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award (2013), the IEEE/AESS Fred Nathanson Memorial Radar Award (2014), and the IEEE Kiyo Tomiyasu Award (2016). She was a Horev Fellow of the Leaders in Science and Technology program at the Technion and an Alon Fellow. She received the Michael Bruno Memorial Award from the Rothschild Foundation, the Weizmann Prize for Exact Sciences, the Wolf Foundation Krill Prize for Excellence in Scientific Research, the Henry Taub Prize for Excellence in Research (twice), the Hershel Rich Innovation Award (three times), the Award for Women with Distinguished Contributions, the Andre and Bella Meyer Lectureship, the Career Development Chair at the Technion, the Muriel & David Jacknow Award for Excellence in Teaching, and the Technion’s Award for Excellence in Teaching (twice). She has received several best paper and best demo awards together with her research students and colleagues, including the SIAM Outstanding Paper Prize and the IET Circuits, Devices and Systems Premium Award, and was selected as one of the 50 most influential women in Israel.

She was a member of the Young Israel Academy of Science and Humanities and the Israel Committee for Higher Education. She is the Editor-in-Chief of Foundations and Trends in Signal Processing, a member of the IEEE Sensor Array and Multichannel Technical Committee, and serves on several other IEEE committees. In the past, she was a Signal Processing Society Distinguished Lecturer, member of the IEEE Signal Processing Theory and Methods and Bio Imaging Signal Processing technical committees, and served as an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING, the EURASIP Journal of Signal Processing, the SIAM Journal on Matrix Analysis and Applications, and the SIAM Journal on Imaging Sciences. She was Co-Chair and Technical Co-Chair of several international conferences and workshops.

Nir Shlezinger is an Assistant Professor in the School of Electrical and Computer Engineering at Ben-Gurion University, Israel. He received his B.Sc., M.Sc., and Ph.D. degrees in 2011, 2013, and 2017, respectively, from Ben-Gurion University, Israel, all in electrical and computer engineering. From 2017 to 2019 he was a postdoctoral researcher at the Technion, and from 2019 to 2020 he was a postdoctoral researcher at the Weizmann Institute of Science, where he was awarded the FGS prize for outstanding research achievements. His research interests include communications, information theory, signal processing, and machine learning.