IEEE International Conference on Machine Learning for Communication and Networking
5–8 May 2024 // Stockholm, Sweden

Tutorials

Sunday Half-Day, 5 May 2024, 9:00-12:30

TUT-1: Unlocking the Potential of Data Engineering for Network Management and Orchestration
TUT-2: Will ML Disrupt Wi-Fi? Status and Future of ML Adoption in Wi-Fi
TUT-3: Machine Learning in the IoBNT: Exploiting the Potential of Nano-Scale Communication and Computing
TUT-4: Data-Driven Modelling and Optimization of Green Future Mobile Networks: From Machine Learning to Generative AI
TUT-5: Bridging the Gap: Machine Learning and Wireless Innovation

Sunday Half-Day, 5 May 2024, 14:00-17:30

TUT-6: Semantic and Goal-Oriented Communications for Edge AI: Enabling Technologies and Challenges
TUT-7: Applications of Diffusion Models in Communications
TUT-8: Differentiable Ray Tracing for Radio Propagation Modeling
TUT-9: Generative and Discriminative AI Models for Physical Layer Communication Challenges
TUT-10: Graph-Based Machine Learning for Wireless Communications

 


TUT-1: Unlocking the Potential of Data Engineering for Network Management and Orchestration

Sunday, May 5, 9:00-12:30
Room: H Building at Teknikringen 33, Room A
Speakers: Jorge Baranda; Engin Zeydan; Josep Mangues-Bafalluy

Abstract:

This tutorial provides an in-depth study of the convergence of data engineering with the network management and orchestration domain. The first part presents background on data engineering technologies; the second part covers the application of those technologies to network management problems and includes a step-by-step walkthrough of one of the demos. Participants do not need any prerequisites. The application of data engineering concepts to the management and orchestration of telecommunication networks is an emerging topic that will only grow in importance in the coming years. Attendees will have the chance to learn about data engineering for networking and its applications from the perspective of telecommunication operators.

Biographies:

Engin Zeydan received the PhD degree in February 2011 from the Department of Electrical and Computer Engineering at the Stevens Institute of Technology, Hoboken, NJ, USA. Since November 2018, he has been a Senior Researcher in the Communication Networks Division of the CTTC. He was a part-time instructor at the Electrical and Electronics Engineering Department of Ozyegin University, Istanbul, Turkey, between January 2015 and June 2018. His research areas include data engineering/science for telecommunication networks.

Josep Mangues-Bafalluy received the PhD degree in Telecommunications in 2003 from the Technical University of Catalonia (UPC). He is Senior Researcher and Head of the Services as Networks (SAS) Research Unit of the CTTC. He has given keynotes in conferences/workshops (MONAMI 2015, Mobislice 2020). His research interests include SDN and NFV applied to next generation mobile networks and data science/engineering- and AIML-based network automation.

Jorge Baranda received the M.S. and Ph.D. degrees in telecommunications engineering from the Technical University of Catalonia (UPC), in 2008 and 2021, respectively. He is currently a Senior Researcher within the Services as Networks (SaS) Research Unit at the Centre Tecnològic de Telecomunicacions Catalunya (CTTC), Barcelona. At CTTC, he has participated in several European, national, and industrial projects related to the management and orchestration of mobile networks using SDN/NFV, efficient routing strategies for mobile network backhauling, and novel wireless communication systems. He has coauthored over 60 different peer-reviewed journal and conference papers.


TUT-2: Will ML Disrupt Wi-Fi? Status and Future of ML Adoption in Wi-Fi

Sunday, May 5, 9:00-12:30
Room: H Building at Teknikringen 33, Room B
Speakers: Boris Bellalta; Katarzyna Kosek-Szott; Szymon Szott; Francesc Wilhelmi

Abstract:

The convergence of Wi-Fi and machine learning (ML) has a chance to revolutionize next-generation wireless local area networks. Future Wi-Fi networks will need to address critical design limitations to meet the requirements of emergent Internet-based applications such as extended reality (XR) or multi-sensory communications. ML has shown impressive results in many applications, such as image recognition and chatbots, and is becoming a cornerstone technology in many fields. The potential benefits of using ML in Wi-Fi networks include improved reliability, higher performance, and support for technical innovations in upcoming standards. This tutorial will provide an overview of the recent Wi-Fi evolution and trends, a description of ML methods adopted in Wi-Fi, and how these technologies can benefit from one another in the upcoming years. The main focus of this tutorial will be on the state-of-the-art solutions integrating ML and Wi-Fi, showcased through relevant use cases. Furthermore, to reinforce the concepts covered in the tutorial, we will provide on-site hands-on activities where participants will train and use ML models to address key Wi-Fi problems. Finally, we will outline future research areas in this field.

Biographies:

Prof. Boris Bellalta is a Full Professor at Universitat Pompeu Fabra (UPF), where he heads the Wireless Networking group. His research interests are in the area of wireless networks and performance evaluation, with emphasis on Wi-Fi technologies, and Machine Learning-based adaptive systems. His recent works on spatial reuse, spectrum aggregation, TXOP sharing and multi-link operation for next-generation Wi-Fi have received special attention from the research community and industry, with on-going collaborations with CISCO and Nokia to work towards Wi-Fi 8. He is currently involved as principal investigator and coordinator in several EU, national and industry funded research projects that aim to push forward our understanding of complex wireless systems, and in particular, contribute to the design of future wireless networks to support XR immersive and holographic communications. The results from his research have been published in 150+ international journal and conference papers. He has supervised 15 PhD students. At UPF he is teaching a course on Machine Learning for Networking, and Wi-Fi in particular.

Prof. Katarzyna Kosek-Szott received her M.Sc. and Ph.D. degrees in telecommunications from the AGH University of Science and Technology, Krakow, Poland, in 2006 and 2011, respectively, and her habilitation degree in 2016. Currently, she is an associate professor at the Institute of Telecommunications, AGH University. Her research interests are focused mostly on wireless LANs, quality of service provisioning in 802.11 networks, novel 802.11 amendments, 5G and beyond, and the coexistence of different radio technologies in unlicensed bands. She has been involved in several European projects (DAIDALOS II, CONTENT, CARMEN, FLAVIA, PROACTIVE, RESCUE) as well as grants supported by the Polish Ministry of Science and Higher Education and the National Science Centre. She has co-authored 70+ research papers. From 2015 to 2020 she served as a member of the editorial board of the Ad Hoc Networks journal published by Elsevier, and since September 2020 she has been a member of the editorial board of the ITU-J FET: Journal on Future and Evolving Technologies. In 2018 and 2020 she served as a European Research Council panel member for the Systems and Communication Engineering area.

Prof. Szymon Szott received his MSc and PhD degrees in telecommunications (both with honors) from the AGH University of Science and Technology, Krakow, Poland in 2006 and 2011, respectively. Currently he is working as an associate professor at the Institute of Telecommunications of AGH University. In 2013, he was a visiting researcher at the University of Palermo (Italy) and at Stanford University (USA). His professional interests are related to wireless local area networks (channel access, quality of service, security, inter-technology coexistence). He is an IEEE 802.11 Working Group member and, in the past, he has been a member of ETSI’s Network Technology Working Group Evolution of Management towards Autonomic Future Internet (AFI), a senior member of IEEE, and on the management board of the Association of Top 500 Innovators. He is a reviewer for international journals and conferences. He has been involved in several European projects as well as grants supported by the Ministry of Science and Higher Education and the National Science Centre, Poland. He is the author or co-author of over 70 research papers.

Dr. Francesc Wilhelmi is a research engineer at Nokia Bell Labs. He holds a Ph.D. in Information and Communication Technologies (2020) and an M.Sc. in Intelligent and Interactive Systems (2016) from Universitat Pompeu Fabra (UPF). Previously, he was a researcher at Centre Tecnològic de Telecomunicacions de Catalunya (CTTC). His main research interests are in Wi-Fi technologies and their evolution, network simulators and network digital twinning, machine learning, decentralized learning, and distributed ledger technologies. In the past, he was also involved in standardization activities within the ITU-T, where he was one of the main editors of recommendation Y.3181 “Architectural framework for Machine Learning Sandbox in future networks including IMT-2020”. In addition, he has organized three problem statements in the ITU AI/ML in 5G Challenge.


TUT-3: Machine Learning in the IoBNT: Exploiting the Potential of Nano-Scale Communication and Computing

Sunday, May 5, 9:00-12:30
Room: H Building at Teknikringen 33, Room C
Speakers: Roya Khanzadeh; Jorge Torres Gómez; Werner Haselmayr

Abstract:

This tutorial introduces the role of Machine Learning (ML) algorithms in the Internet of Bio-Nano Things (IoBNT), an emerging communication framework. The tutorial aims to identify the need for ML-enabled communication and computing techniques, specifically advocating for the bio-inspired Molecular Communications (MC) paradigm. It addresses the limitations and challenges of existing model-based approaches in MC systems and emphasizes the potential of ML and Explainable Artificial Intelligence (XAI) in IoBNT networks. It also covers possible practical realizations of ML models at the nano-scale. The tutorial aims to bridge the gap between ML-driven methods and existing IoBNT challenges by providing researchers with fundamental knowledge and application examples of ML in this emerging field.

Biographies:

Roya Khanzadeh is a postdoc researcher at the Institute for Communications Engineering and RF Systems at Johannes Kepler University (JKU) Linz, Austria. She received her Master’s and Ph.D. degrees (Hons) in telecommunication engineering in 2014 and 2022, respectively. Her current research interests include the design and analysis of novel communication systems through the utilization of machine learning algorithms for molecular communications and trustworthy Internet of Things networks. She has worked as a senior expert in the R&D section of telecom companies for two years and has served as a project manager in several industrial telecom projects for more than 6 years. She is an IEEE member and has served as a TPC Member for IEEE conferences, such as IEEE GLOBECOM, INFOCOM, WCNC and ICMLCN as well as a reviewer for international peer-reviewed journals such as IEEE ACCESS.

Jorge Torres Gómez received the B.Sc., M.Sc., and Ph.D. degrees from the Technological University of Havana, CUJAE, Cuba, in 2008, 2010, and 2015, respectively. He is currently a Senior Researcher with the Telecommunication Networks Group, Department of Telecommunication Systems, Technical University of Berlin. He is an IEEE Senior Member and serves as head of educational activities at the IEEE Germany Section, supporting teaching activities and chairing conference events in education. From 2008 to 2018, he was a Lecturer with the School of Telecommunications and Electronics at CUJAE. He has also been a guest lecturer with the Department of Signal Theory and Communications, Carlos III University of Madrid, Leganés Campus, Madrid, Spain, and a postdoc with the Department of Digital Signal Processing and Circuit Technology, Chemnitz University of Technology. His research interests include the age of information, molecular communications, digital signal processing, software-defined radio, and wireless and wired communication systems.

Werner Haselmayr is an Associate Professor at the Institute for Communications Engineering and RF-Systems, Johannes Kepler University (JKU) Linz, Austria. He received his Ph.D. and Habilitation degrees from JKU in 2013 and 2020, respectively. His research interests include the design and analysis of synthetic molecular communication systems and communications and networking in droplet-based microfluidic systems. He has given several invited talks and tutorials on various aspects of droplet-based communications and networking. Furthermore, he has authored 2 book chapters and more than 80 papers that have appeared in top-level international peer-reviewed journals and conference proceedings. He is an IEEE member and serves as an Associate Editor for the IEEE Transactions on Molecular, Biological, and Multi-Scale Communications (TMBMC) and as a Guest Editor for the IEEE TMBMC and Elsevier Digital Signal Processing. Moreover, he co-organized the 4th and 5th Workshop on Molecular Communications in 2019 and 2021, and since 2019 he has been a steering committee member of the Workshop on Molecular Communications.


TUT-4: Data-Driven Modelling and Optimization of Green Future Mobile Networks: From Machine Learning to Generative AI

Sunday, May 5, 9:00-12:30
Room: H Building at Teknikringen 33, Room D
Speakers: Nicola Piovesan; Antonio De Domenico; David López-Pérez

Abstract:

The fifth generation (5G) of radio technology is revolutionizing our everyday lives by enabling a high degree of automation through its larger capacity, massive connectivity, and ultra-reliable low-latency communications. Moreover, 5G technology is, for the first time, allowing cellular systems to expand into new ecosystems, thus impacting every industry. Despite its unprecedented capabilities, however, 5G networks can, and must, further improve in certain key technology areas, such as energy efficiency. While current third generation partnership project (3GPP) new radio (NR) deployments provide an improved energy efficiency of around 4x w.r.t. 3GPP long term evolution (LTE) ones, they still consume up to 3x more energy. This is mostly due to the additional processing required to handle wider bandwidths and larger numbers of antennas, and it results in increased carbon emissions and electricity bills for operators. Even though the 3GPP NR specification provides a rich set of tools to meet IMT-2020 energy efficiency requirements, such as carrier, channel, and symbol shutdown, among others, one of the main energy consumption challenges of 5G networks is the complexity of their optimization in wide-area deployments: a large-scale, stochastic, non-convex and non-linear optimization problem. In light of the increasing interest in this field, this one-of-a-kind tutorial shares the authors' industrial and academic views on the 5G energy efficiency problem. In more detail, the tutorial provides a fresh look at energy efficiency enabling technologies in 3GPP NR, in particular massive MIMO, carrier aggregation, the lean carrier design and different shutdown methods.
Leveraging the concepts of big data and machine learning, the tutorial presents practical scenarios in which data collected from thousands of base stations can be successfully used to derive accurate machine learning models for the main building blocks of the energy efficiency optimization problem. Furthermore, it explores the possibility of a future where generative AI plays a central role in autonomously generating explainable models, thus advancing the quest for energy-efficient networks. In this context, the tutorial delves into the adoption of large language models (LLMs) in industry and the need for foundational telecom models and specific evaluation frameworks for generative AI.
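As a toy illustration of the kind of data-driven modelling described above, the sketch below fits a simple affine power model, static power plus a load-dependent slope, to synthetic per-base-station measurements by ordinary least squares. The model form and all numbers are illustrative assumptions, not the presenters' actual models or data.

```python
import numpy as np

# Hypothetical example: fit P = p0 + k * load to synthetic, noisy
# base-station power readings. All values here are made up.
rng = np.random.default_rng(0)

true_p0, true_k = 120.0, 300.0          # static power (W), load slope (W per unit load)
load = rng.uniform(0.0, 1.0, size=500)  # normalized resource utilization
power = true_p0 + true_k * load + rng.normal(0.0, 5.0, size=500)

# Ordinary least squares with design matrix [1, load]
X = np.column_stack([np.ones_like(load), load])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
p0_hat, k_hat = coef
print(f"estimated static power: {p0_hat:.1f} W, slope: {k_hat:.1f} W/load")
```

In practice, the tutorial's scenarios involve far richer features (bandwidth, antenna configuration, shutdown states), but the same principle applies: measured data replaces hand-derived power models.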

Biographies:

Dr. Nicola Piovesan is a Senior Researcher at Huawei Technologies in Paris, France, and his main research focuses on large-scale network modeling and optimization, green communications, and machine learning in wireless communication systems. He earned his PhD in Network Engineering from the Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, in 2020, and he received the BSc degree in Information Engineering and the MSc in Telecommunication Engineering from the University of Padova, Italy, in 2013 and 2016, respectively. In 2016, he was awarded a European Commission Marie Sklodowska-Curie fellowship. From 2016 to 2019, he worked as an Assistant Researcher at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Barcelona, Spain, focusing on developing new architectures and algorithms to integrate renewable energy sources into sustainable mobile networks.

Dr. Antonio De Domenico received the M.Sc. degree in telecommunication engineering from the University of Rome La Sapienza in 2008 and the Ph.D. degree in telecommunication engineering from the University of Grenoble in 2012. From 2012 to 2019, he was a Research Engineer with CEA LETI MINATEC, Grenoble, France. In 2018, he was a Visiting Researcher with the University of Toronto, Canada. Since 2020, he has been a Senior Researcher with Huawei Technologies, Paris Research Center, France. He is the main inventor or a co-inventor of more than 25 patents. His research interests include heterogeneous wireless networks, machine learning, and green communications. He is currently an Editor of Wireless Communications and Mobile Computing.

Dr. David López-Pérez has been a Distinguished Researcher at the Universitat Politècnica de València since 2023. With a diverse and interdisciplinary background, David has consistently blended his expertise in wireless communications, mathematics, and artificial intelligence throughout his career. Prior to this, David was an Expert and Technical Leader at Huawei Technologies, Paris, and a Distinguished Member of Technical Staff at Nokia Bell Labs, Dublin. Throughout his career, David has been dedicated to exploring the intricate realms of both cellular and Wi-Fi networks. His research interests encompass large-scale network modeling, network performance analysis, network planning and optimization, green networking, drone communications, big data, machine learning, and innovative technology and feature development. Notably, David has contributed significantly to advancing our understanding of small cells and ultra-dense networks. He has pioneered groundbreaking work in areas such as cellular and Wi-Fi inter-working, network energy efficiency, and the development of multi-antenna capabilities and ultra-reliable low-latency features for hyper-critical wireless networks. He was recognized as a Bell Labs Distinguished Member of Staff in 2019. He has published 13 book chapters, 65 journal articles and 97 conference papers on a diverse array of related topics. Beyond academia, David's innovation shines through his patent portfolio, which comprises over 60 patent applications.


TUT-5: Bridging the Gap: Machine Learning and Wireless Innovation

Sunday, May 5, 9:00-12:30
Room: H Building at Teknikringen 33, Room E
Speakers: Hina Tabassum; Aryan Kaushik

Abstract:

The neXt generation (XG) of wireless networks is anticipated to be more complex and heterogeneous due to higher transmission frequencies, massive internet-of-things (IoT) devices in air-space-ground networks, and ultra-dense access points. As a consequence, the wireless channel coherence time is shrinking, and fast online decision making is becoming more critical than ever before. This tutorial will provide a holistic view of how artificial intelligence (AI) can support future wireless networks. In the first module, starting from the basics of deep unsupervised learning (DUL), we will present how DUL enables faster radio resource management (RRM) without high-quality training labels. Nevertheless, incorporating and satisfying convex/non-convex constraints with zero constraint violation is a fundamental challenge; differentiable projection-based approaches will be discussed in this regard. The second part of the tutorial will focus on the applications of deep reinforcement learning (DRL) in vehicular networks and non-terrestrial networks (NTN). Vehicle-to-infrastructure (V2I) communication is becoming critical for the enhanced reliability of autonomous vehicles (AVs). However, uncertainties in road traffic and in the AVs' wireless connections can severely impair timely decision-making. It is thus critical to develop resilient and rapid decision-making algorithms that optimize the AVs' network selection and driving policies so as to minimize road collisions while maximizing communication data rates. This tutorial will therefore present a multi-objective DRL framework to characterize efficient network selection and autonomous driving policies. Finally, the tutorial will cover the applications of DRL in edge and cloud computing-based NTN-IoT networks and the use of AI solutions for energy-efficient NTN-IoT. It will also touch upon the key usage scenarios in the IMT-2030/6G framework and how intelligent multi-function approaches can be exploited in 6G networks.
The tutorial will conclude by pointing out game-changing directions for further research, such as generative AI and quantum-inspired AI.
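To make the "zero constraint violation" idea concrete, the sketch below shows one simple differentiable map from the unconstrained outputs of a learned RRM policy onto a feasible power allocation. The softmax-based projection is an illustrative choice on our part, not necessarily the specific method the presenters will cover.

```python
import numpy as np

# Sketch of a differentiable projection: map raw policy outputs z onto
# the feasible set {p >= 0, sum(p) = p_max}, so the total-power constraint
# holds by construction and gradients still flow through the map.
def project_powers(z, p_max):
    e = np.exp(z - z.max())        # numerically stable softmax
    return p_max * e / e.sum()     # non-negative, sums to p_max

z = np.array([0.3, -1.2, 2.0, 0.0])   # arbitrary unconstrained outputs
p = project_powers(z, p_max=10.0)
print(p, p.sum())                     # all entries >= 0, total power 10.0
```

Because the projection is differentiable, a deep unsupervised learner can be trained end-to-end on a rate objective while never emitting an infeasible allocation.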

Biographies:

Prof. Hina Tabassum is an Associate Professor at the Lassonde School of Engineering, York University, Canada, where she joined as an Assistant Professor in 2018. In 2023, she was appointed York Research Chair on 5G/6G-enabled mobility and sensing applications. She received her PhD from the King Abdullah University of Science and Technology (KAUST) and completed postdoctoral research at the University of Manitoba, Canada, in 2018. She received the Lassonde Innovation Early-Career Researcher Award in 2023, was named one of the N2Women: Rising Stars in Computer Networking and Communications in 2022, and was listed in Stanford's list of the World's Top Two-Percent Researchers in 2021 and 2022. She is the founding chair of the SIG on THz communications in the IEEE Communications Society Radio Communications Committee (RCC). She is a Senior Member of IEEE and a registered Professional Engineer in the province of Ontario, Canada. She has published 85+ refereed articles in well-reputed IEEE journals, magazines, and conferences. Currently, she is serving as an Area Editor of IEEE OJCOMS and an Associate Editor of IEEE TCOM, IEEE TGCN, IEEE Communications Surveys and Tutorials, and IEEE IoT Magazine.

Prof. Aryan Kaushik has been an Assistant Professor at the University of Sussex, UK, since 2021. He has previously been with University College London, UK, the University of Edinburgh, UK, and HKUST, Hong Kong, and has held visiting appointments at Imperial College London, UK, the University of Luxembourg, Luxembourg, Athena RC, Greece, and Beihang University, China. He has served as a panel member for the UKRI EPSRC ICT Prioritisation Panel 2023, as editor of two upcoming Elsevier books on ISAC and NTN, and as an external PhD examiner, for example at UC3M, Spain. He has been an Editor for IEEE Communications Technology News, IEEE OJCOMS, and IEEE Communications Letters, and a Guest Editor for IEEE IoT Magazine, IEEE OJCOMS, and several other venues. He has been an invited panel speaker at IEEE VTC-Spring 2023, EuCNC & 6G Summit 2023, an IEEE Globecom 2023 workshop, an IEEE PIMRC 2023 workshop, and IEEE BlackSeaCom 2023, and a tutorial/invited speaker at IEEE Globecom 2023, IEEE WCNC 2023, EuCNC & 6G Summit 2023, and many other conferences and events worldwide. He has served on organizing and technical program committees, for example at IEEE ICC 2024, IEEE ICMLCN 2024, and IEEE WCNC 2023-24, and has chaired numerous workshops, including at IEEE ICC 2024, IEEE WCNC 2023-24, IEEE Globecom 2023, and IEEE PIMRC 2022-23.


TUT-6: Semantic and Goal-Oriented Communications for Edge AI: Enabling Technologies and Challenges

Sunday, May 5, 14:00-17:30
Room: H Building at Teknikringen 33, Room A
Speakers: Emilio Calvanese Strinati; Mohamed Sana; Mattia Merluzzi

Abstract:

Today, as part of the race to 6G, Artificial Intelligence (AI), Machine Learning (ML), and wireless communications are converging into a unique system in which data are constantly collected, processed and exchanged among heterogeneous agents. The tremendous overhead and the need for computing resources make it infeasible to encode and transmit these data with classical approaches while chasing challenging sustainability targets without degrading performance: the limits imposed by energy, wireless capacity, and hardware constraints are close to being reached. The goal of this tutorial is to present the new paradigm of semantic and goal-oriented communications to enable edge AI and ML, from a wireless and computing perspective. Starting from the challenges of 6G in terms of use cases, key performance and value indicators, the tutorial will first focus on the communication perspective, with semantic extraction as a way of reducing communication overhead while correctly conveying desired messages and achieving target goals. Then, because future services involve computing resources, the focus will shift to goal-oriented connect-compute resource orchestration, to enable edge AI with target energy, latency, and accuracy. Overall, the tutorial brings together notions from communication theory, AI, ML and signal processing, to provide learners with tools to contribute to the emerging field of edge intelligence.

Biographies:

Dr. Emilio Calvanese Strinati (Ph.D. in 2005, HDR in 2018) started working at Motorola Labs in 2002. In 2006, he joined CEA LETI as a research engineer, and since 2007 he has supervised PhD students. From 2010 to 2012, he was the co-chair of the wireless working group of the GreenTouch Initiative, which dealt with the design of future energy-efficient communication networks. From 2011 to 2016, he was the IoT & Telecommunications Programs Director, and then, until 2020, the IoT & Telecommunications Scientific and Innovation Director. Since 2020, he has been the Nanotechnologies and Wireless for 6G Program Director, focusing on future 6G technologies. He has published around 200 papers in journals, conferences, and book chapters, and he has given more than 200 international invited talks, keynotes and tutorials. He is the main inventor of more than 80 patents. He has a strong and successful record of leading and coordinating research activities: he was the main coordinator of 5G-CHAMPIONS, 5GAllstar and RISE-6G, and he is the coordinator of 6G-GOALS and 6G-DISAC. His current research interests are in reconfigurable intelligent surfaces, semantic communications, goal-oriented communications, and AI-native technologies in the context of future 6G networks.

Mohamed Sana received the M.Eng. degree in Signal Processing and Wireless Communications in 2018 from Phelma, a Grenoble INP - UGA engineering school, and the Ph.D. degree in Mathematics and Computer Science from the University of Grenoble Alpes in 2021. During the course of his Ph.D., he authored more than a dozen publications in international conferences and journals, as well as three patents. He has made major technical contributions to European projects (CSP4EU, 5G-CONNI, HEXA-X). Since 2021, he has been a research engineer at CEA-Leti, in the wireless broadband solution laboratory, where he is currently contributing to the EU project DEDICAT 6G. His current research focuses on multi-agent systems and machine learning- and reinforcement learning-based optimization in next-generation mobile networks.

Mattia Merluzzi holds an M.S. degree in telecommunication engineering and received the Ph.D. degree in Information and Communication Technologies from Sapienza University of Rome, Italy, in 2021. He is currently a research engineer at CEA-Leti, France, where he is involved in research activities on 6G. He has participated in the H2020 European 6G flagship project Hexa-X, the H2020 EU/Japan project 5G-Miedge, the H2020 EU/Taiwan project 5G CONNI, and different national research projects in Italy and France. His primary research interests are in beyond-5G systems, stochastic network optimization, and edge machine learning/artificial intelligence. He was the recipient of the 2021 GTTI Best Ph.D. Thesis Award.


TUT-7: Applications of Diffusion Models in Communications

Sunday, May 5, 14:00-17:30
Room: H Building at Teknikringen 33, Room B
Speakers: Muah Kim; Rick Fritschek; Rafael F. Schaefer

Abstract:

Innovations in machine learning have driven advances in many other engineering fields, and communications is one of them. Diffusion models have drawn enormous attention as a potent class of generative models and have demonstrated remarkable sample quality, particularly in computer vision. In communication engineering, recent studies have employed generative models to solve complicated problems such as synthesizing the channel distribution, learning optimal designs of channel codes and signaling schemes, removing channel noise and distortion, and optimizing beamforming. The emergence of diffusion models has opened up the possibility of further development of such topics. This tutorial provides an introduction to denoising diffusion probabilistic models and reviews the applications of diffusion models in communication engineering.
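For readers new to the topic, the sketch below shows the closed-form forward (noising) process that denoising diffusion probabilistic models are built on: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps, with abar_t the cumulative product of (1 - beta_t). The linear beta schedule and toy signal are illustrative choices, not material from the tutorial itself.

```python
import numpy as np

# Minimal DDPM forward process: gradually destroy a clean signal x_0
# with Gaussian noise according to a linear beta schedule.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)   # \bar{alpha}_t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

x0 = np.cos(np.linspace(0, 2 * np.pi, 64))   # a toy "clean signal"
x_mid, _ = q_sample(x0, t=500)               # partially noised
x_end, _ = q_sample(x0, t=999)               # essentially pure noise
print(alphas_bar[999])                       # tiny: almost no signal survives
```

The generative direction, learning a network that predicts eps so the process can be run in reverse, is what the tutorial's communication applications (channel synthesis, denoising) build on.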

Biographies:

Muah Kim is currently pursuing the Dr.-Ing. degree with the Technische Universität Berlin, Germany. She received the B.S. degree in electrical engineering, with a minor in physics, and the M.S. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), South Korea, in 2017 and 2019, respectively. In 2019, she worked as a Research Assistant at the Korea Institute of Science and Technology Europe (KIST-Europe), Germany. Alongside the Ph.D. program, she served as a Research Assistant at the Technische Universität Berlin from 2019 to 2020 and at the Universität Siegen from 2020 to 2022, and since 2022 she has been a Research Assistant at the Technische Universität Dresden. Her research interests include machine learning, information theory and communication engineering.

Rick Fritschek received the B.Sc. degree in electrical engineering from the Hochschule Furtwangen University in 2010, and the M.Sc. and Dr.-Ing. degrees in electrical engineering from the Technische Universität Berlin, the latter in 2018. He worked as a postdoctoral researcher at the Technische Universität Berlin, Freie Universität Berlin, and Universität Siegen before joining Universität Dresden. From 2019 to 2022 he was the principal investigator of the DFG project "Machine Learning for Physical Layer Security" at the Freie Universität Berlin. Since 2023 he has been a postdoctoral researcher at the Universität Dresden. His current research interests are in the areas of information theory and security, machine learning and deep learning, and high-dimensional statistics.

Rafael F. Schaefer is a Professor and head of the Chair of Information Theory and Machine Learning at the Technische Universität Dresden. He received the Dipl.-Ing. degree in electrical engineering and computer science from the Technische Universität Berlin, Germany, in 2007, and the Dr.-Ing. degree in electrical engineering from the Technische Universität München, Germany, in 2012. From 2013 to 2015, he was a Post-Doctoral Research Fellow with Princeton University. From 2015 to 2020, he was an Assistant Professor with the Technische Universität Berlin, Germany, and from 2020 to 2022, he was a Professor with the Universität Siegen. Among his publications is the recent book Information Theoretic Security and Privacy of Information Systems (Cambridge University Press, 2017). He was a recipient of the VDE Johann-Philipp-Reis Prize in 2013 and received the best paper award of the German Information Technology Society (ITG-Preis) in 2016. He is currently an Associate Editor of the IEEE Transactions on Information Forensics and Security and of the IEEE Transactions on Communications, and a Member of the IEEE Information Forensics and Security Technical Committee. He regularly gives tutorials at ComSoc flagship conferences including ICC 2022, GLOBECOM 2022, and the newly established VCC 2023.


TUT-8: Differentiable Ray Tracing for Radio Propagation Modeling

Sunday, May 5, 14:00-17:30
Room: H Building at Teknikringen 33, Rooms C
Speakers: Jakob Hoydis; Fayçal Ait Aoudia; Sebastian Cammerer

Abstract:

Digital Twin Networks (DTNs) are an emerging technology that will allow for the artificial intelligence (AI) and machine learning (ML)-based design, simulation, optimization, and control of 6G systems. Thanks to recent progress in computer graphics, such as neural radiance fields (NeRFs), and other enabling technologies like LIDAR, scene geometries can be obtained with unprecedented accuracy in an automated fashion. However, the simulation of radio wave propagation via ray tracing requires not only accurate scene geometries but also the electromagnetic material properties of all scene objects. Obtaining the latter is a delicate task for which no mainstream solution exists. In this tutorial, we will introduce a potential solution to this problem, called differentiable ray tracing for radio propagation modeling, which is inspired by recently developed inverse rendering techniques in computer graphics. The core idea is to treat the radio propagation simulation as a differentiable function with trainable parameters. This allows for gradient descent-based optimization of various parameters, ranging from permeability and conductivity to antenna and scattering patterns, and even to the orientation and position of objects. Apart from providing a solid theoretical introduction to (differentiable) ray tracing, we will explain different approaches to the creation of scenes and discuss several concrete applications of this novel technology, which opens many exciting directions for future research. Of particular interest are learning-based approaches to aligning ray tracing algorithms and scene descriptions with measurements. All examples shown in the tutorial can be reproduced with the freely available open-source link-level simulator Sionna, which is developed and maintained by the instructors.
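The core idea can be illustrated with a deliberately tiny example: if the propagation model is a differentiable function of a material parameter, that parameter can be recovered from power measurements by gradient descent. The two-ray model, the distances, and the learning rate below are all illustrative assumptions, not the Sionna API; the hand-written gradient stands in for what an autodiff framework would compute for a full ray tracer.

```python
import numpy as np

def received_power(r, d_los, d_ref):
    # Two-ray toy model: direct path plus a reflected path whose
    # amplitude is scaled by the reflection coefficient r.
    field = 1.0 / d_los + r / d_ref
    return field ** 2

rng = np.random.default_rng(0)
d_los = rng.uniform(20.0, 40.0, size=64)  # direct-path lengths (m)
d_ref = 1.3 * d_los                       # reflected-path lengths (m)

r_true = -0.6                             # ground-truth material parameter
p_meas = received_power(r_true, d_los, d_ref)  # synthetic "measurements"

# Calibrate r by gradient descent on the mean squared power error.
r, lr = 0.0, 2.0e4
for _ in range(2000):
    field = 1.0 / d_los + r / d_ref
    resid = field ** 2 - p_meas
    grad = np.mean(2.0 * resid * 2.0 * field / d_ref)
    r -= lr * grad

print(round(r, 3))  # recovered coefficient, close to r_true
```

The same recipe scales conceptually to thousands of trainable material, antenna, and geometry parameters once the whole ray tracer is expressed in an autodiff framework.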

Biographies:

Jakob Hoydis is a Principal Research Scientist at NVIDIA working on the intersection of machine learning and wireless communications. Prior to this, he was head of a research department at Nokia Bell Labs, France, and co-founder of the social network SPRAED. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. He obtained the diploma degree in electrical engineering from RWTH Aachen University, Germany, and the Ph.D. degree from Supélec, France. He is a Distinguished Industry Speaker of the IEEE Signal Processing Society for the 2023-2024 term. From 2019 to 2021, he was chair of the IEEE ComSoc Emerging Technology Initiative on Machine Learning as well as Editor of the IEEE Transactions on Wireless Communications. Since 2019, he has been Area Editor of the IEEE JSAC Series on Machine Learning in Communications and Networks. He is the recipient of several awards, including the 2019 VDE ITG Johann-Philipp-Reis Prize, the 2019 IEEE SEE Glavieux Prize, the 2018 IEEE Marconi Prize Paper Award, the 2015 IEEE Leonard G. Abraham Prize, the IEEE WCNC 2014 Best Paper Award, and the 2013 VDE ITG Förderpreis Award. He is a co-author of the textbook “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency” (2017).

Fayçal Aït Aoudia is a Senior Research Scientist at NVIDIA working on the convergence of wireless communications and machine learning. Before joining NVIDIA, he was a research scientist at Nokia Bell Labs, France. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. He obtained the diploma degree in computer science from the Institut National des Sciences Appliquées de Lyon, France, in 2014, and the Ph.D. degree in signal processing from the University of Rennes 1, France, in 2017. He received the 2018 Nokia AI Innovation Award, as well as the 2018, 2019, and 2020 Nokia Top Inventor Awards.

Sebastian Cammerer is a Senior Research Scientist at NVIDIA. Before joining NVIDIA, he received his Ph.D. in electrical engineering and information technology from the University of Stuttgart, Germany, in 2021. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. His main research topics are machine learning for wireless communications and channel coding. Further research interests include parallel computing for signal processing and information theory. He is the recipient of the IEEE SPS Young Author Best Paper Award 2019, the Best Paper Award of the University of Stuttgart 2018, the Anton- und Klara Röser Preis 2016, and the Rohde&Schwarz Best Bachelor Award 2015; he received the VDE-Preis 2016 for his master's thesis and was a third-prize winner of the Nokia Bell Labs Prize 2019.


TUT-9: Generative and Discriminative AI Models for Physical Layer Communication Challenges

Sunday, May 5, 14:00-17:30
Room: H Building at Teknikringen 33, Rooms D
Speaker: Andrea M. Tonello

Abstract:

Machine learning (ML) for communications is gaining significant interest, although it is not yet clear whether ML can bring disruptive innovation and offer improved solutions with respect to well-established model-based approaches. This tutorial has three distinctive elements: first, it focuses on a timely topic, namely generative and discriminative AI and ML models for signal analysis (learning) and synthesis (generation); second, it highlights the pivotal role played by the estimation of the channel input/output mutual information; third, it considers some relevant problems in physical layer communications: a) optimal detection/decoding, b) joint design of coding/decoding, and c) channel capacity estimation in unknown channels. Finally, it illustrates neural architectures derived from a mathematical formulation of optimality criteria.
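To see why the channel input/output mutual information is the central quantity, consider the one case where everything is known in closed form: a Gaussian input over an AWGN channel. There, a Monte-Carlo estimate of I(X;Y) can be checked against the capacity formula C = ½·log₂(1 + SNR). This baseline sketch assumes the channel law is known; the tutorial's discriminative estimators target exactly the situation where it is not. The SNR value and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
snr = 4.0                                   # linear SNR = signal power / noise power

x = rng.normal(0.0, np.sqrt(snr), size=n)   # Gaussian input with power = snr
y = x + rng.normal(0.0, 1.0, size=n)        # unit-variance AWGN channel

# Monte-Carlo estimate of I(X;Y) = E[log p(y|x) / p(y)], using the
# closed-form Gaussian densities of the conditional and marginal outputs.
log_p_y_given_x = -0.5 * np.log(2 * np.pi) - 0.5 * (y - x) ** 2
var_y = snr + 1.0
log_p_y = -0.5 * np.log(2 * np.pi * var_y) - 0.5 * y ** 2 / var_y

mi_bits = np.mean(log_p_y_given_x - log_p_y) / np.log(2)
capacity = 0.5 * np.log2(1 + snr)
print(mi_bits, capacity)  # both close to 1.16 bits
```

When the channel is unknown, the log-density ratio inside the expectation is no longer available, which is precisely what discriminative (classifier-based) mutual information estimators learn from samples.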

Biography:

Andrea Tonello is a professor of embedded communication systems at the University of Klagenfurt, Austria. He has been an associate professor at the University of Udine, Italy, a technical manager at Bell Labs-Lucent Technologies, USA, and the managing director of Bell Labs Italy, where he was responsible for research activities on cellular technology. He is a co-founder of the spin-off company WiTiKee and holds a part-time associate professor post at the University of Udine, Italy. Dr. Tonello received the Ph.D. from the University of Padova, Italy, in 2002. He was the recipient of several awards, including the Lucent Bell Labs Recognition of Excellence Award (1999), the RAENG (UK) Distinguished Visiting Fellowship (2010), the IEEE Vehicular Technology Society Distinguished Lecturer Award (2011-15), the IEEE Communications Society (ComSoc) Distinguished Lecturer Award (2018-19), the IEEE ComSoc TC-PLC Interdisciplinary and Research Award (2019), the IEEE ComSoc TC-PLC Outstanding Service Award (2019), and the Chair of Excellence from UC3M (2019-20). He also received 10 best paper awards. He was/is an associate editor of IEEE TVT, IEEE TCOM, IEEE Access, IET Smart Grid, and the Elsevier Journal of Energy and Artificial Intelligence. He was the chair of the IEEE ComSoc Technical Committee on PLC (2014-18) and the director for industry outreach on the IEEE ComSoc board of governors (2020-21). He serves as the chair of the IEEE ComSoc Technical Committee on Smart Grid Communications (since 2020).


TUT-10: Graph-Based Machine Learning for Wireless Communications

Sunday, May 5, 14:00-17:30
Room: H Building at Teknikringen 33, Rooms E
Speakers: Santiago Segarra; Ananthram Swami; Zhongyuan Zhao

Abstract:

As communication networks continue to grow in scale and complexity, traditional approaches to network design and operation are becoming inadequate. Machine learning (ML) has garnered significant attention in the communications and signal processing communities for its potential to complement conventional mathematical models in describing complex wireless systems and deriving computationally efficient solutions. However, standard ML methods, such as multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs), struggle to effectively leverage the underlying topology of communication networks, leading to significant performance degradation as network size increases. Graph neural networks (GNNs) have emerged as a promising ML approach that has attracted surging interest within the ML community. GNNs excel at large network scales and complex topologies, outperforming MLPs and CNNs in such scenarios. This timely tutorial provides a gentle introduction to GNNs and explores recent approaches for applying them to solve classical problems in wireless communications and networking, including power allocation, link scheduling, and routing. The emphasis will be on how GNNs can augment, rather than replace, existing solutions to these challenges. The goal of the tutorial is to foster further research and exchange between the communications and ML communities and to serve as a mechanism to get a larger number of researchers involved in this nascent area.
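The way GNNs exploit network topology can be sketched with a single message-passing layer: each node (say, a transmitter) updates its embedding by combining its own feature with an aggregate of its neighbours' features in an interference graph. The graph, feature dimensions, and random weights below are illustrative assumptions, not material from the tutorial; in practice the weights would be trained end-to-end on a network utility such as sum rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5-node interference graph (symmetric adjacency, no self-loops).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

X = rng.normal(size=(5, 3))       # per-node input features (e.g. channel gains)
W_self = rng.normal(size=(3, 4))  # transform of a node's own feature
W_nbr = rng.normal(size=(3, 4))   # transform of the mean neighbour feature

# One message-passing layer: mean-aggregate neighbour features, combine
# with the node's own feature, then apply a ReLU nonlinearity.
deg = A.sum(axis=1, keepdims=True)
H = np.maximum(0.0, X @ W_self + (A @ X / deg) @ W_nbr)

print(H.shape)  # (5, 4): a new 4-dimensional embedding per node
```

Note that `W_self` and `W_nbr` are shared across nodes and do not depend on the graph size, which is exactly why a trained GNN can transfer to larger networks and permuted node orderings, unlike an MLP or CNN whose input dimension is fixed.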

Biographies:

Santiago Segarra received the B.Sc. degree in Industrial Engineering with highest honors (Valedictorian) from the Instituto Tecnológico de Buenos Aires (ITBA), Argentina, in 2011, the M.Sc. in Electrical Engineering from the University of Pennsylvania (Penn), Philadelphia, in 2014, and the Ph.D. degree in Electrical and Systems Engineering from Penn in 2016. From September 2016 to June 2018, he was a postdoctoral research associate with the Institute for Data, Systems, and Society at the Massachusetts Institute of Technology. Since July 2018, Dr. Segarra has been a W. M. Rice Trustee Assistant Professor in the Department of Electrical and Computer Engineering at Rice University. Dr. Segarra received the 2020 IEEE Signal Processing Society Young Author Best Paper Award and five best conference paper awards.

Ananthram Swami received the B.Tech. degree from IIT-Bombay, the M.S. degree from Rice University, and the Ph.D. degree from the University of Southern California (USC), all in Electrical Engineering. He is the Army’s ST for Network Science and is with the DEVCOM Army Research Lab (ARL). Prior to joining ARL, he held positions with Unocal Corporation, USC, CS-3, and Malgudi Systems. He was a statistical consultant to the California Lottery, developed a MATLAB-based toolbox for non-Gaussian signal processing, and has held visiting faculty positions at INP Toulouse and at Imperial College London. Recent awards include a 2020 Army Civilian Service Commendation Medal, the 2018 IEEE ComSoc MILCOM Technical Achievement Award, a 2017 Presidential Rank Award (Meritorious), a 2016 ARL Publication Award, and a 2015 IEEE GLOBECOM best paper award. He holds 5 patents and has over 500 publications. He is a member of the Network Science Society’s steering board and of the IEEE TNSE Steering Committee and has been actively involved in IEEE/ACM.

Zhongyuan Zhao received the B.Sc. and M.S. degrees in Electronic Engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2006 and 2009, respectively, and the Ph.D. degree in Computer Engineering from the University of Nebraska-Lincoln, Lincoln, Nebraska, in 2019. From 2009 to 2013, he worked for ArrayComm and Ericsson as an engineer developing 4G base stations. Since December 2019, Dr. Zhao has been a postdoctoral research associate in Dr. Santiago Segarra’s group in the Department of Electrical and Computer Engineering at Rice University. Dr. Zhao served on the technical program committee of IEEE VTC 2020 and chaired sessions on the topic of machine learning for wireless communications at IEEE ICASSP 2022-2023. Dr. Zhao’s current research interests are machine learning and signal processing for wireless communications and networking.
