KEYNOTE SPEAKERS

Quantum Machine Learning and Optimization for 6G Networks


Trung Q. Duong (IEEE Fellow)
Full Professor, Memorial University, Canada
Dr. Trung Q. Duong (IEEE Fellow, EIC Fellow, and AAIA Fellow) is a Canada Excellence Research Chair and Full Professor at Memorial University of Newfoundland, Canada. He is also an adjunct professor at Queen’s University Belfast, UK. His current research interests include quantum optimisation and machine learning in wireless communications. He has authored or co-authored 600+ publications, with 22,000+ citations and an h-index of 81. He has served as an Editor for many reputable IEEE journals (IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, IEEE Transactions on Vehicular Technology, IEEE Communications Surveys & Tutorials, IEEE Communications Letters, and IEEE Wireless Communications Letters) and has received Best Paper Awards at flagship conferences including IEEE ICC 2014 and IEEE GLOBECOM 2016, 2019, and 2022. He has been awarded a Research Fellowship and a Research Chair by the Royal Academy of Engineering, and in 2017 he received the Newton Prize from the UK government. He is currently the Editor-in-Chief of IEEE Communications Surveys & Tutorials and an IEEE ComSoc Distinguished Lecturer.

Abstract:
Quantum computing uses the concepts of quantum mechanics to offer a massive leap forward in solving complex computational problems. Hybrid quantum-classical machine learning algorithms can significantly enhance processing efficiency and deliver exponential computational speed-ups, making them well suited to guaranteeing the demanding QoS requirements of 6G networks. This talk presents the state of the art in quantum machine learning and optimization and provides a comprehensive overview of its potential. Furthermore, the talk introduces quantum-inspired machine learning and optimization applications for 6G networks, namely 6G channel estimation and RF fingerprinting, considering their enabling technologies and potential challenges. Finally, dominant research issues and future research directions for quantum-inspired machine learning and optimization in 6G networks are elaborated.


When Generative AI Goes Airborne


Gottfried Vossen
University of Münster, Germany
Gottfried Vossen is a Professor of Computer Science in the Department of Information Systems at the University of Münster in Germany and former Dean of the School of Business and Economics. He is a Fellow of the German Computer Science Society and a former Editor-in-Chief of Elsevier's Information Systems - An International Journal. He received his master’s and Ph.D. degrees as well as the German Habilitation, all in Computer Science, from RWTH Aachen University in Germany. His research interests include conceptual as well as application-oriented challenges concerning digitalization, business process modelling, data marketplaces, Large Language Models, and Generative AI.

Abstract:
Generative AI (GenAI) has shown transformative potential in a wide range of applications, from text generation to programming to image and video creation. Progress in recent years has been so rapid that we can expect to move beyond terrestrial domains soon. GenAI models can already enhance image resolution, fill in missing data, denoise data, and simulate realistic local as well as global scenarios, offering new opportunities in, for example, environmental monitoring, disaster response, and infrastructure planning. These capabilities are further enhanced by Agentic AI and advances in robot technology. It therefore comes as no surprise that these powerful capabilities will soon be extended to domains such as remote sensing and to airborne platforms such as satellites and Unmanned Aerial Vehicles (UAVs), or drones. This talk explores the opportunities and challenges of such an extension. Just as Big Data posed challenges to data processing 20 years ago, the volume and complexity of (geospatial) data require new approaches to (pre-)processing, timeliness, scalability, and more. The presentation outlines key research directions and challenges in domain adaptation, and provides pointers to current research.