IEEE International Conference on Computer Communications
20–23 May 2024 // Vancouver, Canada


IEILM 2024: Workshop on Integrating Edge Intelligence and Large Model in Next Generation Networks

20 May, 2024 ● 08:30 – 12:15 ● Room: Georgia B




Edge intelligence and large models in next generation networks are closely tied to the increasing convergence of networking, artificial intelligence, and cloud-edge technologies. As cloud-edge computing gains momentum, the demand for intelligent, context-aware, and efficient networking solutions is rising. Integrating edge intelligence with large models enables networks to become more adaptive, self-optimizing, and responsive to user and application needs.

The Workshop on "Integrating Edge Intelligence and Large Models in Next Generation Networks" provides a forum that brings together engineers and researchers from industry and academia to discuss up-to-date developments in integrating edge intelligence and large models in next generation networks.



General Chair:
Victor C. M. Leung (The University of British Columbia, Vancouver, Canada)


TPC Co-Chairs:

F. Richard Yu (Carleton University, ON, Canada)

Dusit Niyato (Nanyang Technological University, Singapore)

Xiaofei Wang (Tianjin University, Tianjin, China)

Tarik Taleb (Ruhr University Bochum, Bochum, Germany)


Local Chair:

Zhenchao Ma (The University of British Columbia, Vancouver, Canada)


Publicity/Web Chair:

Qiu Chao (Tianjin University, Tianjin, China)

Xiuhua Li (Chongqing University, Chongqing, China)


Operation Chair:

Chenyang Wang (Shenzhen University, Shenzhen, Guangdong, China)


08:30 – 08:45

Opening Session

Chair: Victor C. M. Leung


08:45 – 09:30

Keynote Session

Baochun Li (University of Toronto, Canada)

Title: Is It Feasible to Fine-Tune Large Language Models with Private Data?


With the recent surge of research interest in Large Language Models (LLMs), a natural question that arises is how pre-trained LLMs can be fine-tuned to cater to the specific needs of enterprises and individual users, while preserving the privacy of the data used in the fine-tuning process. On the one hand, sending private data to one datacenter for fine-tuning is, without a doubt, unacceptable from a privacy perspective. On the other hand, conventional federated learning requires each client to perform local training, which is not feasible for LLMs with respect to both computation costs and communication overhead, since they involve billions of model parameters.

In this talk, we present some of our recent advances towards addressing the challenge of fine- tuning large language models with private data. We first present Titanic, a new distributed training paradigm that allows LLMs to be fine-tuned in a privacy-preserving fashion directly on the client devices where private data is produced, while operating within the resource constraints on computation and communication bandwidth. The key idea of Titanic is to partition an LLM across multiple client devices, so that it can be fine-tuned with no or minimal losses in training performance. In designing Titanic, we focused on its feasibility in real-world systems, and implemented a model-agnostic partitioning mechanism that is fully automated. In closing, we briefly present our recent work on the use of multiple cloud platforms to perform distributed machine learning across clouds. This preserves the privacy of data as we ship training workloads to where data resides. We envision a high-speed overlay network atop datacenters and clouds, capable of relaying data across multiple paths while reacting nimbly against network changes with optimized policies.
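To make the partitioning idea concrete, the sketch below shows one simple way an LLM's layers could be split into contiguous shards across client devices, so each client holds and fine-tunes only its own shard while private data stays local. This is only an illustrative sketch of the general technique; the `partition_layers` helper and its even-split policy are assumptions for exposition, not Titanic's actual (automated, model-agnostic) partitioning mechanism.

```python
# Illustrative sketch: split a model's layer indices into contiguous
# shards, one shard per client device. Each client would then run its
# shard locally, exchanging only activations/gradients at shard
# boundaries rather than raw private data.

def partition_layers(num_layers: int, num_clients: int) -> list[list[int]]:
    """Assign contiguous layer-index ranges to clients as evenly as possible."""
    base, extra = divmod(num_layers, num_clients)
    shards, start = [], 0
    for c in range(num_clients):
        size = base + (1 if c < extra else 0)  # earlier clients absorb the remainder
        shards.append(list(range(start, start + size)))
        start += size
    return shards

# Example: a 10-layer model split across 3 clients.
print(partition_layers(10, 3))  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

In a real system, shard sizes would be weighted by each device's compute and memory budget rather than split evenly; the even split here only illustrates the structure of the partition.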



Baochun Li received his B.Engr. degree from the Department of Computer Science and Technology, Tsinghua University, China, in 1995, and his M.S. and Ph.D. degrees from the Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, in 1997 and 2000, respectively. Since 2000, he has been with the Department of Electrical and Computer Engineering at the University of Toronto, where he is currently a Professor. He has held the Bell Canada Endowed Chair in Computer Engineering since August 2005. His current research interests include cloud computing, security and privacy, distributed machine learning, federated learning, and computer networking.

Dr. Li has co-authored more than 460 research papers, with a total of over 25,000 citations, an H-index of 88 and an i10-index of 340, according to Google Scholar Citations. He was the recipient of the IEEE Communications Society Leonard G. Abraham Award in the Field of Communications Systems in 2000, the Multimedia Communications Best Paper Award from the IEEE Communications Society in 2009, the University of Toronto McLean Award in 2009, the Best Paper Award from IEEE INFOCOM in 2023, and the IEEE INFOCOM Achievement Award in 2024. He is a Fellow of the Canadian Academy of Engineering, a Fellow of the Engineering Institute of Canada, and a Fellow of IEEE.



Session 1: Collaborative learning and large models in edge computing

Chair: Wei Cai


Efficient Adapting for Vision-language Foundation Model in Edge Computing based on Personalized and Multi-Granularity Federated Learning

Fei Gao, Yunfeng Zhao, Chao Qiu and Xiaofei Wang (Tianjin University, China)


GenG: An LLM-based Generic Time Series Data Generation Approach for Edge Intelligence via Cross-domain Collaboration

Xiaomao Zhou and Qingmin Jia (Purple Mountain Laboratories, China); Yujiao Hu (Northwestern Polytechnical University, China); Renchao Xie and Tao Huang (Beijing University of Posts and Telecommunications, China); F. Richard Yu (Carleton University, Canada)


FedBF16-Dynamic: Communication-Efficient Federated Learning with Adaptive Transmission

Fan-Hsun Tseng and Yu-Hsiang Huang (National Cheng Kung University, Taiwan)


Adaptive Split Learning over Energy-Constrained Wireless Edge Networks

Zuguang Li (Harbin Institute of Technology, Shenzhen & Peng Cheng Laboratory, China); Wen Wu (Peng Cheng Laboratory, China); Shaohua Wu (Harbin Institute of Technology, China); Wei Wang (Nanjing University of Aeronautics and Astronautics, China)



Coffee break



Session 2: Large model-driven network and resource management in edge scenarios

Chair: Shaoyuan Huang


Edge Intelligence and Ad Hoc Network-Empowered Resource Allocation for CBTC System

Sixing Ma, Meng Li, Pengbo Si, Ruizhe Yang and Zhuwei Wang (Beijing University of Technology, China)


Online AI-Generated Content Request Scheduling with Deep Reinforcement Learning

Chenglong Feng, Ying Zheng and Yuedong Xu (Fudan University, China)


Simulating LLM training in CXL-based Heterogeneous Computing Cluster

Yinan Tang (Inspur Electronic Information Industry Co., Ltd, China); Tongtong Yuan (Beijing University of Technology, China); Fang Cao, Li Wang, Zhenhua Guo, Yaqian Zhao and Rengang Li (Inspur Electronic Information Industry Co., Ltd, China)


Network Traffic Prediction Using PSO-LightGBM-TM

Feng Li (Nanyang Technological University, Singapore); Wei Nie (Zhejiang Gongshang University, China); Kwok-Yan Lam and Bowen Shen (Nanyang Technological University, Singapore); Xiuhua Li (Chongqing University, China)




Session 3: Economic-level solutions in next generation networks integrating large models

Chair: Fan-Hsun Tseng


Maximizing the Social Welfare of Decentralized Knowledge Inference through Evolutionary Game

Yuanfang Chi (The University of British Columbia, Canada); Qiyue Zhang (The Chinese University of Hong Kong, Shenzhen, Guangdong, China); Jiaxiang Sun and Wei Cai (The Chinese University of Hong Kong, Shenzhen, China); Z. Jane Wang (University of British Columbia, Canada); Victor C.M. Leung (Shenzhen University, China & The University of British Columbia, Canada)


Stackelberg Game-based and Broker-assisted Computation Offloading in MEC Networks

Deng Meng, Jianmeng Guo and Liang Zhao (China Three Gorges University, China); Huan Zhou (Northwestern Polytechnical University, China); Shouzhi Xu (China Three Gorges University, China)

