Objective

The objective of this NSF project (NSF Grants 1704662 and 1704092) is to develop a novel Deep Learning (DL) approach to enable efficient design and operation of future wireless networks with big data. Specifically, we will propose DL models and algorithms for spatiotemporal analysis and prediction of key system parameters, which can provide accurate and useful input to existing resource allocation algorithms so that a wireless network can be operated more effectively. Moreover, we will develop a novel Deep Reinforcement Learning (DRL) based control framework that allows a wireless network to allocate its resources efficiently by jointly learning the system environment and making control decisions under the guidance of a powerful deep neural network.

To achieve the above objective, the project is organized into three cohesive thrusts:

  1. Deep Learning based Modeling and Prediction: We will develop novel deep models and training algorithms for spatiotemporal analysis of big wireless system data to provide accurate predictions of key system parameters (such as traffic load and resource usage); a minimal illustrative sketch appears after this list. We will also propose a framework to enable fast and parallel training of the complex deep models over a cluster of distributed GPUs.

  2. Deep Reinforcement Learning based Dynamic Resource Allocation: We aim to develop a novel DRL-based control framework for dynamic resource allocation with the objective of minimizing power consumption while meeting Signal-to-Interference-plus-Noise-Ratio (SINR) and other Quality-of-Service (QoS) requirements; a second illustrative sketch follows this list.

  3. Implementation and Performance Evaluation: This task will be carried out in two steps. First, we will train the proposed models, algorithms and framework over a GPU cluster and test their performance using real data collected from major wireless carriers. Second, we will implement the proposed DRL-based control framework on a Software Defined Radio (SDR) based testbed and conduct real experiments to validate and evaluate its performance.
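
To make Thrust 1 concrete, the following minimal sketch (in PyTorch, one of the tools used in the project) illustrates the general shape of such a spatiotemporal predictor: a shared CNN encodes the traffic map of each time step and an LSTM models the temporal evolution before a decoder produces the next map. The grid size, layer widths and synthetic input below are illustrative assumptions only, not the models actually developed in the project.

    # Minimal spatiotemporal prediction sketch (illustrative assumptions only).
    import torch
    import torch.nn as nn

    class SpatioTemporalPredictor(nn.Module):
        def __init__(self, grid_size=16, hidden=64):
            super().__init__()
            # Spatial encoder: a shared CNN applied to the traffic map of each time step.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),       # -> 16 * 4 * 4 features
            )
            # Temporal model over the sequence of per-step spatial embeddings.
            self.lstm = nn.LSTM(input_size=16 * 4 * 4, hidden_size=hidden, batch_first=True)
            # Decoder maps the last hidden state back to a full traffic map.
            self.decoder = nn.Linear(hidden, grid_size * grid_size)
            self.grid_size = grid_size

        def forward(self, x):
            # x: (batch, history, grid_size, grid_size) traffic volumes
            b, t, h, w = x.shape
            feats = self.encoder(x.reshape(b * t, 1, h, w)).reshape(b, t, -1)
            out, _ = self.lstm(feats)
            pred = self.decoder(out[:, -1])                  # predict the next time step
            return pred.reshape(b, self.grid_size, self.grid_size)

    if __name__ == "__main__":
        model = SpatioTemporalPredictor()
        history = torch.rand(2, 12, 16, 16)                  # synthetic traffic history
        print(model(history).shape)                          # torch.Size([2, 16, 16])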
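
Similarly, for Thrust 2, the short sketch below illustrates the kind of DRL control loop involved: a small Q-network selects a discrete transmit-power level, and the reward trades off power consumption against an SINR/QoS violation penalty. The toy single-link environment, candidate power levels, SINR target, reward weights and training schedule are placeholders for illustration and do not reflect the project's actual problem formulation or algorithms.

    # Minimal DRL resource-allocation sketch (toy environment, illustrative only).
    import random
    import torch
    import torch.nn as nn

    POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]    # candidate transmit powers (W), assumed
    SINR_TARGET = 1.5                      # assumed QoS threshold (linear scale)

    q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(POWER_LEVELS)))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    def step(state, power):
        """Toy environment: SINR = gain * power / noise; the channel gain then changes."""
        gain, noise = state
        sinr = gain * power / noise
        reward = -power - (5.0 if sinr < SINR_TARGET else 0.0)   # power cost + QoS penalty
        return reward, (random.uniform(0.5, 2.0), noise)

    state, gamma, eps = (1.0, 0.2), 0.9, 0.1
    for _ in range(500):                                     # one-step Q-learning updates
        q = q_net(torch.tensor(state))
        a = random.randrange(len(POWER_LEVELS)) if random.random() < eps else int(q.argmax())
        reward, next_state = step(state, POWER_LEVELS[a])
        with torch.no_grad():
            target = reward + gamma * q_net(torch.tensor(next_state)).max()
        loss = (q[a] - target) ** 2                          # TD error for the chosen action
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = next_state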

Key Personnel

Syracuse University & Northeastern University (Subawardee)
Prof. Jian Tang, Lead PI
Prof. Yanzhi Wang, Co-PI
Caiwen Ding, Ph.D. student
Ning Liu, Ph.D. student
Xiaolong Ma, Ph.D. student
Ao Ren, Ph.D. student
Kun Wu, Ph.D. student
Zhiyuan Xu, Ph.D. student
Chengxiang Yin, Ph.D. student
Kaiqi Zhang, Ph.D. student
Mingyang Li, Ph.D. student
Zaidao Mei, Ph.D. student

Arizona State University
Prof. Guoliang Xue, PI
Karan Jain, Master's student
Kangjian Ma, Ph.D. student
Ruozhou Yu, Ph.D. student

Publications

  1. Z. Xu, K. Wu, W. Zhang, J. Tang, Y. Wang and G. Xue,
    PnP-DRL: a plug-and-play deep reinforcement learning approach for experience-driven networking,
    IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on Machine Learning for Communications and Networks, In press.

  2. Z. Xu, D. Yang, C. Yin, J. Tang, Y. Wang and G. Xue,
    A co-scheduling framework for DNN models on mobile and edge devices with
    heterogeneous hardware, IEEE Transactions on Mobile Computing, In press.

  3. T. Li, Z. Xu, J. Tang, K. Wu and Y. Wang,
    EXTRA: an experience-driven control framework for distributed stream data processing
    with a variable number of threads, IEEE/ACM IWQoS'2021.

  4. T. Zhang, S. Ye, X. Feng, X. Ma, K. Zhang, Z. Li, J. Tang, S. Liu, X. Lin, Y. Liu, M. Fardad
    and Y. Wang, StructADMM: achieving ultra-high efficiency in structured pruning for DNNs,
    IEEE Transactions on Neural Networks and Learning Systems, In press.

  5. J. Tang, C. Yin, Y. Wang, M. C. Gursoy and G. Xue,
    ReCARL: resource allocation in cloud RANs with deep reinforcement learning,
    IEEE Transactions on Mobile Computing, In press.

  6. Z. Xu, D. Yang, J. Tang, Y. Tang, T. Yuan, Y. Wang and G. Xue,
    An actor-critic-based transfer learning framework for experience-driven networking,
    IEEE/ACM Transactions on Networking, In press.

  7. Y. Wang, Z. Zhan, L. Zhao, J. Tang, S. Wang, J. Li, B. Yuan, W. Wen and X. Lin,
    Universal approximation property and equivalence of stochastic computing-based neural networks and binary neural networks, AAAI’2019. [PDF]

  8. J. Xu, J. Tang, Z. Xu, C. Yin, K. Kwiat and C. Kamhoua,
    A deep recurrent neural network based predictive control framework for reliable distributed stream data processing, IEEE IPDPS’2019. [PDF]

  9. J. Xu, J. Tang, J. Meng, W. Zhang, Y. Wang, C. Liu and D. Yang,
    Experience-driven networking: a deep reinforcement learning based approach,
    IEEE INFOCOM'2018. [PDF]

  10. Y. Wang, C. Ding, G. Yuan, S. Liao, Z. Li, X. Ma, B. Yuan, X. Qian, J. Tang, Q. Qiu and X. Lin,
    Towards ultra-high performance and energy efficiency of deep learning systems: an algorithm-hardware co-optimization framework, AAAI'2018. [PDF]

  11. Z. Xu, J. Tang, C. Yin, Y. Wang and G. Xue,
    Experience-driven congestion control: when multi-path TCP meets deep reinforcement learning,
    IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on AI and Machine Learning for Networking and Communications, Vol. 37, No. 6, 2019, pp. 1325-1336. [PDF]

  12. X. Chen, Z. Han, H. Zhang, G. Xue, Y. Xiao, M. Bennis,
    Wireless resource scheduling in virtualized radio access networks using stochastic learning,
    IEEE Transactions on Mobile Computing, Vol. 17, 2018, pp. 961-974. [PDF]

  13. R. Yu, G. Xue, X. Zhang,
    QoS-aware and reliable traffic steering for service function chaining in mobile networks,
    IEEE Journal on Selected Areas in Communications, Vol. 35, 2018, pp. 2522-2531. [PDF]

  14. J. Wang, J. Tang, Z. Xu, Y. Wang, G. Xue, X. Zhang and D. Yang,
    Spatiotemporal modeling and prediction in cellular networks: a big data enabled deep learning approach, IEEE INFOCOM'2017, pp. 1323-1331. [PDF]

  15. Z. Xu, Y. Wang, J. Tang, J. Wang and M. C. Gursoy,
    A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs, IEEE ICC'2017. [PDF]

Educational Activities

  1. Graduate students were trained during this period. They learned cutting-edge technologies, including DL, DRL, 5G communications and distributed big data processing, as well as widely used software tools for networking R&D, including TensorFlow, TensorFlow Lite, PyTorch, PyTorch Mobile, MATLAB, the Gurobi optimizer and Linux kernel programming. More importantly, they learned how to conduct scientific research by getting involved in algorithm design, system development, simulation and experiments.

  2. The research results from this project have been integrated into a graduate course, Deep Learning Embedded Systems, which was offered at Northeastern University in Spring 2020.

  3. The research results from this project have been integrated into a graduate course, Statistical Machine Learning, which was offered at Arizona State University in Fall 2019.

  4. The research results from this project have been integrated into a graduate course, Advances in Deep Learning, which was offered in Spring 2018, Fall 2018, Spring 2019, Spring 2021 and Fall 2021.

Dissemination

  1. A related paper “EXTRA: An Experience-driven Control Framework for Distributed Stream Data Processing with a Variable Number of Threads” was presented at IEEE/ACM IWQoS’2021 (virtual conference) in June 2021.

  2. The Co-PI Yanzhi Wang gave a talk “From 7,000X model compression to 100X acceleration: achieving real-time execution of all DNNs on mobile devices” at the University of California, Santa Barbara, in November 2019.

  3. The Co-PI Yanzhi Wang gave a talk “From 7,000X model compression to 100X acceleration: achieving real-time execution of all DNNs on mobile devices” at the AI Research Seminar in the CS Department at Boston University in October 2019.

  4. A related paper “Universal approximation property and equivalence of stochastic computing-based neural networks and binary neural networks” was presented at AAAI’2019 in January 2019 in Honolulu, HI, USA.

  5. A related paper “A deep recurrent neural network based predictive control framework for reliable distributed stream data processing” was presented at IEEE IPDPS’2019 in May 2019 in Rio de Janeiro, Brazil.

  6. A related paper "Spatiotemporal modeling and prediction in cellular networks: a big data enabled deep learning approach" was presented at IEEE INFOCOM'2017 in Atlanta, GA, USA.

  7. A related paper "A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs" was presented at IEEE ICC'2017 in Paris, France.

  8. A related paper "Experience-driven networking: a deep reinforcement learning based approach" was presented at IEEE INFOCOM'2018 in Honolulu, Hawaii, USA.

  9. A related paper "Towards ultra-high performance and energy efficiency of deep learning systems: an algorithm-hardware co-optimization framework" was presented at AAAI'2018 in New Orleans, LA, USA.

  10. The Co-PI Yanzhi Wang gave a talk “Towards 1,000X model compression in deep neural networks” at the Hardware and Algorithms for Learning On-a-Chip (HALO) Workshop, co-held with ICCAD 2018, in November 2018.

  11. The Co-PI Yanzhi Wang gave a talk “Towards 1,000X model compression in deep neural networks” at a seminar at IBM Research Cambridge in January 2019.

  12. The Co-PI Yanzhi Wang gave a talk “Towards 1,000X model compression in deep neural networks” at Boston University, Rice University, Texas A&M University, the University of Texas at Austin and other institutions in January 2019.

  13. The PI Jian Tang gave a talk "Experience-driven Networking: A Deep Learning Approach" in December 2017 at Nanyang Technological University, Singapore.

  14. The Co-PI Yanzhi Wang gave a talk "Energy Efficient Deep Learning Systems and Applications" in November 2017 at Cornell University, Ithaca, NY, USA.

  15. The Co-PI Yanzhi Wang gave a talk "Energy Efficient Deep Learning Systems and Applications" in February 2018 at Rice University, Houston, TX, USA.

  16. The PI Guoliang Xue gave a keynote lecture "Payment Channel Networks for Blockchain-based Cryptocurrencies" at IFIP WWIC2018 (the 16th International Conference on Wired/Wireless Internet Communications) in June 2018 in Boston, MA, USA.