National Tsing Hua University

Ambient Intelligence for Immersive Networked Systems (AIINS) Lab

Cheng-Hsin Hsu
https://aiins.cs.nthu.edu.tw

Research Field

Smart Computing (Information)

Introduction

Cheng-Hsin Hsu is a Professor in the Department of Computer Science at National Tsing Hua University (NTHU). What distinguishes him from many other academics is his extensive industrial R&D experience: prior to joining NTHU, he was a Senior Research Scientist at Deutsche Telekom R&D Lab in Silicon Valley (USA), following tenures at Motorola and Lucent. This background allows him to bridge the gap between theoretical algorithms and real-world system deployment, ensuring that research conducted in his group has high practical value and industrial relevance. His work is highly influential, as evidenced by over 5,800 citations, multiple Best Paper Awards (e.g., IEEE RTAS, IEEE CloudCom, IEEE SMARTCOMP, ACM EMS, and ACM MMSys), and his service as an Associate Editor for ACM TOMM.

Dr. Hsu’s research focuses on end-to-end pipelines for (i) immersive VR and 360° video streaming, (ii) dynamic point cloud and 3D Gaussian Splatting (3DGS) compression, and (iii) AI-assisted edge computing. He is deeply committed to talent cultivation, leveraging his industry insights to guide students toward impactful, original research. His students frequently win international recognition, including the Qualcomm Innovation Fellowship, MSRA Fellowship finalist selections, and the Novatek Scholarship. Successful interns will work on cutting-edge problems in volumetric media and drone analytics, gaining the rigorous training needed for top-tier industrial or academic careers.

Led by Cheng-Hsin Hsu at NTHU, the Ambient Intelligence for Immersive Networked Systems (AIINS) Lab is a dynamic team of systems-oriented innovators dedicated to bridging the gap between theory and real-world deployment. Leveraging Cheng-Hsin’s extensive industrial R&D experience (Deutsche Telekom, Motorola, and Lucent), we build next-generation prototypes for immersive media and smart environments. Our active research spans 3D Gaussian Splatting (3DGS), dynamic point clouds, Cloud XR over 5G/6G, autonomous drone swarms, and Digital Twins. We don't just analyze theories; we build scalable systems that solve real-world problems.

The AIINS Lab operates as a global hub for multimedia research. We maintain long-term strategic partnerships with world-class groups at UCI, Rutgers, NUS, AAU, Aalto, Northeastern, and UiO. Our students consistently achieve top-tier recognition, highlighted by a recent "winning streak" of Best Paper Awards (ACM MMSys 2025, EMS 2024, MADiMa 2023) and the prestigious 2026 Qualcomm Innovation Fellowship. With members having presented their work in over 60 cities across 20 countries, AIINS offers a vibrant, international environment for students passionate about pushing the boundaries of AI and Multimedia Networking.


Research Topics

The AIINS Lab is actively pushing the boundaries of multimedia networking and AI. The following list outlines our current primary research directions where we are building next-generation prototypes. However, we value creativity above all else—interns are not limited to these specific topics and are warmly welcome to propose novel ideas or interdisciplinary projects that align with our systems-oriented vision.

  • Dynamic 3D Gaussian Splatting (4DGS) Streaming: Building on our prior success in streaming static high-fidelity 3D scenes, we are now extending our framework to handle time-varying, dynamic content. Interns will develop AI-driven compression algorithms to optimize the transmission of moving 3D objects and build WebXR-based players for universal access.
  • Real-Time Drone Swarm Coordination: We have previously developed optimization algorithms for offline drone trajectory planning. The next phase focuses on online, real-time path planning. Interns will design "Next-Best-View" algorithms that enable swarms of drones to collaboratively explore and reconstruct large-scale environments in real time (see the first sketch after this list).
  • Generative AI for Network Digital Twins: Following our development of a software-defined controller for synchronizing IoT devices, we are now scaling up to city-level infrastructure. Research will focus on integrating Generative AI to predict network failures and automate self-healing processes for massive-scale Digital Twins.
  • Predictive Foveated Rendering for 6G Cloud XR: Our team has successfully implemented gaze-adaptive rendering to reduce VR bandwidth consumption. To further mitigate latency in wireless networks, interns will apply deep learning models (RNNs/Transformers) to predict user eye movement, enabling lag-free volumetric video streaming over next-generation 5G/6G networks (see the second sketch after this list).
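
To give a concrete flavor of the drone-coordination topic, below is a minimal, illustrative sketch of greedy next-best-view selection over a voxel grid. Everything here is a placeholder assumption (random candidate poses, a distance-threshold visibility model); it is not the lab's actual algorithm, which would ray-cast against an occupancy map and account for occlusions and flight dynamics.

```python
# Illustrative greedy next-best-view (NBV) selection over a voxel grid.
# Simplified sketch only: candidate views, the voxel grid, and the
# visibility model are hypothetical placeholders, not the lab's method.
import numpy as np

def visible_voxels(view, centers, radius=5.0):
    """Return indices of voxel centers within `radius` of a candidate view.

    A real system would ray-cast against occupancy and occlusions; a
    simple distance threshold stands in for visibility here.
    """
    dists = np.linalg.norm(centers - view, axis=1)
    return set(np.nonzero(dists < radius)[0].tolist())

def greedy_nbv(candidate_views, voxel_centers, budget=3):
    """Pick `budget` views, each maximizing newly covered voxels."""
    covered, plan = set(), []
    remaining = list(range(len(candidate_views)))
    for _ in range(budget):
        gains = [(len(visible_voxels(candidate_views[i], voxel_centers) - covered), i)
                 for i in remaining]
        best_gain, best_i = max(gains)
        if best_gain == 0:
            break  # no remaining candidate adds coverage
        plan.append(best_i)
        covered |= visible_voxels(candidate_views[best_i], voxel_centers)
        remaining.remove(best_i)
    return plan, covered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    voxel_centers = rng.uniform(0, 20, size=(500, 3))   # proxy for an unknown scene
    candidate_views = rng.uniform(0, 20, size=(30, 3))  # reachable drone poses
    plan, covered = greedy_nbv(candidate_views, voxel_centers)
    print(f"selected views {plan}, covering {len(covered)}/500 voxels")
```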
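
Similarly, for the foveated-rendering topic, the following toy sketch shows one way a recurrent model could predict the next gaze point from a short history window. The architecture, window length, and random inputs are assumptions for illustration only; the lab's trained models, datasets, and streaming integration may differ.

```python
# Toy gaze-trajectory predictor for foveated rendering, sketched with an
# LSTM in PyTorch. Architecture and data are hypothetical placeholders.
import torch
import torch.nn as nn

class GazePredictor(nn.Module):
    """Predict the next normalized (x, y) gaze point from the last k samples."""

    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, gaze_window):           # (batch, k, 2), coordinates in [0, 1]
        _, (h_n, _) = self.lstm(gaze_window)
        return self.head(h_n[-1])             # (batch, 2): predicted next gaze point

if __name__ == "__main__":
    model = GazePredictor()
    window = torch.rand(8, 30, 2)             # 8 users, 30 past gaze samples each
    pred = model(window)
    print(pred.shape)                         # torch.Size([8, 2])
    # In a streaming loop, the prediction would pick the foveation center for
    # the next frame, masking the round-trip latency to the cloud renderer.
```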

Honors
  • ACM SIGMM Test of Time Paper Award, ACM Multimedia Systems Conference (MMSys'25), 2025
  • Best Paper Award, ACM Multimedia Systems Conference (MMSys'25), 2025
  • Honorable Mention Associate Editor, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2025
  • Best Paper Award, ACM SIGCOMM Workshop on Emerging Multimedia Systems (EMS'24), 2024
  • Best Associate Editor, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2024
  • Visiting Faculty Award, J. Yang & Family Foundation, University of California, Irvine (UCI), 2022-2024
  • Best Paper Award, ACM International Workshop on Multimedia Assisted Dietary Management (MADiMa'23), 2023
  • Honorable Mention Associate Editor, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2021
  • AI 2000 Most Influential Scholar Honorable Mention in Multimedia, AMiner, 2021
  • Best Paper Award, IEEE International Conference on Smart Computing (SMARTCOMP'20), 2020
  • Outstanding Reviewer/Best Reviewer Awards: ACM Multimedia'20, IEEE ICME'20, IEEE NOMS'20
  • Outstanding Scholar Award, Foundations for the Advancement of Outstanding Scholarship (FAOS), 2018-2023
  • Best Paper Award, IEEE International Conference on Cloud Computing Technology and Science (CloudCom'17), 2017
  • Best Paper Award, Asia-Pacific Network Operations and Management Symposium (APNOMS'16), 2016
  • Best Associate Editor, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2016
  • IEEE Senior Member, IEEE, since 2016
  • New Faculty Research Award, CSEE College, National Tsing Hua University, 2014
  • Excellent Junior Research Investigator Grant, National Science Council (NSC), 2013-2016
  • TAOS Best Paper Award, IEEE Global Communications Conference (GLOBECOM'12), 2012
  • Best Paper Award, IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS'12), 2012
  • Best Demo Award, ACM International Conference on Multimedia (Multimedia'08), 2008

Educational Background
  • Ph.D. in Computing Science (2009), Simon Fraser University, Canada
    • Advisor: Prof. Mohamed Hefeeda
    • Thesis: Efficient Mobile Multimedia Streaming
  • M.Eng. in Electrical and Computer Engineering (2003), University of Maryland, College Park, USA
  • M.S. in Computer Science and Information Engineering (2000), National Chung Cheng University, Taiwan
    • Advisor: Prof. Daniel J. Buehrer
    • Thesis: Making Java Applications Run Remotely
  • B.S. in Mathematics (1996), National Chung Cheng University, Taiwan

Job Description

  • Location: This is an on-site position based in the heart of the Hsinchu Science Park. We are conveniently located approximately 45 minutes from Taoyuan International Airport (TPE) and one hour from Taipei.
  • Global Collaboration: We maintain a strong network of international partners. High-performing candidates will have the opportunity to collaborate with experts from leading institutions in Singapore, the USA, and Europe.

Preferred Intern Educational Level

Currently pursuing a PhD or a Master’s degree with a strong interest and background in research.

Skill sets or Qualities

  • Proficiency in C++ and Python (system development, data analysis, and visualization).
  • Solid understanding of Computer Networking and/or 3D Computer Vision/Graphics.
  • Proven ability to formulate research questions, solve complex problems, and communicate findings through academic writing.

Job Description

  • Key Research Areas:
    • Video Understanding: Designing efficient pipelines for large-scale video analysis and processing.
    • Semantic Compression: Developing content-aware compression to optimize storage costs for video data.
    • Retrieval and Inference: Enabling accurate, low-latency retrieval and reasoning for visual LLM-driven assistants (see the sketch after this list).
  • Your Goal: You will identify and solve system-level problems to optimize the performance of visual LLM-driven assistants and/or prototype an innovative end-to-end system.
  • Location: This is an on-site position based in the heart of the Hsinchu Science Park. We are conveniently located approximately 45 minutes from Taoyuan International Airport (TPE) and one hour from Taipei.
  • Global Collaboration: We maintain a strong network of international partners. High-performing candidates will have the opportunity to collaborate with experts from leading institutions in Singapore, the USA, and Europe.
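
As a flavor of the retrieval-and-inference area, here is a minimal sketch of embedding-based clip retrieval ranked by cosine similarity. The random embeddings and brute-force ranking are stand-ins for illustration only; a production pipeline would use a vision-language encoder and an approximate nearest-neighbor index rather than this toy setup.

```python
# Minimal sketch of embedding-based retrieval for a video assistant:
# rank stored clip embeddings against a query embedding by cosine
# similarity. Embeddings here are random placeholders.
import numpy as np

def top_k(query_emb, clip_embs, k=5):
    """Return indices and scores of the k clips most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = clip_embs / np.linalg.norm(clip_embs, axis=1, keepdims=True)
    scores = c @ q                                 # cosine similarity per clip
    return np.argsort(scores)[::-1][:k], np.sort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clip_embs = rng.normal(size=(10_000, 512))     # pre-extracted clip embeddings
    query_emb = rng.normal(size=512)               # embedded user question
    idx, scores = top_k(query_emb, clip_embs)
    print(idx, scores.round(3))
```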

Preferred Intern Educational Level

  • Currently pursuing a PhD or a Master’s degree with a strong interest and background in research.

Skill sets or Qualities

  • Proficiency in C++ and Python (system development, data analysis, and visualization).
  • Familiarity with machine learning pipelines, LLM inference, and database management systems.
  • Proven ability to formulate research questions, solve complex problems, and communicate findings through academic writing.