2025 Green Edge-AI Workshop
Are you following the rapid advances in Artificial Intelligence while also concerned about its energy consumption and environmental impact? As AI technology rapidly evolves, achieving high-performance, low-power "Green AI" has become a critical challenge faced jointly by industry and academia worldwide.
The IEEE Taipei Section will host the "2025 Green Edge-AI Workshop" on July 22, 2025. This workshop brings together top academic experts and industry leaders to jointly explore the latest trends, core technologies, and future challenges in Green Edge-AI.
We are honored to invite several prominent speakers who will delve into key Green Edge-AI topics from various perspectives. The workshop content covers diverse aspects including hardware design, system integration, model development, storage technologies, communication applications, and heterogeneous integration.
The workshop aims to provide a platform for industry and academia to exchange ideas and collaborate, exploring the possibilities of Green AI together. We sincerely invite everyone interested in this topic to participate and help build a more sustainable and intelligent future!
Date and Time
- July 22, 2025

Location
- International Conference Hall, CPT Building
- No. 1001, Daxue Rd., East Dist., Hsinchu, Taiwan 30010

Hosts
- IEEE Taipei Section

Registration

▌Contact Information:
Tel: (03) 5712121 ext. 31590 / 54484
E-mail: ie3taipeisection@gmail.com
Speakers
C.-C. Jay Kuo of the University of Southern California
Modern AI, Data Fitting, and Green Learning
Modern AI is built upon a data-driven approach, where AI problems are solved by deep neural networks (e.g., CNNs, ResNets, and Transformers). Do neural networks possess human-like intelligence? To answer this, I relate "modern AI" to "heavily supervised learning" (or weak AI) and "neural networks" to "data-fitting machines." This view provides deeper insight into the working principles of neural networks, so we can understand what they can and cannot do. They are fundamentally different from human brains. The next question is whether neural networks provide a unique data-fitting machinery for huge numbers of input-output data pairs. If not, does a better alternative exist? I have researched this topic since 2014, developed an alternative data-fitting methodology, and coined a name for this emerging tool: green learning (GL). It is "green" because it demands low power consumption in training and inference. GL has many attractive characteristics: smaller model sizes, lower computational complexity, mathematical transparency, ease of incremental learning, etc. GL uses signal processing and statistical tools (e.g., filter banks, linear algebra, and probability theory) and conducts both training and testing in a feedforward manner. Recent GL developments will be presented.
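For readers unfamiliar with the feedforward, backpropagation-free style of training the abstract alludes to, the minimal Python sketch below illustrates the general workflow: a one-pass, unsupervised feature transform followed by a lightweight supervised classifier. PCA is used only as a stand-in for the Saab-style filter-bank transforms of the green-learning literature; this is an illustrative proxy of the workflow, not Prof. Kuo's actual pipeline.

```python
# Minimal sketch of the feedforward "fit, don't backprop" idea behind green learning.
# PCA stands in for a Saab-style filter-bank transform; the classifier is deliberately tiny.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: unsupervised, one-pass feature extraction (no gradient descent).
pca = PCA(n_components=32).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Stage 2: a lightweight supervised decision module on top of the fixed features.
clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
print("test accuracy:", clf.score(Z_test, y_test))
```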
Biography:
Dr. C.-C. Jay Kuo received his Ph.D. from the Massachusetts Institute of Technology in 1987. He is now with the University of Southern California (USC) as the Ming Hsieh Chair Professor, a Distinguished Professor of Electrical and Computer Engineering and Computer Science, and the Director of the Media Communications Laboratory. His research interests are in visual computing and communication. He is a Fellow of AAAS, ACM, IEEE, NAI, and SPIE and an Academician of Academia Sinica. Dr. Kuo has received a few awards for his research contributions, including the 2010 Electronic Imaging Scientist of the Year Award, the 2010-11 Fulbright-Nokia Distinguished Chair in Information and Communications Technologies, the 2019 IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the 2019 IEEE Signal Processing Society Claude Shannon-Harry Nyquist Technical Achievement Award, the 72nd annual Technology and Engineering Emmy Award (2020), and the 2021 IEEE Circuits and Systems Society Charles A. Desoer Technical Achievement Award. Dr. Kuo was the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security (2012-2014) and the Journal of Visual Communication and Image Representation (1997-2011). He is currently the Editor-in-Chief for the APSIPA Trans. on Signal and Information Processing (2022-2025). He has guided 181 students to their Ph.D. degrees and supervised 31 postdoctoral research fellows.
Chun-Ta Huang of National Yang Ming Chiao Tung University
Hardware-Friendly Compression Algorithms for Large AI Models
The rapid growth of Artificial Intelligence, particularly in the realm of large-scale deep learning models, has brought about unprecedented capabilities but also significant challenges related to computational complexity, memory footprint, and energy consumption. Deploying these powerful yet resource-intensive models on edge devices or embedded systems, where power and computational resources are often limited, remains a major hurdle. This talk will delve into the latest advancements in hardware-friendly compression algorithms designed to efficiently reduce the size and computational requirements of large AI models. We will explore various techniques such as quantization, pruning, and low-rank approximation, with a specific focus on how these algorithms can be optimized to leverage the underlying hardware architectures for improved performance and energy efficiency. The presentation will discuss the trade-offs between model accuracy, compression ratio, and hardware implementation costs, highlighting practical strategies for enabling the pervasive deployment of advanced AI on a wide range of devices.
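As a rough illustration of the compression primitives named above, the NumPy sketch below applies 8-bit quantization, magnitude pruning, and a low-rank approximation to a random weight matrix. Real hardware-friendly schemes add per-channel scaling, structured sparsity patterns, and calibration data, none of which are shown here; this only demonstrates the underlying arithmetic.

```python
# Toy illustration of three compression primitives on a dense layer's weights.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)

# 8-bit symmetric post-training quantization: W ~= scale * q, with q in [-127, 127].
scale = np.abs(W).max() / 127.0
q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = scale * q.astype(np.float32)
print("quantization RMSE:", np.sqrt(np.mean((W - W_deq) ** 2)))

# 50% magnitude pruning: zero out the smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print("sparsity:", np.mean(W_pruned == 0.0))

# Rank-32 low-rank approximation via truncated SVD.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lr = (U[:, :32] * S[:32]) @ Vt[:32, :]
print("low-rank RMSE:", np.sqrt(np.mean((W - W_lr) ** 2)))
```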
Biography:
Prof. Chun-Ta Huang is a Professor in the Department of Electronics Engineering at National Yang Ming Chiao Tung University (NYCU). His research interests primarily focus on the intersection of artificial intelligence and hardware acceleration. He has extensive experience in developing efficient algorithms and architectures for deep learning, with a particular emphasis on optimizing AI models for resource-constrained environments. His work aims to bridge the gap between complex AI models and their practical deployment on various hardware platforms, contributing significantly to the field of energy-efficient AI and edge computing.
Hung-Han Shuai of National Yang Ming Chiao Tung University
Tiny Brains, Smart Gains: Knowledge Distillation for Edge AI
The proliferation of Artificial Intelligence applications at the network edge presents a unique set of challenges, primarily stemming from the computational, memory, and energy constraints inherent in edge devices. While large, complex AI models offer superior accuracy, their direct deployment on resource-limited hardware is often impractical. This talk will explore Knowledge Distillation (KD) as a powerful paradigm for overcoming these limitations, enabling "smart gains" from "tiny brains." We will delve into how KD techniques facilitate the transfer of knowledge from large, high-performing "teacher" models to smaller, more efficient "student" models, making them suitable for real-time inference on edge devices. The presentation will cover various KD strategies, including response-based, feature-based, and relation-based distillation, and discuss their effectiveness in compressing deep neural networks while retaining critical performance. We will also address practical considerations and emerging trends in applying KD to diverse Edge AI scenarios, ultimately showcasing its potential to democratize advanced AI capabilities by enabling efficient and intelligent processing directly at the source of data.
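To make the response-based variant concrete, the PyTorch sketch below shows the classic soft-target distillation loss: a temperature-scaled KL term between teacher and student logits plus an ordinary hard-label cross-entropy term. The temperature and weighting values are illustrative defaults, not settings from the talk, and the teacher/student models themselves are assumed to exist elsewhere.

```python
# Classic response-based knowledge distillation loss (soft targets + hard labels).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between temperature-softened distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: standard cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```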
Biography:
Prof. Hung-Han Shuai is a Professor in the Department of Electrical Engineering at National Yang Ming Chiao Tung University (NYCU). His research primarily focuses on the development of efficient and intelligent algorithms for Edge AI and distributed computing systems. He is particularly interested in optimizing deep learning models for resource-constrained environments, including areas such as knowledge distillation, model compression, and low-power AI inference. His work aims to enable robust and high-performance artificial intelligence applications to be deployed directly on edge devices, contributing to the advancement of ubiquitous AI.
Wei Lin of Phison Electronics Corp.
The Secret Weapon of Private AI: iDAPTIV+
As Artificial Intelligence continues to revolutionize industries, enterprises increasingly seek to deploy AI solutions that prioritize data privacy, security, and low-latency processing, often necessitating on-premise or edge-based implementations rather than relying solely on public cloud infrastructure. This talk will uncover iDAPTIV+, Phison's secret weapon designed to empower the realization of Private AI. We will delve into how iDAPTIV+ provides a robust and efficient foundation for deploying AI models securely within private networks and on edge devices. The presentation will explore the unique capabilities of iDAPTIV+ in optimizing data flow, accelerating AI computations at the source, and ensuring data integrity and compliance without compromising performance. We will discuss its architectural innovations and practical applications that enable organizations to harness the full potential of AI while maintaining complete control over their sensitive data, thereby offering a powerful solution for the growing demands of private and decentralized AI deployment.
Biography:
Dr. Wei Lin is the Chief Technology Officer (CTO) at Phison Electronics Corp., a global leader in NAND flash controllers and storage solutions. With extensive experience in semiconductor technology, data management, and artificial intelligence integration, Dr. Lin plays a pivotal role in shaping Phison's technological roadmap and innovation strategy. His expertise lies in developing cutting-edge solutions that bridge the gap between advanced AI capabilities and high-performance, secure storage infrastructures, addressing the evolving needs of various industries, particularly in the realm of private and edge AI deployments.
Shao-Hsuan Wu of National Yang Ming Chiao Tung University
Green Learning for Integrated Sensing and Communications with OAM Beamforming
The paradigm shift towards Integrated Sensing and Communications (ISAC) in future wireless networks (e.g., 6G) promises revolutionary capabilities by co-designing sensing and communication functionalities. However, achieving high performance in such complex systems often comes with a substantial energy cost. This talk will introduce the concept of Green Learning – an approach focused on developing energy-efficient machine learning algorithms and optimization strategies – specifically tailored for ISAC systems. We will delve into how Green Learning can be synergistically combined with Orbital Angular Momentum (OAM) Beamforming techniques to enhance both communication and sensing performance while significantly reducing power consumption. The presentation will cover the fundamental principles, design challenges, and potential benefits of integrating these advanced technologies to create sustainable and highly efficient ISAC architectures. We will discuss novel algorithms and methodologies that enable intelligent resource allocation and signal processing in OAM-enabled ISAC, paving the way for a greener and more powerful wireless future.
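As a small illustration of the OAM idea, the NumPy sketch below forms uniform-circular-array (UCA) beamforming weights for several OAM modes and checks that the mode vectors are mutually orthogonal, which is the property multiplexed sensing and communication streams can exploit. This is a toy example, not a description of the speaker's ISAC design.

```python
# UCA synthesis of OAM modes: element n of an N-element ring gets phase exp(j*l*2*pi*n/N)
# for mode (topological charge) l. Different modes yield orthogonal weight vectors.
import numpy as np

N = 16                          # number of antenna elements on the ring
modes = [-2, -1, 0, 1, 2]       # OAM mode indices
n = np.arange(N)

# One unit-norm beamforming weight vector per OAM mode.
W = np.stack([np.exp(1j * l * 2 * np.pi * n / N) / np.sqrt(N) for l in modes])

# Gram matrix: off-diagonal entries are ~0, confirming mutual orthogonality of modes.
gram = W @ W.conj().T
print(np.round(np.abs(gram), 3))
```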
Biography:
Prof. Shao-Hsuan Wu is a Professor in the Department of Communication Engineering at National Yang Ming Chiao Tung University (NYCU). His research expertise lies in the frontiers of next-generation wireless communication systems, with a strong focus on enhancing spectral and energy efficiency. His work encompasses Integrated Sensing and Communications (ISAC), advanced beamforming techniques such as Orbital Angular Momentum (OAM) beamforming, and the application of Green Learning approaches to develop sustainable and high-performance communication solutions. Professor Wu's contributions aim to shape the future of intelligent and eco-friendly wireless networks.
Chien-Nan Liu of National Yang Ming Chiao Tung University
Green Learning for ML-Based Power/Noise Prediction in IC Design
The ever-increasing complexity and shrinking geometries in Integrated Circuit (IC) design necessitate highly accurate and efficient methodologies for power consumption and noise prediction. Traditional approaches often struggle with the vast design space and intricate interdependencies, leading to prolonged design cycles and sub-optimal energy efficiency. This talk will introduce the concept of Green Learning applied to ML-based power and noise prediction in IC design. We will explore how energy-efficient machine learning algorithms and design strategies can be leveraged to create predictive models that are not only highly accurate but also computationally light. The presentation will delve into novel techniques that enable rapid and precise estimation of power consumption and noise characteristics early in the design flow, thereby facilitating more sustainable and optimized IC architectures. We will discuss the benefits of this approach in reducing design iterations, enhancing chip reliability, and ultimately contributing to the development of more energy-efficient and environmentally friendly electronic systems.
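To illustrate the "accurate yet computationally light predictor" idea in concrete terms, the sketch below fits a small gradient-boosted regressor that maps simple per-block features to a power estimate. The features (toggle rate, fanout, cell count) and the synthetic data are hypothetical stand-ins; this is not the methodology presented in the talk.

```python
# Hypothetical example: a lightweight learned power predictor over synthetic block features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
toggle_rate = rng.uniform(0.0, 1.0, n)            # switching activity per block
fanout      = rng.integers(1, 16, n).astype(float)
cell_count  = rng.integers(100, 10_000, n).astype(float)

# Synthetic "ground truth": dynamic power grows with switching activity and load.
power = 0.5 * toggle_rate * cell_count + 2.0 * fanout + rng.normal(0, 20, n)

X = np.column_stack([toggle_rate, fanout, cell_count])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on held-out blocks:", round(model.score(X_te, y_te), 3))
```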
Biography:
Prof. Chien-Nan Liu is a Professor in the Department of Electronics Engineering at National Yang Ming Chiao Tung University (NYCU). His research focuses on the application of advanced machine learning techniques to address critical challenges in integrated circuit (IC) design. His expertise spans areas such as power and noise analysis, design automation, and the development of energy-efficient methodologies for very large-scale integration (VLSI) systems. Professor Liu's work contributes significantly to enhancing the predictability, reliability, and "green" aspects of modern IC design processes.
Agenda
▌ Speakers

- Dr. C.-C. Jay Kuo
  Academician, Academia Sinica & Fellow, National Academy of Inventors, USA
  Lecture Title: Modern AI, Data Fitting, and Green Learning

- Prof. Chun-Ta Huang
  Professor, Department of Electronics Engineering, National Yang Ming Chiao Tung University (NYCU)
  Lecture Title: Hardware-Friendly Compression Algorithms for Large AI Models

- Prof. Hung-Han Shuai
  Professor, Department of Electrical Engineering, National Yang Ming Chiao Tung University (NYCU)
  Lecture Title: Tiny Brains, Smart Gains: Knowledge Distillation for Edge AI

- Dr. Wei Lin
  CTO, Phison Electronics Corp.
  Lecture Title: The Secret Weapon of Private AI: iDAPTIV+

- Prof. Shao-Hsuan Wu
  Professor, Department of Communication Engineering, National Yang Ming Chiao Tung University (NYCU)
  Lecture Title: Green Learning for Integrated Sensing and Communications with OAM Beamforming

- Prof. Chien-Nan Liu
  Professor, Department of Electronics Engineering, National Yang Ming Chiao Tung University (NYCU)
  Lecture Title: Green Learning for ML-Based Power/Noise Prediction in IC Design
Media
- 1140722-議程大圖_0 (full agenda image, 598.03 KiB)