【AI Seminar】11/05 Trustworthy AI in a Smarter World: Addressing Awareness, Authenticity, and Security Challenges - Ming-Ching Chang, Associate Professor
Topic: Trustworthy AI in a Smarter World: Addressing Awareness, Authenticity, and Security Challenges
Speaker: Ming-Ching Chang, Associate Professor, Dept. of Computer Science, College of Engineering and Applied Sciences, University at Albany, State University of New York
Time: 2024/11/05 (Tue) 14:10-16:00
Venue: The Management Building, 11F, AI Lecture Hall
Join Online: https://gqr.sh/LrGY
About the Speaker:
Ming-Ching Chang is an Associate Professor with tenure (since Fall 2022) in the Department of Computer Science at the University at Albany, SUNY. He previously held positions in the Department of Electrical and Computer Engineering (2016-2018) and as an Adjunct Professor in Computer Science (2012-2016). From 2008 to 2016, he worked as a Computer Scientist at GE Global Research Center, and he was an Assistant Researcher at the Mechanical Industry Research Labs, ITRI in Taiwan from 1996 to 1998.
Dr. Chang earned his Ph.D. in Engineering Man/Machine Systems from Brown University in 2008, along with an M.S. in Computer Science and Information Engineering (1998) and a B.S. in Civil Engineering (1996) from National Taiwan University. His research focuses on video analytics, computer vision, image processing, and artificial intelligence, with over 70 published papers. His projects have received funding from DARPA, IARPA, NIJ, VA, GE Global Research, Kitware Inc., and the University at Albany. Dr. Chang is a senior member of IEEE.
Abstract:
Trustworthy AI research aims to create AI models that are efficient, robust, secure, fair, privacy-preserving, and accountable. As the adoption of Foundation Models and Generative AI grows, enabling the composition of articles and the generation of hyper-realistic images, the boundary between authenticity and deception is increasingly blurred in our rapidly evolving digital landscape. The demand for sophisticated tools and techniques to authenticate media content and discern the real from the fake has never been more urgent.
In this talk, I will explore recent breakthroughs in Trustworthy AI, Digital Media Forensics, and secure computation. First, I will introduce a novel approach to learning multi-manifold embeddings for Out-of-Distribution (OOD) detection, along with a method for uncovering hidden hallucination factors in large vision-language models through causal analysis. Additionally, I will cover a noisy-label learning technique designed to tackle long-tailed data distributions.
In the field of Digital Media Forensics, I will showcase novel advances in Image Manipulation Detection (IMD) using implicit neural representations under limited supervision. This includes the development of IMD datasets with object-aware, semantically meaningful annotations, leveraging stable diffusion to emulate real-world scenarios more effectively.
Finally, I will discuss key innovations in secure encrypted computation, particularly in accelerating Fully Homomorphic Encryption (FHE) for deep neural network inference using GPUs, as well as enhancing functional bootstrapping through quantization and network fine-tuning strategies.
Organizers: College of Intelligent Computing & Artificial Intelligence Research Center
※ No registration needed.