Topic: Trustworthy AI in a Smarter World: Addressing Awareness, Authenticity, and Security Challenges
Speaker: Ming-Ching Chang, Associate Professor, Dept. of Computer Science,
College of Engineering and Applied Sciences, University at Albany, State University of New York
Time: 2024/11/05 (Tue) 14:10-16:00
Venue: The Management Building, 11F, AI Lecture Hall
Join Online: https://gqr.sh/LrGY
About the Speaker:
Ming-Ching Chang is an Associate Professor with tenure (since Fall 2022) in the Department of Computer Science at the University at Albany, SUNY. He previously held positions in the Department of Electrical and Computer Engineering (2016-2018) and as an Adjunct Professor in Computer Science (2012-2016). From 2008 to 2016, he worked as a Computer Scientist at GE Global Research Center, and he was an Assistant Researcher at the Mechanical Industry Research Labs, ITRI in Taiwan from 1996 to 1998.
Dr. Chang earned his Ph.D. in Engineering (Man/Machine Systems) from Brown University in 2008, along with an M.S. in Computer Science and Information Engineering (1998) and a B.S. in Civil Engineering (1996) from National Taiwan University. His research focuses on video analytics, computer vision, image processing, and artificial intelligence, with over 70 published papers. His projects have received funding from DARPA, IARPA, NIJ, VA, GE Global Research, Kitware Inc., and the University at Albany. Dr. Chang is a senior member of IEEE.
Abstract:
Trustworthy AI research aims to create AI models that are efficient, robust, secure, fair, privacy-preserving, and accountable. As the adoption of Foundation Models and Generative AI grows, enabling the composition of articles and the generation of hyper-realistic images, the boundary between authenticity and deception is increasingly blurred in our rapidly evolving digital landscape. The demand for sophisticated tools and techniques to authenticate media content and discern the real from the fake has never been more urgent.
In this talk, I will explore recent breakthroughs in Trustworthy AI, Digital Media Forensics, and secure computation. First, I will introduce a novel approach to learning multi-manifold embeddings for Out-of-Distribution (OOD) detection, along with a method for uncovering hidden hallucination factors in large vision-language models through causal analysis. Additionally, I will cover a noisy-label learning technique designed to tackle long-tailed data distributions.
In the field of Digital Media Forensics, I will showcase novel advancements in Image Manipulation Detection (IMD) using implicit neural representations under limited supervision. This includes the development of IMD datasets featuring object-awareness and semantically significant annotations, leveraging stable diffusion to emulate real-world scenarios more effectively.
Finally, I will discuss key innovations in secure encrypted computation, particularly in accelerating Fully Homomorphic Encryption (FHE) for deep neural network inference using GPUs, as well as enhancing functional bootstrapping through quantization and network fine-tuning strategies.
Organizers: College of Intelligent Computing& Artificial Intelligence Research Center
Topic: From Word Embeddings to Large Language Models: Evolution and Prospects
Speaker: Ying-Jia Lin, Ph.D. in Computer Science and Information Engineering from National Cheng Kung University
Time: 2024/10/15 (Tue) 14:10-16:00
Venue: The Management Building, 11F, AI Lecture Hall
Join Online: https://gqr.sh/NU8B
About the Speaker:
Dr. Ying-Jia Lin is a postdoctoral researcher at National Tsing Hua University. He received his PhD from the Department of Computer Science and Information Engineering at National Cheng Kung University in 2024. Prior to that, he obtained his MS from the Institute of Biomedical Informatics at National Yang-Ming University in 2019 and his BS in Biomedical Sciences from Chang Gung University in 2017. His current research focuses on text summarization, model compression, and BioNLP. Ying-Jia Lin has published in top AI/NLP conferences, such as AAAI, EMNLP, and AACL. He is an honorary member of the Phi Tau Phi Society, and he won two Best Paper Awards at TAAI, in 2022 and 2019.
Abstract:
This presentation explores the evolution of Natural Language Processing (NLP) from the foundational concept of word embeddings to the emergence of large-scale language models like GPT. In the first part, we will journey through the history of NLP, highlighting key developments that have led to the current state of the field. The second part critically examines whether GPT has really solved the challenges of Natural Language Generation, using text summarization as a case study. We will discuss architectural issues inherent in GPT models, such as those related to the Key-Value (KV) cache, and examine knowledge limitations, particularly in the application of GPT to medical text reports. The role of Retrieval-Augmented Generation (RAG) in addressing these challenges will also be explored. This talk aims to provide insights into the advancements and remaining hurdles in NLP, offering perspectives on future directions and prospects.
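As background for the KV-cache discussion, here is a minimal sketch (not from the talk) of why autoregressive GPT-style decoding keeps a KV cache: at each step the new token attends over all previous keys and values, so caching them avoids recomputation, but the cache grows linearly with sequence length. This toy uses NumPy with a single attention head and identity key/value projections for brevity; all names are illustrative.

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(5, d))        # hypothetical per-token hidden states

# Incremental decoding: append one key/value pair per step and reuse the rest.
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
outputs = []
for x in tokens:
    K_cache = np.vstack([K_cache, x[None]])  # identity projections for brevity
    V_cache = np.vstack([V_cache, x[None]])
    outputs.append(attention(x, K_cache, V_cache))

# The cache now holds one key and one value row per generated token:
# its memory footprint scales with sequence length, which is the
# architectural pressure the talk refers to.
```

The cached result at the final step matches a full recomputation over all tokens; the trade-off is that the cache itself becomes the dominant memory cost for long sequences.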
Organizers: College of Intelligent Computing & Artificial Intelligence Research Center
※ No registration needed.