[Forwarded] 10/15 From Word Embeddings to Large Language Models: The Evolution and Prospects of Natural Language Processing - Talk by Dr. Ying-Jia Lin

Topic: From Word Embeddings to Large Language Models: The Evolution and Prospects of Natural Language Processing
Speaker: Dr. Ying-Jia Lin, Ph.D. in Computer Science and Information Engineering, National Cheng Kung University
Time: 2024/10/15 (Tue) 14:10-16:00
Venue: Management Building, 11F, AI Lecture Hall
Live stream: https://gqr.sh/NU8B

About the Speaker:

Dr. Ying-Jia Lin is a postdoctoral researcher at National Tsing Hua University. He received his PhD from the Department of Computer Science and Information Engineering at National Cheng Kung University in 2024. Prior to that, he obtained his MS from the Institute of Biomedical Informatics at National Yang-Ming University in 2019 and his BS in Biomedical Sciences from Chang Gung University in 2017. His current research focuses on text summarization, model compression, and BioNLP. Ying-Jia Lin has published in top AI/NLP conferences, such as AAAI, EMNLP, and AACL. He is an honorary member of the Phi Tau Phi Society, and he won two Best Paper Awards at TAAI in 2022 and 2019.

Abstract:

This talk explores the evolution of Natural Language Processing (NLP), from the foundational concept of word embeddings to the emergence of large language models such as GPT. The first part reviews the history of NLP, highlighting the key developments that have brought the field to its current state. The second part critically examines whether GPT has truly solved natural language generation, using text summarization as a case study. We will then discuss architectural issues inherent in GPT models, such as the memory demands of the Key-Value (KV) cache, and examine their knowledge limitations, particularly whether Retrieval-Augmented Generation (RAG) should be adopted when applying GPT to medical text reports. The talk aims to provide insights into the advancements and remaining challenges in NLP, along with perspectives on future directions and prospects.

Organizers: College of Intelligent Computing & Artificial Intelligence Research Center

※ No registration needed.

Announcing unit: College of Intelligent Computing
Contact person: Elaine Lin (林奕妤)
Extension: 409-2501
