Zheyuan (Frank) Liu

Hi, I am Zheyuan (Frank) Liu (刘哲源), a second-year CS PhD student at the University of Notre Dame, advised by Prof. Meng Jiang and affiliated with the DM2 lab. Before that, I received a B.S. in Computer Science and a B.S. in Applied Mathematics from Brandeis University.

My research interests lie in GenAI (e.g., MLLM/LLM) safety, AI privacy, and trustworthy AI. Currently, I am exploring machine unlearning and LLM security. For my CV, please refer to CV.

Please feel free to drop me an email for any form of communication or collaboration! I am also an active blog writer on Red Notes (小红书); feel free to find my account here!

Email  /  CV  /  Google Scholar  /  Semantic Scholar  /  X (Twitter)  /  Github  /  LinkedIn  /  Red Notes (小红书)

🔥What's New
  • 2025/01/22 One paper was accepted to the NAACL main conference and one to NAACL Findings! Congrats to all collaborators! Links here: MLLMU-Bench, SSU
  • 2024/10/29 Our new benchmark paper on MLLM unlearning, MLLMU-Bench, is now available on arXiv! GitHub repo here!
  • 2024/09/19 Two papers were accepted to the EMNLP main conference, congrats to all collaborators! Links here: OPPU, Per-Pcs. See you in Miami!
  • 2024/07/30 Our new survey paper on machine unlearning for generative AI is now available on arXiv! Reading list here!
  • 2024/05/15 One paper was accepted to ACL 2024👏: SKU. See you in Thailand!
  • 2024/04 I will join Amazon as an Applied Scientist Intern this summer (2024)! See you in Seattle!
  • 2024/03/04 Our paper GraphPrompter has been accepted to the WWW 2024 short paper track!
  • 2024/01/23 Our paper ConMU has been accepted to WWW 2024!
Selected Publications (* indicates equal contribution) [Google Scholar]
2025
Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench
Zheyuan Liu, Guangyao Dou, Mengzhao Jia, Zhaoxuan Tan, Qingkai Zeng, Yongle Yuan, Meng Jiang
Proceedings of NAACL 2025 (Main) Oral.

We introduce the Multimodal Large Language Model Unlearning Benchmark (MLLMU-Bench), a novel benchmark aimed at advancing the understanding of multimodal machine unlearning. MLLMU-Bench consists of 500 fictitious profiles and 153 profiles of public celebrities, with each profile featuring over 14 customized question-answer pairs, evaluated from both multimodal (image+text) and unimodal (text) perspectives.

Avoiding Copyright Infringement via Large Language Model Unlearning
Guangyao Dou, Zheyuan Liu, Qing Lyu, Kaize Ding, Eric Wong
Proceedings of NAACL 2025 (Findings).

We propose Stable Sequential Unlearning (SSU), a novel framework designed to unlearn copyrighted content from LLMs over multiple time steps. Our approach works by identifying and removing specific weight updates in the model's parameters that correspond to copyrighted content. We improve unlearning efficacy by introducing random labeling loss and ensuring the model retains its general-purpose knowledge by adjusting targeted parameters.

2024
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts
Zhaoxuan Tan, Zheyuan Liu, Meng Jiang
Proceedings of EMNLP 2024.

We propose Personalized Pieces (Per-Pcs) for personalizing large language models, where users can safely and efficiently share and assemble personalized PEFT modules through collaborative efforts. Per-Pcs outperforms non-personalized and PEFT retrieval baselines, offering performance comparable to OPPU with significantly lower resource use, promoting safe sharing and making LLM personalization more efficient, effective, and widely accessible.

Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning
Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bing Yin, Meng Jiang
Proceedings of EMNLP 2024.

We propose One PEFT Per User (OPPU) for personalizing large language models, where each user is equipped with a personal PEFT module that can be plugged into the base LLM to obtain their personal LLM. OPPU exhibits model ownership and enhanced generalization in capturing user behavior patterns compared to existing prompt-based LLM personalization methods.

Towards Safer Large Language Models through Machine Unlearning
Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang
Proceedings of ACL (Findings), 2024.

We introduce Selective Knowledge negation Unlearning (SKU), a novel unlearning framework for LLMs, designed to eliminate harmful knowledge while preserving utility on normal prompts.

Can we Soft Prompt LLMs for Graph Learning Tasks?
Zheyuan Liu*, Xiaoxin He*, Yijun Tian*, Nitesh V. Chawla
Proceedings of The Web Conference (WWW) (Short Paper Track), 2024.

We introduce GraphPrompter, a novel framework designed to align graph information with LLMs via soft prompts.

Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning
Zheyuan Liu*, Guangyao Dou*, Yijun Tian, Chunhui Zhang, Eli Chien, Ziwei Zhu
Proceedings of The Web Conference (WWW), 2024.

We present Controllable Machine Unlearning (ConMU), a novel framework designed to facilitate the calibration of machine unlearning.

2023
Fair Graph Representation Learning via Diverse Mixture of Experts
Zheyuan Liu*, Chunhui Zhang*, Yijun Tian, Erchi Zhang, Chao Huang, Fanny Ye, Chuxu Zhang
Proceedings of The Web Conference (WWW), 2023.

We develop Graph-Fairness Mixture of Experts (G-Fame), a novel plug-and-play method that assists any GNN in learning distinguishable representations with unbiased attributes.

2022
Graphbert: Bridging Graph and Text for Malicious Behavior Detection on Social Media
Jiele Wu, Chunhui Zhang, Zheyuan Liu, Erchi Zhang, Seven Wilson, Chuxu Zhang
Proceedings of ICDM 2022.

We present a novel, large-scale corpus of tweet data to benefit both graph-based and language-based malicious behavior detection research.

Industrial Experience
Amazon Science
Seattle, WA, USA
2024.05 - 2024.08

Applied Scientist Intern @ PXTCS
Manager: Dr. Belhassen Bayar
Mentor: Dr. Suraj Maharjan
Mentor: Rahil Parikh
Education
University of Notre Dame
South Bend, IN, USA
2023.08 - present

Ph.D. in Computer Science and Engineering
GPA: 3.92 / 4.00
Advisor: Prof. Meng Jiang
Brandeis University
Waltham, MA, USA
2019.08 - 2023.05

B.S. in Computer Science
B.S. in Applied Mathematics
GPA: 3.87 / 4.00
Service
  • Journal Reviewer: IEEE Transactions on Big Data (2023), TNNLS (2023), TKDE (2023, 2024)
  • Conference Reviewer: ICDM 2024 MLoG Workshop, ARR (2023-present), CIKM Applied Research Track (2024), NeurIPS Datasets and Benchmarks Track (2024)
Invited Talks
  • Jan 22nd, 2025: PhooD Seminar @ Brandeis University
Miscellaneous
  • I've always been surrounded by wonderful friends, collaborators, and advisors, and I try to maintain an optimistic outlook. If you're having a tough time and would like someone to talk to, feel free to reach out!
  • I like sports, lifting, and making new friends.
  • I like animals, especially chinchillas, hamsters, and kitties.
  • MBTI: ESFJ :)
  • Here is my Instagram.

Template courtesy: Jon Barron.