PhD Student @ VT
:email-icon-color: Email
:google-scholar-icon-color: Google Scholar
:github-icon: GitHub
:linkedin-icon-color: LinkedIn
:x-icon-color: X (formerly Twitter)
Hi there, I’m Yu-Min Tseng (曾郁珉). You can call me Amy! 👋
I am a first-year PhD student at Virginia Tech, advised by Prof. Tu Vu. Previously, I was a visiting graduate student at the University of Virginia, supervised by Prof. Yu Meng. I received my M.S. in Data Science from National Taiwan University, where I was advised by Prof. Hsin-Hsi Chen and Prof. Chuan-Ju Wang.
<aside> 🤖
My research broadly focuses on large language models (LLMs). Recently, I have been especially interested in language model augmentation and test-time adaptation. I’m always happy to chat about research — feel free to reach out!
</aside>
Aug. 2025. 🏆 Happy to receive the COLM Travel Scholarship Award and to serve as a volunteer. See you all in Montreal!
July 2025. 🎉 Our paper, Evaluating LLMs as Expert Annotators, is accepted to COLM 2025!
June 2025. 📄 New preprint, SealQA, is now available on arXiv! Check out our datasets on Hugging Face!
May 2025. ✈ Back in Taiwan from May through July — feel free to reach out. Let’s catch up!
Sept. 2024. 🎉 Our paper, Expert-Level, is accepted to the WiML Workshop @ NeurIPS 2024!
Sept. 2024. 🎉 Our paper, LLM Persona Survey, is accepted to EMNLP 2024 Findings! Check out our GitHub repo!
June 2024. 🏆 I’m honored to be 1 of 21 recipients nationwide of the Foxconn Technology Fellowship!
Please see my Google Scholar or Publications for the full list.
Evaluating Large Language Models as Expert Annotators
*COLM 2025*
Yu-Min Tseng, Wei-Lin Chen, Chung-Chi Chen, Hsin-Hsi Chen.
:papers: Paper :github-icon: GitHub
Two Tales of Persona in LLMs: A Survey of Role-Playing and Personalization
EMNLP 2024 Findings
Yu-Min Tseng*, Yu-Chao Huang*, Teng-Yun Hsiao*, Wei-Lin Chen*, Chao-Wei Huang, Yu Meng, Yun-Nung Chen.
:papers: Paper :github-icon: GitHub :x-icon-color: Post :poster: Poster