Hi, I’m Xiangyu Hong, a third-year undergraduate majoring in Electronic Engineering at Tsinghua University.
My research interests focus on understanding the inner workings of large language models (LLMs) and enhancing the interpretability of their behaviors. I aim to leverage this analysis to improve LLM performance and reliability. Some of the key questions I want to explore include:
- How do contextual knowledge and parameter-encoded knowledge interact, especially in cases of knowledge conflict or complementarity?
- How can we deepen our understanding of model representations to detect hallucinations, pinpoint key nodes in information processing, and effectively extract and apply task-specific representations?
Publications
On the Token Distance Modeling Ability of Higher RoPE Attention Dimension [Poster] [Slide]
Xiangyu Hong*, Che Jiang*, Biqing Qi, Fandong Meng, Mo Yu, Bowen Zhou, Jie Zhou.
EMNLP 2024 Findings.
Focused on long-text semantic modeling, with particular attention to the impact of rotary position embedding (RoPE) on LLM performance when processing long documents.
On Large Language Models’ Hallucination with Regard to Known Facts
Che Jiang, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu, Bowen Zhou, Jie Zhou.
NAACL 2024 Main.
Focused on detecting hallucinations in LLMs, with an emphasis on the reasoning dynamics at play when models hallucinate despite knowing the relevant facts.
Awards
- Tsinghua Spark Scientific and Technological Innovation Fellowship (top 1% university-wide) June 2024 – Present
- Received funding from the Beijing Natural Science Foundation October 2024
- Tsinghua University Comprehensive Excellence Scholarship - Qinxiao Scholarship October 2024
- Tsinghua University Academic Excellence Scholarship October 2023