Hello! Welcome to my homepage. Currently, I am advised by Prof. Soujanya Poria and focus on grounded QA with agentic LLMs. Interested in academic collaboration? Feel free to contact me at maojia_song@mymail.sutd.edu.sg.

I graduated from the School of Electronic and Electrical Engineering at the University of Leeds with a First Class Honours degree.

My research interests include grounded QA with agentic LLMs. My published papers can be found here.

For more details, please see my CV.

🌠 My Wish

I am curious about the construction of self-sustained AI agents, which draws on many prevailing technologies, including large language models, human-in-the-loop learning, world models, and even adaptive intelligence. The idea is simple: a self-sustaining AI should learn continually from its environment and be able to rebuild related constituents of the same identity from the abstract representations it is given. This forces a self-sustained AI to adapt to the world rather than settle for a one-size-fits-all solution. Its external performance should be founded on an understanding of abstract representations deep enough to transform the real world into a digital one. Building on these abilities, the agents of an applicable system can then help the underlying self-sustained AI achieve embodied interaction with humans.

🔥 News

2026.01: 🤞 We got four papers accepted at ICLR 2026.

2025.10: 🔍 Embarked on an exciting journey to uncover the driving mysteries behind deep agentic search! Dive into our latest work: Demystifying deep search: a holistic evaluation with hint-free multi-hop questions and factorised metrics.

2025.09: 🚀 Thrilled to unveil our groundbreaking paradigm for agent pre-training: Scaling Agents via Continual Pre-training.

2025.02: 😎 I'll be presenting an oral talk on Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse at ICLR 2025.

2024.09: 🎉🎉 We released Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse: a holistic evaluation of the groundedness of LLMs in a RAG framework, and the Trust-Align framework that aligns LLMs for a higher Trust-Score.

2023.08: 🎉 I joined DeClaRe Lab as an NLP researcher!

๐Ÿ“ Selected Publications