Yunxiang Mo  莫云翔

I am an undergraduate at the Hong Kong University of Science and Technology (HKUST), pursuing a double major in Computer Science and Mathematics with an Extended Major in Artificial Intelligence. I am fortunate to be advised by Prof. Yangqiu Song at the HKUST KnowComp Group.

My research interests center on natural language processing, with a focus on the reasoning and evaluation of large language models and vision-language models. I am especially interested in abductive and multimodal reasoning — how models form, defend, and revise hypotheses under ambiguity.

I am currently looking for a research-exchange position in the U.S. for the upcoming term. If you are a faculty member working on related topics and have an opening, I would be glad to chat — please feel free to reach out via email.

🔥 News

  • 2026.03: 🎉 An extended version of DixitWorld was accepted to ACL 2026 (AC meta-review 9/10, oral decision pending). [OpenReview]
  • 2026.01: 🎉 ScaleCUA was accepted to ICLR 2026 as an Oral. [arXiv]
  • 2025.10: 🎉 DixitWorld received a Spotlight at the BlackboxNLP Workshop at EMNLP 2025. [arXiv]

💼 Experience

  • 2025.04 – present | Undergraduate Researcher, HKUST KnowComp Group, Hong Kong SAR, China
    Advised by Prof. Yangqiu Song. Research on the reasoning and evaluation of large language models and vision-language models, with a focus on abductive and multimodal reasoning.

  • 2025.06 – 2025.08 | Machine Learning Engineer Intern, Beijing Ingenic Semiconductor Co., Ltd., Beijing, China
    Developed and optimized ML models for embedded and on-chip AI scenarios; built training, evaluation, and inference pipelines in PyTorch; deployed models to edge devices under tight latency and memory constraints.

  • 2025.01 | Intern, Benchmark Architectural Design Co., Ltd.
    Developed front-end modules with the MFC framework for an internal mini-program project; handled UI design, event handling, and system debugging in a small team.

📚 Publications

DixitWorld: Evaluating Multimodal Abductive Reasoning in Vision-Language Models with Multi-Agent Dixit Gameplay

Yunxiang Mo, Tianshi Zheng, Qing Zong, Jiayu Liu, Baixuan Xu, Yauwai Yim, Chunkit Chan, Jiaxin Bai, Yangqiu Song.

ACL 2026 (AC meta-review 9/10; oral decision pending)

Extended version of the workshop paper below — adds a Medium difficulty tier (252 vs. 168 QA items), a 72B-parameter scaling ablation, and calibration/sensitivity analyses.

ScaleCUA: Scaling Open-Source Computer Use Agents with Cross-Platform Data

Zhaoyang Liu, Jingjing Xie, Zichen Ding, Zehao Li, Bowen Yang, Zhenyu Wu, Xuehui Wang, Qiushi Sun, Shi Liu, Weiyun Wang, Shenglong Ye, Qingyun Li, Xuan Dong, Yue Yu, Chenyu Lu, Yunxiang Mo, Yao Yan, Zeyue Tian, Xiao Zhang, Yuan Huang, Yiqian Liu, Weijie Su, Gen Luo, Xiangyu Yue, Biqing Qi, Kai Chen, Bowen Zhou, Yu Qiao, Qifeng Chen, Wenhai Wang.

ICLR 2026 Oral

My contribution: data pipeline and cross-platform workflow components in the open-source codebase.

DixitWorld: Evaluating Multimodal Abductive Reasoning in Vision-Language Models with Multi-Agent Dixit Gameplay

Yunxiang Mo, Tianshi Zheng, Qing Zong, Jiayu Liu, Baixuan Xu, Yauwai Yim, Chunkit Chan, Jiaxin Bai, Yangqiu Song.

EMNLP 2025 BlackboxNLP Workshop, Spotlight

Original workshop version; an extended version was accepted to ACL 2026 Main (above).

🏆 Honors and Awards

  • University’s Scholarship Scheme for Continuing Undergraduate Students, HKUST, 2024 & 2025. Top 1% of continuing undergraduates.
  • S.S. Chern Class, HKUST. Honor for top academic performance across all mathematics coursework.
  • Dean’s List Honor, HKUST, 2024 & 2025. GPA above 3.7.

🤝 Academic Services

(Coming soon.)

🎓 Teaching

  • Teaching Assistant, Discrete Mathematics — HKUST.
  • Teaching Assistant, Exploring Artificial Intelligence — HKUST.