About me

Hi, thanks for stopping by.

I am Zonglin Yang, a third-year Ph.D. candidate at Nanyang Technological University, supervised by Erik Cambria. I also work closely with Soujanya Poria. Before NTU, I obtained my master’s degree at Cornell University, supervised by Claire Cardie and Xinya Du, and my bachelor’s degree at Huazhong University of Science and Technology, supervised by Xinggang Wang. I interned at Microsoft Research in the NLC group, mentored by Li Dong.

My representative works are the MOOSE and MOOSE-Chem series.

Like the moose that ventures into uncharted wilderness, the MOOSE & MOOSE-Chem series explores the untamed landscape of scientific hypotheses to uncover hidden insights.

I am open to academic collaborations. Please drop me an email (zonglin.yang@ntu.edu.sg) if you are interested in collaborating with me.

Research

My current research interests are:

  • LLMs for Scientific Discovery
    • MOOSE
      • The first work showing that LLMs can be leveraged to generate novel and valid scientific hypotheses.
    • MOOSE-Chem
      • Provides a mathematically proven theoretical foundation for automated scientific hypothesis discovery.
      • The first work showing that LLMs can rediscover the main innovations of many research hypotheses from papers published in Nature or Science.
    • MOOSE-Chem2
      • Introduces the task of fine-grained scientific hypothesis discovery, aiming for experimentally actionable hypotheses.
      • Frames the task as an optimization problem and proposes a hierarchical heuristic search that theoretically smooths the optimization landscape, helping reach better local optima.
    • MOOSE-Chem3
      • Introduces the task of experiment-guided ranking, which bridges automated scientific hypothesis discovery and experimental feedback.
      • Proposes an experiment simulator that enables scalable research on experiment-guided ranking without relying on real wet-lab experiments.
    • ResearchBench
      • The first large-scale benchmark evaluating LLMs on a sufficient set of scientific-discovery sub-tasks: inspiration retrieval, hypothesis composition, and hypothesis ranking.
      • It suggests that LLMs can serve as research hypothesis mines, with stronger LLMs acting as richer mines and greater inference compute enabling more miners.
    • NoveltyBench
      • The first benchmark for evaluating the novelty of a scientific hypothesis.
    • Survey
      • The first comprehensive survey on how LLMs can assist scientific research.
  • Reasoning & Knowledge in Natural Language Processing

News

[2025.05]. I’m hosting a tutorial on Frontiers of LLMs for Automated Scientific Discovery at the AI4X conference on July 7th — stay tuned!

[2025.03]. I will give an invited talk on MOOSE-Chem at the ICLR 2025 Agentic AI4S workshop on April 27th!

[2025.02]. Got one paper accepted to CVPR 2025. Congrats to Di, Junxian, Jingdi, and Xunzhi!

[2025.01]. I was invited to serve as an Area Chair for ACL Rolling Review (ARR)!

[2025.01]. MOOSE-Chem is accepted to ICLR 2025. Thanks to all my collaborators!

[2024.07]. MOOSE won the Best Poster Award at the ICML 2024 AI4Science workshop!

[2024.05]. MOOSE is accepted to ACL 2024. Thanks to all my collaborators!

[2024.04]. I will give an invited talk on MOOSE at the IJCAI 2024 AI4Research workshop on August 5th!

[2024.01]. Got one paper accepted to EACL 2024. Thanks to all my collaborators!

[2023.10]. Got one paper accepted to EMNLP 2023. Congrats to Wei!

[2023.06]. I will give an invited talk on our Case-based Reasoning paper at the ICCBR 2023 TMG workshop on July 17th!

[2023.05]. Got one paper accepted to ACL 2023. Congrats to Jinjie!

[2023.04]. Our EACL 2023 paper was selected for an oral presentation!

[2023.01]. Got one paper accepted to EACL 2023. Thanks to all my collaborators!

[2020.10]. Got one paper accepted to EMNLP 2020 (Findings). Thanks to all my collaborators!

Academic Services

Area Chair:

  • ARR

Conference Reviewer:

  • ARR, COLM 2025, ICLR 2025, NLPCC 2024, COLM 2024, COLING 2024, EMNLP 2023, ACL 2023, EMNLP 2022, COLING 2022

Student Volunteer:

  • EMNLP 2023