About Me

I am a PhD student in the Department of Mathematics at UC San Diego (since 2023). My primary research goal is to develop deep learning theory that provides practical insights and guidance, with a particular focus on understanding how the inductive biases of optimization algorithms and model architectures aid generalization. I am currently working with Prof. Alex Cloninger, Prof. Rahul Parhi, and Prof. Yu-Xiang Wang on this topic.

Before turning to machine learning, I worked in algebraic topology and algebraic geometry, particularly in motivic homotopy theory. I received both my B.S. and M.S. degrees in Mathematics from Southern University of Science and Technology, where I was advised by Prof. Yifei Zhu.

Papers

IsoCompute Playbook: Optimally Scaling Sampling Compute for LLM RL
Zhoujun Cheng, Yutao Xie, Yuxiao Qu, Amrith Setlur, Shibo Hao, Varad Pimpalkhute, Tongtong Liang, Feng Yao, Zhengzhong Liu, Eric Xing, Virginia Smith, Ruslan Salakhutdinov, Zhiting Hu, Taylor Killian, Aviral Kumar
Manuscript · arXiv

The Inductive Bias of Convolutional Neural Networks: Locality and Weight Sharing Reshape Implicit Regularization
Tongtong Liang, Esha Singh, Rahul Parhi, Alexander Cloninger, Yu-Xiang Wang
Manuscript · arXiv

Generalization Below the Edge of Stability: The Role of Data Geometry
Tongtong Liang, Alexander Cloninger, Rahul Parhi, Yu-Xiang Wang
ICLR 2026 · arXiv

Stable Minima of ReLU Neural Networks Suffer from the Curse of Dimensionality: The Neural Shattering Phenomenon
Tongtong Liang, Dan Qiao, Yu-Xiang Wang, Rahul Parhi
NeurIPS 2025 Spotlight · arXiv