About Me

I am a PhD student in the Department of Mathematics at UC San Diego (since 2023), currently working with Prof. Alex Cloninger, Prof. Rahul Parhi, and Prof. Yu-Xiang Wang on deep learning research. Before turning to deep learning, I worked in algebraic topology and algebraic geometry. I received my B.S. and M.S. degrees in Mathematics from Southern University of Science and Technology, where I was advised by Prof. Yifei Zhu.

My research focuses on the factors that shape representation formation in neural network training: (1) architectural inductive bias, (2) the implicit bias of gradient-based optimization, and (3) data- and objective-induced priors, together with the interplay among them. My recent work develops a predictive framework for how these factors jointly determine which representations emerge, and how those representations generalize, in shallow ReLU networks trained with vanilla gradient descent. My long-term goal is to extend this framework to modern architectures and optimizers, and to turn that understanding into design principles for deep learning systems.

Papers

IsoCompute Playbook: Optimally Scaling Sampling Compute for LLM RL
Zhoujun Cheng, Yutao Xie, Yuxiao Qu, Amrith Setlur, Shibo Hao, Varad Pimpalkhute, Tongtong Liang, Feng Yao, Zhengzhong Liu, Eric Xing, Virginia Smith, Ruslan Salakhutdinov, Zhiting Hu, Taylor Killian, Aviral Kumar
Preprint · arXiv

The Inductive Bias of Convolutional Neural Networks: Locality and Weight Sharing Reshape Implicit Regularization
Tongtong Liang, Esha Singh, Rahul Parhi, Alexander Cloninger, Yu-Xiang Wang
Preprint · arXiv

Generalization Below the Edge of Stability: The Role of Data Geometry
Tongtong Liang, Alexander Cloninger, Rahul Parhi, Yu-Xiang Wang
ICLR 2026 · arXiv

Stable Minima of ReLU Neural Networks Suffer from the Curse of Dimensionality: The Neural Shattering Phenomenon
Tongtong Liang, Dan Qiao, Yu-Xiang Wang, Rahul Parhi
NeurIPS 2025 Spotlight · arXiv