Biography

I am a PhD candidate at the Institute for Computational and Mathematical Engineering at Stanford University, advised by Surya Ganguli. I am an Open Philanthropy AI Fellow and was previously a Stanford Data Science Scholar. I received my B.S. in Applied Math from Brown University in 2017.

Research

Neural network models have revolutionized artificial intelligence, yet the mathematical foundations of their success remain unclear. My research investigates the learning dynamics of neural networks to understand how inductive biases emerge through training and how networks extract meaningful representations from data. Integrating insights from statistics, physics, and neuroscience, I aim to uncover fundamental mathematical principles governing learning in both artificial and natural intelligence.

Publications

Clémentine C.J. Dominé*, Nicolas Anguita*, Alexandra M. Proca, Lukas Braun, Daniel Kunin, Pedro A. M. Mediano, Andrew M. Saxe. From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks. ICLR 2025. [paper]

Daniel Kunin*, Allan Raventós*, Clémentine Dominé, Feng Chen, David Klindt, Andrew Saxe, Surya Ganguli. Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. Spotlight NeurIPS 2024. [paper] [code]

Feng Chen*, Daniel Kunin*, Atsushi Yamamura*, Surya Ganguli. Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks. JSTAT 2024. [paper] [code] [talk]

Daniel Kunin*, Atsushi Yamamura*, Chao Ma, Surya Ganguli. The Asymmetric Maximum Margin Bias of Quasi-Homogeneous Neural Networks. Spotlight ICLR 2023. [paper] [talk]

Daniel Kunin*, Javier Sagastuy-Brena*, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L.K. Yamins. The limiting dynamics of SGD: modified loss, phase space oscillations and anomalous diffusion. NECO 2023. [paper] [code]

Chao Ma, Daniel Kunin, Lei Wu, Lexing Ying. Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. JMLR 2022. [paper]

Hidenori Tanaka, Daniel Kunin. Noether's Learning Dynamics: The Role of Kinetic Symmetry Breaking in Deep Learning. NeurIPS 2021. [paper]

Daniel Kunin*, Javier Sagastuy-Brena, Surya Ganguli, Daniel L.K. Yamins, Hidenori Tanaka*. Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics. ICLR 2021. [paper] [code] [talk] [blog]

Hidenori Tanaka*, Daniel Kunin*, Daniel L.K. Yamins, Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. NeurIPS 2020. [paper] [code]

Daniel Kunin*, Aran Nayebi*, Javier Sagastuy-Brena*, Surya Ganguli, Jonathan M. Bloom, Daniel L.K. Yamins. Two Routes to Scalable Credit Assignment without Weight Symmetry. ICML 2020. [paper] [code]

Daniel Kunin*, Jonathan M. Bloom*, Aleksandrina Goeva, Cotton Seed. Loss Landscapes of Regularized Linear Autoencoders. Oral ICML 2019. [paper] [code] [talk] [blog]

Teaching

CS230: Deep Learning. From 2018 through 2019, I worked as a Course Assistant for CS230, where I redesigned the weekly discussion sections and created educational tutorials on key deep learning concepts.

Seeing Theory: A Visual Introduction to Probability and Statistics. From 2015 through 2018, I built Seeing Theory, an online textbook of interactive visualizations of core concepts in probability and statistics. Since its launch in 2016, the website has been viewed over three million times by more than one million users from every country in the world.