Biography

I’m a Miller Postdoctoral Fellow at UC Berkeley, hosted by Mike DeWeese (Neuroscience) and Peter Bartlett (Statistics). I completed my PhD in 2025 at the Institute for Computational and Mathematical Engineering at Stanford University, advised by Surya Ganguli. I’m a former Open Philanthropy AI Fellow and Stanford Data Science Scholar, and I hold a B.S. in Applied Math from Brown University (2017).

Research

Neural network models have revolutionized artificial intelligence, yet the mathematical foundations of their success remain unclear. My research investigates the learning dynamics of neural networks to understand how inductive biases emerge through training and how networks extract meaningful representations from data. Integrating insights from statistics, physics, and neuroscience, I aim to uncover fundamental mathematical principles governing learning in both artificial and natural intelligence.

Publications

Daniel Kunin, Giovanni Luca Marchetti, Feng Chen, Dhruva Karkada, James B. Simon, Michael R. DeWeese, Surya Ganguli, Nina Miolane. Alternating Gradient Flows: A Theory of Feature Learning in Two-layer Neural Networks. NeurIPS 2025. [paper] [code]

Clémentine C.J. Dominé*, Nicolas Anguita*, Alexandra M. Proca, Lukas Braun, Daniel Kunin, Pedro A. M. Mediano, Andrew M. Saxe. From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks. ICLR 2025. [paper]

Daniel Kunin*, Allan Raventós*, Clémentine Dominé, Feng Chen, David Klindt, Andrew Saxe, Surya Ganguli. Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. Spotlight NeurIPS 2024. [paper] [code]

Feng Chen*, Daniel Kunin*, Atsushi Yamamura*, Surya Ganguli. Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks. JSTAT 2024. [paper] [code] [talk]

Daniel Kunin*, Atsushi Yamamura*, Chao Ma, Surya Ganguli. The Asymmetric Maximum Margin Bias of Quasi-Homogeneous Neural Networks. Spotlight ICLR 2023. [paper] [talk]

Daniel Kunin*, Javier Sagastuy-Brena*, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L.K. Yamins. The limiting dynamics of SGD: modified loss, phase space oscillations and anomalous diffusion. NECO 2023. [paper] [code]

Chao Ma, Daniel Kunin, Lei Wu, Lexing Ying. Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. JMLR 2022. [paper]

Hidenori Tanaka, Daniel Kunin. Noether's Learning Dynamics: The Role of Kinetic Symmetry Breaking in Deep Learning. NeurIPS 2021. [paper]

Daniel Kunin*, Javier Sagastuy-Brena, Surya Ganguli, Daniel L.K. Yamins, Hidenori Tanaka*. Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics. ICLR 2021. [paper] [code] [talk] [blog]

Hidenori Tanaka*, Daniel Kunin*, Daniel L.K. Yamins, Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. NeurIPS 2020. [paper] [code]

Daniel Kunin*, Aran Nayebi*, Javier Sagastuy-Brena*, Surya Ganguli, Jonathan M. Bloom, Daniel L.K. Yamins. Two Routes to Scalable Credit Assignment without Weight Symmetry. ICML 2020. [paper] [code]

Daniel Kunin*, Jonathan M. Bloom*, Aleksandrina Goeva, Cotton Seed. Loss Landscapes of Regularized Linear Autoencoders. Oral ICML 2019. [paper] [code] [talk] [blog]

Teaching

CS230: Deep Learning. From 2018 through 2019, I worked as a Course Assistant for CS230, where I redesigned the weekly discussion sections and created educational tutorials on key deep learning concepts.

Seeing Theory: A Visual Introduction to Probability and Statistics. From 2015 through 2018, I built Seeing Theory, an online textbook of interactive visualizations of core concepts in probability and statistics. Since its launch in 2016, the website has been viewed more than three million times by over one million users from every country in the world.