Biography

I am a PhD candidate at the Institute for Computational and Mathematical Engineering at Stanford University, co-advised by Surya Ganguli and Daniel Yamins. I am an Open Philanthropy AI Fellow and was previously a Stanford Data Science Scholar. I completed my B.S. in Applied Math at Brown University in 2017.

Research

My research uses principles from physics to explain mechanisms of implicit regularization arising from the learning dynamics of neural networks trained with stochastic gradient descent. Thus far, my research has focused on the effect of explicit regularization on the geometry of the loss landscape, the identification of symmetries and conservation laws in the learning dynamics, and the use of ideas from thermodynamics to understand the impact of stochasticity in SGD. My current research focuses on linking the dynamics of SGD to implicit biases that affect the generalization and robustness properties of the trained network.

Publications

Feng Chen*, Daniel Kunin*, Atsushi Yamamura*, Surya Ganguli. Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks. [paper] [code] [talk]

Daniel Kunin*, Atsushi Yamamura*, Chao Ma, Surya Ganguli. The Asymmetric Maximum Margin Bias of Quasi-Homogeneous Neural Networks. ICLR 2023. [paper] [talk]

Daniel Kunin*, Javier Sagastuy-Brena*, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L.K. Yamins. Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations and anomalous diffusion. NECO 2023. [paper] [code]

Chao Ma, Daniel Kunin, Lei Wu, Lexing Ying. Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. JMLR 2022. [paper]

Hidenori Tanaka, Daniel Kunin. Noether's Learning Dynamics: The Role of Kinetic Symmetry Breaking in Deep Learning. NeurIPS 2021. [paper]

Daniel Kunin*, Javier Sagastuy-Brena, Surya Ganguli, Daniel L.K. Yamins, Hidenori Tanaka*. Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics. ICLR 2021. [paper] [code] [talk] [blog]

Hidenori Tanaka*, Daniel Kunin*, Daniel L.K. Yamins, Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. NeurIPS 2020. [paper] [code]

Daniel Kunin*, Aran Nayebi*, Javier Sagastuy-Brena*, Surya Ganguli, Jonathan M. Bloom, Daniel L.K. Yamins. Two Routes to Scalable Credit Assignment without Weight Symmetry. ICML 2020. [paper] [code]

Daniel Kunin*, Jonathan M. Bloom*, Aleksandrina Goeva, Cotton Seed. Loss Landscapes of Regularized Linear Autoencoders. ICML 2019. [paper] [code] [talk] [blog]

Teaching

CS230: Deep Learning. From 2018 through 2019 I worked as a Course Assistant for CS230, where I redesigned the weekly discussion sections and created educational tutorials on key deep learning concepts.

Seeing Theory: A Visual Introduction to Probability and Statistics. From 2015 through 2018 I built Seeing Theory, an online textbook of interactive visualizations of core concepts in probability and statistics. Since its launch in 2016, the website has been viewed over three million times by more than one million users from every country in the world.