Daniil Dmitriev

I am a fourth-year PhD student in the Department of Mathematics at ETH Zurich.
My PhD is generously funded by the ETH AI Center and the ETH FDS initiative.
I am supervised by Afonso Bandeira and Fanny Yang.

I am broadly interested in topics at the intersection of mathematics, theoretical computer science, and machine learning, with a current focus on the interplay between structure and randomness in algorithm analysis.

I have an MS in Data Science from EPFL, where I worked with Lenka Zdeborová and Martin Jaggi.

Google Scholar  /  CV  /  GitHub  /  LinkedIn  /  X

Contact me at: <surname><firstname>97 at gmail dot com.

Selected Publications & Preprints

Robust mixture learning when outliers overwhelm small groups
DD*, Rares-Darius Buhai*, Stefan Tiegel, Alexander Wolters, Gleb Novikov, Amartya Sanyal, David Steurer, Fanny Yang
NeurIPS, 2024
arxiv / poster

We propose an efficient meta-algorithm for recovering the means of a mixture model in the presence of large additive contamination.

On the growth of mistakes in differentially private online learning: a lower bound perspective
DD, Kristof Szabo, Amartya Sanyal
COLT, 2024
arxiv / poster

We prove that, for a certain class of algorithms, the number of mistakes in online learning under a differential privacy constraint must grow logarithmically in the number of rounds.

Asymptotics of learning with deep structured (random) features
Dominik Schröder*, DD*, Hugo Cui*, Bruno Loureiro
ICML, 2024
arxiv / poster

We derive a deterministic equivalent for the generalization error of general Lipschitz functions. Furthermore, we prove a linearization for a sample covariance matrix of a structured random feature model with two hidden layers.

Greedy heuristics and linear relaxations for the random hitting set problem
Gabriel Arpino, DD, Nicolo Grometto (αβ order)
APPROX, 2024
arxiv

We prove that the standard greedy algorithm is order-optimal for the hitting set problem in the random Bernoulli case.

Deterministic equivalent and error universality of deep random features learning
Dominik Schröder, Hugo Cui, DD, Bruno Loureiro
ICML, 2023
arxiv / video / poster

We rigorously establish Gaussian universality of the test error for ridge regression in deep networks with frozen intermediate layers.

On monotonicity of Ramanujan function for binomial random variables
DD, Maksim Zhukovskii (αβ order)
Statistics & Probability Letters, 2021
arxiv

We analyze properties of the CDF of a binomial random variable near its median.

Dynamic model pruning with feedback
Tao Lin, Sebastian U Stich, Luis Barba, DD, Martin Jaggi
ICLR, 2020
arxiv

We propose a novel model compression method that generates a sparse trained model without additional overhead.



Invited Talks

Robust mixture learning when outliers overwhelm small groups
DACO seminar, ETH Zurich, 2024
website

Robust mixture learning when outliers overwhelm small groups
Youth in High Dimensions, ICTP, Trieste, 2024
video

Lower bounds for online private learning
Delta seminar, University of Copenhagen, 2024
website

On integrality gaps for the random hitting set problem
Graduate seminar in probability, ETH Zurich, 2023

Integrality gaps for vertex covers on sparse Bernoulli hypergraphs
Workshop on Spin Glasses, Les Diablerets, 2022
video


Teaching

ETH Zurich, TA: Mathematics of Data Science (Fall 2021), Mathematics of Machine Learning (Spring 2022)

EPFL, TA: Artificial Neural Networks (Spring 2020, Spring 2021)

Supervising MSc theses at ETH Zurich: Carolin Heinzler (Fall 2023); Krish Agrawal and Ulysse Faure (Spring 2024)


Design and source code from Leonid Keselman's website