Daniil Dmitriev

I am a PhD student at the ETH AI Center, part of ETH Zurich.
My PhD advisors are Afonso Bandeira and Fanny Yang.

I hold an MS in Data Science from EPFL, where I was a research assistant for Martin Jaggi, working on optimization in machine learning. My master's thesis, on statistical physics in neural network theory, was supervised by Lenka Zdeborová.

I am broadly interested in topics at the intersection of mathematics, theoretical computer science, and machine learning. In particular, I am currently working on robustness, privacy, and random matrix theory.

CV  /  Email  /  GitHub  /  Google Scholar  /  LinkedIn


Selected Publications & Preprints


Greedy heuristics and linear relaxations for the random hitting set problem


Gabriel Arpino, Daniil Dmitriev, Nicolo Grometto (alphabetical order)
preprint, 2023
arxiv

Proved that the standard greedy algorithm is optimal for the hitting set problem in the random Bernoulli case.


Deterministic equivalent and error universality of deep random features learning


Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro
ICML, 2023
arxiv / video / poster

Rigorously established Gaussian universality of the ridge-regression test error for deep networks with frozen intermediate layers.


On monotonicity of Ramanujan function for binomial random variables


Daniil Dmitriev, Maksim Zhukovskii
Statistics & Probability Letters, 2021
arxiv

Analyzed properties of the CDF of a binomial random variable near its median.


Dynamic model pruning with feedback


Tao Lin, Sebastian U Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi
ICLR, 2020
arxiv

Proposed a novel model compression method that generates a sparse trained model without additional overhead.



Invited Talks

Integrality gaps for vertex covers on sparse Bernoulli hypergraphs


Workshop on Spin Glasses, 2022
video

Design and source code from Leonid Keselman's website