Daniil Dmitriev
I am a PhD student at the ETH AI Center, part of ETH Zurich.
My PhD advisors are Afonso Bandeira and Fanny Yang.
I have an MSc in Data Science from EPFL, where I was a research assistant in Martin Jaggi's group, working on optimization in machine learning.
My master's thesis, on statistical physics in neural network theory, was supervised by Lenka Zdeborová.
I am broadly interested in topics at the intersection of mathematics, theoretical computer science, and machine learning.
In particular, I am currently working on robustness, privacy, and random matrix theory.
CV /
Email /
GitHub /
Google Scholar /
LinkedIn
Selected Publications & Preprints
Greedy heuristics and linear relaxations for the random hitting set problem
Gabriel Arpino, Daniil Dmitriev, Nicolo Grometto (alphabetical order)
preprint, 2023
arxiv /
Proved that the standard greedy algorithm is optimal for the hitting set problem on random Bernoulli instances.
Deterministic equivalent and error universality of deep random features learning
Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro
ICML, 2023
arxiv /
video /
poster /
Rigorously established Gaussian universality of the test error for ridge regression on deep networks with frozen intermediate layers.
On monotonicity of Ramanujan function for binomial random variables
Daniil Dmitriev, Maksim Zhukovskii
Statistics & Probability Letters, 2021
arxiv /
Analyzed properties of the CDF of a binomial random variable near its median.
Dynamic model pruning with feedback
Tao Lin, Sebastian U Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi
ICLR, 2020
arxiv /
Proposed a novel model compression method that produces a sparse trained model without additional training overhead.