Selected Publications & Preprints
|
Robust mixture learning when outliers overwhelm small groups
DD*, Rares-Darius Buhai*, Stefan Tiegel, Alexander Wolters, Gleb Novikov, Amartya Sanyal, David Steurer, Fanny Yang
NeurIPS, 2024
arxiv /
poster /
We propose an efficient meta-algorithm for recovering the means of a mixture model in the presence of large additive contamination.
|
|
On the growth of mistakes in differentially private online learning: a lower bound perspective
DD, Kristof Szabo, Amartya Sanyal
COLT, 2024
arxiv /
poster /
We prove that, for a certain class of algorithms, the number of mistakes in online learning under differential privacy constraints must grow logarithmically.
|
|
Asymptotics of learning with deep structured (random) features
Dominik Schröder*, DD*, Hugo Cui*, Bruno Loureiro
ICML, 2024
arxiv /
poster /
We derive a deterministic equivalent for the generalization error of general Lipschitz functions.
Furthermore, we prove a linearization for a sample covariance matrix of a structured random feature model with two hidden layers.
|
|
Greedy heuristics and linear relaxations for the random hitting set problem
Gabriel Arpino, DD, Nicolo Grometto (αβ order)
APPROX, 2024
arxiv /
We prove that the standard greedy algorithm is order-optimal for the hitting set problem in the random Bernoulli case.
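For readers unfamiliar with the setting, here is a minimal sketch of the standard greedy heuristic on a random Bernoulli instance (each element joins each set independently with probability p). The instance parameters and function names are illustrative, not from the paper.

```python
import random

def greedy_hitting_set(sets, universe):
    """Greedy heuristic: repeatedly pick the element contained in the
    largest number of not-yet-hit sets, until every set is hit."""
    remaining = [set(s) for s in sets]
    chosen = []
    while remaining:
        # element that hits the most remaining sets
        best = max(universe, key=lambda e: sum(e in s for s in remaining))
        chosen.append(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

# Random Bernoulli instance: element e belongs to each set w.p. p.
random.seed(0)
n, m, p = 30, 50, 0.2
universe = range(n)
sets = [{e for e in universe if random.random() < p} for _ in range(m)]
sets = [s for s in sets if s]  # drop empty sets (unhittable)

cover = greedy_hitting_set(sets, universe)
assert all(any(e in s for e in cover) for s in sets)  # every set is hit
```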
|
|
Deterministic equivalent and error universality of deep random features learning
Dominik Schröder, Hugo Cui, DD, Bruno Loureiro
ICML, 2023
arxiv /
video /
poster /
We rigorously establish Gaussian universality of the test error for ridge regression on deep networks with frozen intermediate layers.
|
|
On monotonicity of Ramanujan function for binomial random variables
DD, Maksim Zhukovskii (αβ order)
Statistics & Probability Letters, 2021
arxiv /
We analyze properties of the CDF of a binomial random variable near its median.
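Not the paper's result, but a quick numerical illustration of the object in question: the exact binomial CDF evaluated at ⌊np⌋, which for p = 1/2 is a median point, where the CDF is at least 1/2 (a classical fact; the helper below is mine).

```python
from math import comb

def binom_cdf(n, p, k):
    """P(Bin(n, p) <= k), computed exactly from binomial coefficients."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# For p = 1/2 and even n, k = n/2 = floor(np) is a median of Bin(n, 1/2):
# P(X <= n/2) = 1/2 + P(X = n/2)/2 > 1/2.
for n in range(2, 40, 2):
    assert binom_cdf(n, 0.5, n // 2) >= 0.5
```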
|
|
Dynamic model pruning with feedback
Tao Lin, Sebastian U Stich, Luis Barba, DD, Martin Jaggi
ICLR, 2020
arxiv /
We propose a novel model compression method that generates a sparse trained model without additional overhead.
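A minimal toy sketch of the dynamic-pruning-with-feedback idea on least squares, under my own assumptions: the model is re-pruned by magnitude at every step, gradients are computed at the sparse model, and the update is applied to the dense weights (the "feedback"). The objective, names, and hyperparameters are illustrative only.

```python
import numpy as np

def magnitude_mask(w, sparsity):
    """Boolean mask keeping (roughly) the largest-magnitude
    fraction (1 - sparsity) of the weights."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return np.ones_like(w, dtype=bool)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.abs(w) > thresh

rng = np.random.default_rng(0)
w_dense = rng.normal(size=100)                 # dense weights kept throughout
X, y = rng.normal(size=(200, 100)), rng.normal(size=200)

for _ in range(50):
    mask = magnitude_mask(w_dense, sparsity=0.9)  # re-prune every step
    w_sparse = w_dense * mask                     # sparse model for the forward pass
    grad = X.T @ (X @ w_sparse - y) / len(y)      # gradient at the sparse model
    w_dense -= 0.01 * grad                        # feedback: update the dense copy

mask = magnitude_mask(w_dense, sparsity=0.9)
w_final = w_dense * mask                          # final sparse trained model
```

The point of the dense copy is that weights pruned at one step can reappear later if their magnitude grows, so early pruning mistakes are not locked in.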
|
Invited Talks
|
DACO seminar, ETH Zurich, 2024
website
|
Youth in High Dimensions, ICTP, Trieste, 2024
video
|
Delta seminar, University of Copenhagen, 2024
website
|
Graduate seminar in probability, ETH Zurich, 2023
|
Workshop on Spin Glasses, Les Diablerets, 2022
video
|
Teaching & Service
|
ETH Zurich, TA: Mathematics of Data Science (Fall 2021), Mathematics of Machine Learning (Spring 2022)
EPFL, TA: Artificial Neural Networks (Spring 2020, Spring 2021)
Supervising MSc theses at ETH Zurich: Carolin Heinzler (Fall 2023), Krish Agrawal and Ulysse Faure (Spring 2024)
Berkeley Math Circle (Fall 2024)
Reviewer: NeurIPS 2024 (Top reviewer), ICLR 2025
|