Ilya Tolstikhin
Cited by
Wasserstein auto-encoders
I Tolstikhin, O Bousquet, S Gelly, B Schölkopf
arXiv preprint arXiv:1711.01558, 2017
AdaGAN: Boosting generative models
I Tolstikhin, S Gelly, O Bousquet, CJ Simon-Gabriel, B Schölkopf
arXiv preprint arXiv:1701.02386, 2017
Towards a learning theory of cause-effect inference
D Lopez-Paz, K Muandet, B Schölkopf, I Tolstikhin
International Conference on Machine Learning, 1452-1461, 2015
From optimal transport to generative modeling: the VEGAN cookbook
O Bousquet, S Gelly, I Tolstikhin, CJ Simon-Gabriel, B Schölkopf
arXiv preprint arXiv:1705.07642, 2017
PAC-Bayes-empirical-Bernstein inequality
I Tolstikhin, Y Seldin
Advances in Neural Information Processing Systems 26 (NIPS 2013), 1-9, 2013
Minimax estimation of kernel mean embeddings
I Tolstikhin, BK Sriperumbudur, K Muandet
The Journal of Machine Learning Research 18 (1), 3002-3048, 2017
MLP-Mixer: An all-MLP architecture for vision
I Tolstikhin, N Houlsby, A Kolesnikov, L Beyer, X Zhai, T Unterthiner, ...
arXiv preprint arXiv:2105.01601, 2021
Minimax estimation of maximum mean discrepancy with radial kernels
IO Tolstikhin, BK Sriperumbudur, B Schölkopf
Advances in Neural Information Processing Systems 29, 1930-1938, 2016
On the latent space of Wasserstein auto-encoders
PK Rubenstein, B Schölkopf, I Tolstikhin
arXiv preprint arXiv:1802.03761, 2018
Differentially private database release via kernel mean embeddings
M Balog, I Tolstikhin, B Schölkopf
International Conference on Machine Learning, 414-422, 2018
Practical and consistent estimation of f-divergences
P Rubenstein, O Bousquet, J Djolonga, C Riquelme, IO Tolstikhin
Advances in Neural Information Processing Systems 32, 4070-4080, 2019
Predicting neural network accuracy from weights
T Unterthiner, D Keysers, S Gelly, O Bousquet, I Tolstikhin
arXiv preprint arXiv:2002.11448, 2020
Localized complexities for transductive learning
I Tolstikhin, G Blanchard, M Kloft
Conference on Learning Theory, 857-884, 2014
Competitive training of mixtures of independent deep generative models
F Locatello, D Vincent, I Tolstikhin, G Rätsch, S Gelly, B Schölkopf
arXiv preprint arXiv:1804.11130, 2018
Learning disentangled representations with Wasserstein auto-encoders
PK Rubenstein, B Schölkopf, I Tolstikhin
GeNet: Deep representations for metagenomics
M Rojas-Carulla, I Tolstikhin, G Luque, N Youngblut, R Ley, B Schölkopf
arXiv preprint arXiv:1901.11015, 2019
Permutational Rademacher complexity
I Tolstikhin, N Zhivotovskiy, G Blanchard
International Conference on Algorithmic Learning Theory, 209-223, 2015
When can unlabeled data improve the learning rate?
C Göpfert, S Ben-David, O Bousquet, S Gelly, I Tolstikhin, R Urner
Conference on Learning Theory, 1500-1518, 2019
What do neural networks learn when trained with random labels?
H Maennel, I Alabdulmohsin, I Tolstikhin, RJN Baldock, O Bousquet, ...
arXiv preprint arXiv:2006.10455, 2020
Probabilistic active learning of functions in structural causal models
PK Rubenstein, I Tolstikhin, P Hennig, B Schölkopf
arXiv preprint arXiv:1706.10234, 2017