Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. T Hoefler, D Alistarh, T Ben-Nun, N Dryden, A Peste. The Journal of Machine Learning Research 22 (1), 10882-11005, 2021. Cited by 798.
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. A Peste, E Iofinova, A Vladu, D Alistarh. Advances in Neural Information Processing Systems 34, 8557-8570, 2021. Cited by 70.
How Well Do Sparse ImageNet Models Transfer? E Iofinova, A Peste, M Kurtz, D Alistarh. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 45.
SSSE: Efficiently Erasing Samples from Trained Machine Learning Models. A Peste, D Alistarh, CH Lampert. arXiv preprint arXiv:2107.03860, 2021. Cited by 24.
CrAM: A Compression-Aware Minimizer. A Peste, A Vladu, E Kurtic, CH Lampert, D Alistarh. ICLR 2023, 2022. Cited by 9.
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization. D Kuznedelev, E Kurtic, E Iofinova, E Frantar, A Peste, D Alistarh. arXiv preprint arXiv:2308.02060, 2023. Cited by 8.
Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures. E Iofinova, A Peste, D Alistarh. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023. Cited by 8.
Knowledge Distillation Performs Partial Variance Reduction. M Safaryan, A Peste, D Alistarh. Advances in Neural Information Processing Systems (NeurIPS), 2023. Cited by 3.
An Explanatory Analysis of the Geometry of Latent Variables Learned by Variational Auto-Encoders. A Peste, L Malagò, S Sârbu. NIPS Bayesian Deep Learning Workshop, 2017. Cited by 2.
Learning in Variational Autoencoders with Kullback-Leibler and Renyi Integral Bounds. S Sârbu, R Volpi, A Peşte, L Malagò. arXiv preprint arXiv:1807.01889, 2018. Cited by 1.
ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment. P Halvachi, A Peste, D Alistarh, CH Lampert. arXiv preprint arXiv:2312.06872, 2023.
Efficiency and generalization of sparse neural networks. EA Peste. 2023.