Sharan Vaswani
Title
Cited by
Year
Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron
S Vaswani, F Bach, M Schmidt
The 22nd international conference on artificial intelligence and statistics …, 2019
Cited by 183 · 2019
Painless stochastic gradient: Interpolation, line-search, and convergence rates
S Vaswani, A Mishkin, I Laradji, M Schmidt, G Gidel, S Lacoste-Julien
Advances in neural information processing systems 32, 2019
Cited by 123 · 2019
Online Influence Maximization under Independent Cascade Model with Semi-Bandit Feedback
Z Wen, B Kveton, M Valko, S Vaswani
arXiv preprint arXiv:1605.06593, 2017
Cited by 109* · 2017
Stochastic Polyak step-size for SGD: An adaptive learning rate for fast convergence
N Loizou, S Vaswani, IH Laradji, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 1306-1314, 2021
Cited by 63 · 2021
Model-independent online learning for influence maximization
S Vaswani, B Kveton, Z Wen, M Ghavamzadeh, LVS Lakshmanan, ...
International Conference on Machine Learning, 3530-3539, 2017
Cited by 59* · 2017
Influence Maximization with Bandits
S Vaswani, L Lakshmanan, M Schmidt
arXiv preprint arXiv:1503.00024, 2015
Cited by 57 · 2015
Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
B Kveton, C Szepesvari, S Vaswani, Z Wen, M Ghavamzadeh, T Lattimore
Proceedings of the 36th International Conference on Machine Learning 97 …, 2019
Cited by 52 · 2019
Fast and furious convergence: Stochastic second order methods under interpolation
SY Meng, S Vaswani, IH Laradji, M Schmidt, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 2020
Cited by 20 · 2020
Old Dog Learns New Tricks: Randomized UCB for Bandit Problems
S Vaswani, A Mehrabian, A Durand, B Kveton
International Conference on Artificial Intelligence and Statistics, 2020
Cited by 19 · 2020
New insights into bootstrapping for bandits
S Vaswani, B Kveton, Z Wen, A Rao, M Schmidt, Y Abbasi-Yadkori
arXiv preprint arXiv:1805.09793, 2018
Cited by 18 · 2018
Adaptive influence maximization in social networks: Why commit when you can adapt?
S Vaswani, LVS Lakshmanan
arXiv preprint arXiv:1604.08171, 2016
Cited by 16 · 2016
Combining Bayesian optimization and Lipschitz optimization
MO Ahmed, S Vaswani, M Schmidt
Machine Learning 109 (1), 79-102, 2020
Cited by 15 · 2020
Adaptive Gradient Methods Converge Faster with Over-Parameterization (but you should do a line-search)
S Vaswani, I Laradji, F Kunstner, SY Meng, M Schmidt, S Lacoste-Julien
arXiv preprint arXiv:2006.06835, 2020
Cited by 14* · 2020
Horde of bandits using Gaussian Markov random fields
S Vaswani, M Schmidt, L Lakshmanan
Artificial Intelligence and Statistics, 690-699, 2017
Cited by 13 · 2017
Modeling non-progressive phenomena for influence propagation
VY Lou, S Bhagat, LVS Lakshmanan, S Vaswani
Proceedings of the second ACM conference on Online social networks, 131-138, 2014
Cited by 13 · 2014
Performance evaluation of medical imaging algorithms on Intel® MIC platform
J Khemka, M Gajjar, S Vaswani, N Vydyanathan, R Malladi, SV Vinutha
20th Annual International Conference on High Performance Computing, 396-404, 2013
Cited by 7 · 2013
Fast 3D salient region detection in medical images using GPUs
R Thota, S Vaswani, A Kale, N Vydyanathan
Machine Intelligence and Signal Processing, 11-26, 2016
Cited by 6 · 2016
SVRG Meets AdaGrad: Painless Variance Reduction
B Dubois-Taine, S Vaswani, R Babanezhad, M Schmidt, S Lacoste-Julien
arXiv preprint arXiv:2102.09645, 2021
Cited by 4 · 2021
To each optimizer a norm, to each norm its generalization
S Vaswani, R Babanezhad, J Gallego, A Mishkin, S Lacoste-Julien, ...
arXiv preprint arXiv:2006.06821, 2020
Cited by 4 · 2020
A general class of surrogate functions for stable and efficient reinforcement learning
S Vaswani, O Bachem, S Totaro, R Müller, S Garg, M Geist, MC Machado, ...
AISTATS, 8619-8649, 2022
Cited by 3* · 2022
Articles 1–20