Cho-Jui Hsieh
Verified email at cs.ucla.edu - Homepage
Title
Cited by
Year
LIBLINEAR: A library for large linear classification
RE Fan, KW Chang, CJ Hsieh, XR Wang, CJ Lin
Journal of Machine Learning Research 9, 1871-1874, 2008
9973 · 2008
ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models
PY Chen, H Zhang, Y Sharma, J Yi, CJ Hsieh
Proceedings of the 10th ACM workshop on artificial intelligence and security …, 2017
2097 · 2017
VisualBERT: A simple and performant baseline for vision and language
LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang
arXiv preprint arXiv:1908.03557, 2019
1881 · 2019
Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks
WL Chiang, X Liu, S Si, Y Li, S Bengio, CJ Hsieh
Proceedings of the 25th ACM SIGKDD international conference on knowledge …, 2019
1443 · 2019
Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent
X Lian, C Zhang, H Zhang, CJ Hsieh, W Zhang, J Liu
Advances in neural information processing systems 30, 2017
1290 · 2017
A dual coordinate descent method for large-scale linear SVM
CJ Hsieh, KW Chang, CJ Lin, SS Keerthi, S Sundararajan
Proceedings of the 25th international conference on Machine learning, 408-415, 2008
1211 · 2008
Large batch optimization for deep learning: Training BERT in 76 minutes
Y You, J Li, S Reddi, J Hseu, S Kumar, S Bhojanapalli, X Song, J Demmel, ...
arXiv preprint arXiv:1904.00962, 2019
1110 · 2019
Efficient neural network robustness certification with general activation functions
H Zhang, TW Weng, PY Chen, CJ Hsieh, L Daniel
Advances in neural information processing systems 31, 2018
849 · 2018
Towards fast computation of certified robustness for ReLU networks
L Weng, H Zhang, H Chen, Z Song, CJ Hsieh, L Daniel, D Boning, ...
International Conference on Machine Learning, 5276-5285, 2018
809 · 2018
Training and testing low-degree polynomial data mappings via linear SVM
YW Chang, CJ Hsieh, KW Chang, M Ringgaard, CJ Lin
Journal of Machine Learning Research 11 (4), 2010
746 · 2010
EAD: Elastic-net attacks to deep neural networks via adversarial examples
PY Chen, Y Sharma, H Zhang, J Yi, CJ Hsieh
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
686 · 2018
DynamicViT: Efficient vision transformers with dynamic token sparsification
Y Rao, W Zhao, B Liu, J Lu, J Zhou, CJ Hsieh
Advances in neural information processing systems 34, 13937-13949, 2021
615 · 2021
Evaluating the robustness of neural networks: An extreme value theory approach
TW Weng, H Zhang, PY Chen, J Yi, D Su, Y Gao, CJ Hsieh, L Daniel
arXiv preprint arXiv:1801.10578, 2018
551 · 2018
Towards robust neural networks via random self-ensemble
X Liu, M Cheng, H Zhang, CJ Hsieh
Proceedings of the European Conference on Computer Vision (ECCV), 369-385, 2018
517 · 2018
ImageNet training in minutes
Y You, Z Zhang, CJ Hsieh, J Demmel, K Keutzer
Proceedings of the 47th international conference on parallel processing, 1-10, 2018
493 · 2018
Query-efficient hard-label black-box attack: An optimization-based approach
M Cheng, T Le, PY Chen, J Yi, H Zhang, CJ Hsieh
arXiv preprint arXiv:1807.04457, 2018
481 · 2018
AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks
CC Tu, P Ting, PY Chen, S Liu, H Zhang, J Yi, CJ Hsieh, SM Cheng
Proceedings of the AAAI conference on artificial intelligence 33 (01), 742-749, 2019
446 · 2019
Sparse inverse covariance matrix estimation using quadratic approximation
CJ Hsieh, I Dhillon, P Ravikumar, M Sustik
Advances in neural information processing systems 24, 2011
429 · 2011
Towards stable and efficient training of verifiably robust neural networks
H Zhang, H Chen, C Xiao, S Gowal, R Stanforth, B Li, D Boning, CJ Hsieh
arXiv preprint arXiv:1906.06316, 2019
366 · 2019
Coordinate descent method for large-scale L2-loss linear support vector machines
KW Chang, CJ Hsieh, CJ Lin
Journal of Machine Learning Research 9 (7), 2008
359 · 2008
Articles 1–20