Yinfei Yang
Verified email at apple.com
Title · Cited by · Year
Universal sentence encoder
D Cer, Y Yang, S Kong, N Hua, N Limtiaco, RS John, N Constant, ...
arXiv preprint arXiv:1803.11175, 2018
Cited by 3037* · 2018
Scaling up visual and vision-language representation learning with noisy text supervision
C Jia, Y Yang, Y Xia, YT Chen, Z Parekh, H Pham, Q Le, YH Sung, Z Li, ...
International Conference on Machine Learning, 4904-4916, 2021
Cited by 2202 · 2021
Language-agnostic BERT sentence embedding
F Feng, Y Yang, D Cer, N Arivazhagan, W Wang
arXiv preprint arXiv:2007.01852, 2020
Cited by 623 · 2020
Scaling autoregressive models for content-rich text-to-image generation
J Yu, Y Xu, JY Koh, T Luong, G Baid, Z Wang, V Vasudevan, A Ku, Y Yang, ...
arXiv preprint arXiv:2206.10789, 2022
Cited by 545 · 2022
Multilingual universal sentence encoder for semantic retrieval
Y Yang, D Cer, A Ahmad, M Guo, J Law, N Constant, GH Abrego, S Yuan, ...
arXiv preprint arXiv:1907.04307, 2019
Cited by 449 · 2019
Cross-modal contrastive learning for text-to-image generation
H Zhang, JY Koh, J Baldridge, H Lee, Y Yang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 295 · 2021
PAWS-X: A cross-lingual adversarial dataset for paraphrase identification
Y Yang, Y Zhang, C Tar, J Baldridge
arXiv preprint arXiv:1908.11828, 2019
Cited by 272 · 2019
Single image 3D object detection and pose estimation for grasping
M Zhu, KG Derpanis, Y Yang, S Brahmbhatt, M Zhang, C Phillips, M Lecce, ...
2014 IEEE International Conference on Robotics and Automation (ICRA), 3936-3943, 2014
Cited by 252 · 2014
Sentence-t5: Scalable sentence encoders from pre-trained text-to-text models
J Ni, GH Ábrego, N Constant, J Ma, KB Hall, D Cer, Y Yang
arXiv preprint arXiv:2108.08877, 2021
Cited by 220 · 2021
A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature
B Nye, JJ Li, R Patel, Y Yang, IJ Marshall, A Nenkova, BC Wallace
Proceedings of the conference. Association for Computational Linguistics …, 2018
Cited by 214 · 2018
Large dual encoders are generalizable retrievers
J Ni, C Qu, J Lu, Z Dai, GH Ábrego, J Ma, VY Zhao, Y Luan, KB Hall, ...
arXiv preprint arXiv:2112.07899, 2021
Cited by 168 · 2021
LongT5: Efficient text-to-text transformer for long sequences
M Guo, J Ainslie, D Uthus, S Ontanon, J Ni, YH Sung, Y Yang
arXiv preprint arXiv:2112.07916, 2021
Cited by 167 · 2021
Learning Semantic Textual Similarity from Conversations
Y Yang, S Yuan, D Cer, S Kong, N Constant, P Pilar, H Ge, YH Sung, ...
The 3rd Workshop on Representation Learning for NLP (RepL4NLP), ACL, 2018
Cited by 166 · 2018
Learning cross-lingual sentence representations via a multi-task dual-encoder model
M Chidambaram, Y Yang, D Cer, S Yuan, YH Sung, B Strope, R Kurzweil
arXiv preprint arXiv:1810.12836, 2018
Cited by 141 · 2018
Zero-shot neural passage retrieval via domain-targeted synthetic question generation
J Ma, I Korotkov, Y Yang, K Hall, R McDonald
arXiv preprint arXiv:2004.14503, 2020
Cited by 125* · 2020
Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax
Y Yang, GH Abrego, S Yuan, M Guo, Q Shen, D Cer, YH Sung, B Strope, ...
arXiv preprint arXiv:1902.08564, 2019
Cited by 107 · 2019
Effective parallel corpus mining using bilingual sentence embeddings
M Guo, Q Shen, Y Yang, H Ge, D Cer, GH Abrego, K Stevens, N Constant, ...
arXiv preprint arXiv:1807.11906, 2018
Cited by 104 · 2018
Semantic analysis and helpfulness prediction of text for online product reviews
Y Yang, Y Yan, M Qiu, F Bao
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
Cited by 100 · 2015
MURAL: Multimodal, multitask representations across languages
A Jain, M Guo, K Srinivasan, T Chen, S Kudugunta, C Jia, Y Yang, ...
Findings of the Association for Computational Linguistics: EMNLP 2021, 3449-3463, 2021
Cited by 66* · 2021
Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators
C Chen, Y Yang, J Zhou, X Li, F Bao
Proceedings of the 2018 Conference of the North American Chapter of the …, 2018
Cited by 64 · 2018
Articles 1–20