Flavien Prost
Verified email at google.com
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
2556 · 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
1031 · 2024
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, E Chi
Advances in neural information processing systems 33, 728-740, 2020
382 · 2020
Debiasing embeddings for reduced gender bias in text classification
F Prost, N Thain, T Bolukbasi
First Workshop on Gender Bias in Natural Language Processing ACL 2019, 2019
82 · 2019
Understanding and improving fairness-accuracy trade-offs in multi-task learning
Y Wang, X Wang, A Beutel, F Prost, J Chen, EH Chi
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021
57 · 2021
Toward a better trade-off between performance and fairness with kernel-based distribution matching
F Prost, H Qian, Q Chen, EH Chi, J Chen, A Beutel
NeurIPS 2019 Workshop on Machine Learning with Guarantees, 2019
50* · 2019
Measuring Recommender System Effects with Simulated Users
S Yao, Y Halpern, N Thain, X Wang, K Lee, F Prost, A Beutel, EH Chi, J Chen
2nd Workshop on Fairness, Accountability, Transparency, Ethics and Society …, 2020
49 · 2020
Practical compositional fairness: Understanding fairness in multi-component recommender systems
X Wang, N Thain, A Sinha, F Prost, EH Chi, J Chen, A Beutel
Proceedings of the 14th ACM International Conference on Web Search and Data …, 2021
36* · 2021
Measuring model fairness under noisy covariates: A theoretical perspective
F Prost, P Awasthi, N Blumm, A Kumthekar, T Potter, L Wei, X Wang, ...
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 873-883, 2021
16 · 2021
FRAPPÉ: A Group Fairness Framework for Post-Processing Everything
A Tifrea, P Lahoti, B Packer, Y Halpern, A Beirami, F Prost
Forty-first International Conference on Machine Learning, 2024
6*
Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations
F Prost, B Packer, J Chen, L Wei, P Kremp, N Blumm, S Wang, T Doshi, ...
arXiv preprint arXiv:2210.07755, 2022
4 · 2022
Inducing group fairness in llm-based decisions
J Atwood, P Lahoti, A Balashankar, F Prost, A Beirami
arXiv preprint arXiv:2406.16738, 2024
2 · 2024
InfAlign: Inference-aware language model alignment
A Balashankar, Z Sun, J Berant, J Eisenstein, M Collins, A Hutter, J Lee, ...
arXiv preprint arXiv:2412.19792, 2024
2024
Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification
J Atwood, T Tian, B Packer, M Deodhar, J Chen, A Beutel, F Prost, ...
International Conference on Machine Learning (ICML) SCIS Workshop, 2023
2023
Articles 1–14