Tristan Hume
Anthropic
Verified email at anthropic.com
Title
Cited by
Year
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
arXiv preprint arXiv:2204.05862, 2022
Cited by 723 · 2022
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 602 · 2022
Language models (mostly) know what they know
S Kadavath, T Conerly, A Askell, T Henighan, D Drain, E Perez, ...
arXiv preprint arXiv:2207.05221, 2022
Cited by 237 · 2022
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 226 · 2022
Toy models of superposition
N Elhage, T Hume, C Olsson, N Schiefer, T Henighan, S Kravec, ...
arXiv preprint arXiv:2209.10652, 2022
Cited by 146 · 2022
Discovering language model behaviors with model-written evaluations
E Perez, S Ringer, K Lukošiūtė, K Nguyen, E Chen, S Heiner, C Pettit, ...
arXiv preprint arXiv:2212.09251, 2022
Cited by 131 · 2022
The capacity for moral self-correction in large language models
D Ganguli, A Askell, N Schiefer, TI Liao, K Lukošiūtė, A Chen, A Goldie, ...
arXiv preprint arXiv:2302.07459, 2023
Cited by 98 · 2023
Towards monosemanticity: Decomposing language models with dictionary learning
T Bricken, A Templeton, J Batson, B Chen, A Jermyn, T Conerly, N Turner, ...
Transformer Circuits Thread, 2, 2023
Cited by 62 · 2023
Measuring progress on scalable oversight for large language models
SR Bowman, J Hyun, E Perez, E Chen, C Pettit, S Heiner, K Lukošiūtė, ...
arXiv preprint arXiv:2211.03540, 2022
Cited by 44 · 2022
Scaling laws and interpretability of learning from repeated data
D Hernandez, T Brown, T Conerly, N DasSarma, D Drain, S El-Showk, ...
arXiv preprint arXiv:2205.10487, 2022
Cited by 39 · 2022
Measuring faithfulness in chain-of-thought reasoning
T Lanham, A Chen, A Radhakrishnan, B Steiner, C Denison, ...
arXiv preprint arXiv:2307.13702, 2023
Cited by 38 · 2023
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
CoRR, abs/2204.05862, 2022
Cited by 13 · 2022
Scaling laws and interpretability of learning from repeated data
D Hernandez, T Brown, T Conerly, N DasSarma, S El-Showk, N Elhage, Z Hatfield-Dodds, T Henighan, T Hume, ..., 2022
Cited by 11 · 2022
Specific versus general principles for constitutional ai
S Kundu, Y Bai, S Kadavath, A Askell, A Callahan, A Chen, A Goldie, ...
arXiv preprint arXiv:2310.13798, 2023
92023
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
arXiv preprint, posted online April 12, 2022
Cited by 8 · 2022
Eye Tracker Reviews: Pupil Labs, Tobii, Eye Tribe, Xlabs
T Hume
http://thume.ca/2016/03/24/eye-tracker-reviewspupil …, 2016
Cited by 2 · 2016
Articles 1–16