Xiang Li
PhD candidate, Stony Brook University
Verified email at stonybrook.edu - Homepage
Title
Cited by
Year
Peekaboo: Text to image diffusion models are zero-shot segmentors
R Burgert, K Ranasinghe, X Li, MS Ryoo
arXiv preprint arXiv:2211.13224, 2022
36 / 2022
Does self-supervised learning really improve reinforcement learning from pixels?
X Li, J Shang, S Das, M Ryoo
Advances in Neural Information Processing Systems 35, 30865-30881, 2022
28 / 2022
Starformer: Transformer with state-action-reward representations for visual reinforcement learning
J Shang, K Kahatapitiya, X Li, MS Ryoo
European conference on computer vision, 462-479, 2022
28 / 2022
Crossway diffusion: Improving diffusion-based visuomotor policy via self-supervised learning
X Li, V Belagali, J Shang, MS Ryoo
2024 IEEE International Conference on Robotics and Automation (ICRA), 16841 …, 2024
17 / 2024
Llara: Supercharging robot learning data for vision-language policy
X Li, C Mata, J Park, K Kahatapitiya, YS Jang, J Shang, K Ranasinghe, ...
arXiv preprint arXiv:2406.20095, 2024
11 / 2024
Starformer: Transformer with state-action-reward representations for robot learning
J Shang, X Li, K Kahatapitiya, YC Lee, MS Ryoo
IEEE transactions on pattern analysis and machine intelligence 45 (11 …, 2022
9 / 2022
Limited data, unlimited potential: A study on vits augmented by masked autoencoders
S Das, T Jain, D Reilly, P Balaji, S Karmakar, S Marjit, X Li, A Das, ...
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2024
8 / 2024
Triton: Neural neural textures for better sim2real
RD Burgert, J Shang, X Li, MS Ryoo
6th Annual Conference on Robot Learning, 2022
8* / 2022
Diffusion illusions: Hiding images in plain sight
R Burgert, X Li, A Leite, K Ranasinghe, M Ryoo
ACM SIGGRAPH 2024 Conference Papers, 1-11, 2024
6 / 2024
Understanding Long Videos in One Multimodal Language Model Pass
K Ranasinghe, X Li, K Kahatapitiya, MS Ryoo
arXiv preprint arXiv:2403.16998, 2024
3 / 2024
Articles 1–10