Kyungmin Kim
Title · Cited by · Year
Rethinking the Self-Attention in Vision Transformers
K Kim, B Wu, X Dai, P Zhang, Z Yan, P Vajda, SJ Kim
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 39 · 2021
Teaching Machines to Understand Baseball Games: Large-Scale Baseball Video Database for Multiple Video Understanding Tasks
M Shim, Y Hwi Kim, K Kim, S Joo Kim
Proceedings of the European Conference on Computer Vision (ECCV), 404-420, 2018
Cited by 17 · 2018
Winning the CVPR'2021 Kinetics-GEBD Challenge: Contrastive Learning Approach
H Kang, J Kim, K Kim, T Kim, SJ Kim
arXiv preprint arXiv:2106.11549, 2021
Cited by 16 · 2021
CAG-QIL: Context-Aware Actionness Grouping via Q Imitation Learning for Online Temporal Action Localization
H Kang, K Kim, Y Ko, SJ Kim
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021
Cited by 12 · 2021
Selective Perception: Learning Concise State Descriptions for Language Model Actors
K Nottingham, Y Razeghi, K Kim, JB Lanier, P Baldi, R Fox, S Singh
Proceedings of the 2024 Conference of the North American Chapter of the …, 2024
Cited by 11* · 2024
Reinforcement Learning from Delayed Observations via World Models
A Karamzade, K Kim, M Kalsi, R Fox
The 1st Reinforcement Learning Conference (RLC), 2024
Cited by 8 · 2024
An Investigation on Hardware-Aware Vision Transformer Scaling
C Li, K Kim, B Wu, P Zhang, H Zhang, X Dai, P Vajda, Y Lin
ACM Transactions on Embedded Computing Systems 23 (3), 1-19, 2024
Cited by 1 · 2024
Realizable Continuous-Space Shields for Safe Reinforcement Learning
K Kim, D Corsi, A Rodriguez, JB Lanier, B Parellada, P Baldi, C Sánchez, ...
arXiv preprint arXiv:2410.02038, 2024
2024
Make the Pertinent Salient: Task-Relevant Reconstruction for Visual Control with Distraction
K Kim, C Fowlkes, R Fox
Workshop on Training Agents with Foundation Models at RLC 2024, 2024
2024