Ruben Glatt
Title · Cited by · Year
Simultaneously Learning and Advising in Multiagent Reinforcement Learning
FL da Silva, R Glatt, AHR Costa
Proc. 16th International Conference on Autonomous Agents and Multiagent …, 2017
Cited by: 149 · Year: 2017
Symbolic Regression via Neural-Guided Genetic Programming Population Seeding
T Mundhenk, M Landajuela, R Glatt, C Santiago, B Petersen
Advances in Neural Information Processing Systems 34, 2021
Cited by: 99* · Year: 2021
Discovering symbolic policies with deep reinforcement learning
M Landajuela, BK Petersen, S Kim, CP Santiago, R Glatt, N Mundhenk, ...
International Conference on Machine Learning, 5979-5989, 2021
Cited by: 92 · Year: 2021
Towards Knowledge Transfer in Deep Reinforcement Learning
R Glatt, FL Silva, AHR Costa
5th Brazilian Conference on Intelligent Systems (BRACIS), 91-96, 2016
Cited by: 52 · Year: 2016
MOO-MDP: An Object-Oriented Representation for Cooperative Multiagent Reinforcement Learning
FL Da Silva, R Glatt, AHR Costa
IEEE Transactions on Cybernetics 49 (2), 567-579, 2019
Cited by: 38 · Year: 2019
A Unified Framework for Deep Symbolic Regression
M Landajuela, C Lee, J Yang, R Glatt, CP Santiago, I Aravena, ...
Advances in Neural Information Processing Systems, 2022
Cited by: 33 · Year: 2022
Decaf: deep case-based policy inference for knowledge transfer in reinforcement learning
R Glatt, FL Da Silva, RA da Costa Bianchi, AHR Costa
Expert Systems with Applications 156, 113420, 2020
Cited by: 27 · Year: 2020
Increasing performance of electric vehicles in ride-hailing services using deep reinforcement learning
JF Pettit, R Glatt, JR Donadee, BK Petersen
arXiv preprint arXiv:1912.03408, 2019
Cited by: 19 · Year: 2019
Improving exploration in policy gradient search: Application to symbolic optimization
M Landajuela, BK Petersen, SK Kim, CP Santiago, R Glatt, TN Mundhenk, ...
arXiv preprint arXiv:2107.09158, 2021
Cited by: 16* · Year: 2021
Collaborative energy demand response with decentralized actor and centralized critic
R Glatt, FL Silva, B Soper, WA Dawson, E Rusu, RA Goldhahn
International Conference on Systems for Energy-Efficient Buildings, Cities …, 2021
Cited by: 11 · Year: 2021
Deep Symbolic Optimization for Electric Component Sizing in Fixed Topology Power Converters
R Glatt, FL Silva, C Huang, L Xue, M Wang, F Chang, V Bui, YL Murphey, ...
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States), 2021
Cited by: 11 · Year: 2021
Improving Deep Reinforcement Learning with knowledge transfer (Extended abstract)
R Glatt, AHR Costa
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence …, 2017
Cited by: 11* · Year: 2017
Policy Reuse in Deep Reinforcement Learning (Student abstract)
R Glatt, AHR Costa
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence …, 2017
Cited by: 10* · Year: 2017
Case-based Policy Inference for Transfer in Reinforcement Learning
R Glatt, FL Silva, AHR Costa
Workshop on Scaling-Up Reinforcement Learning (SURL) at the 28th European …, 2017
Cited by: 8 · Year: 2017
An intelligent system for automatic selection of dc-dc converter topology with optimal design
S Wang, Y Murphey, W Su, M Wang, V Bui, F Chang, C Huang, L Xue, ...
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States), 2022
Cited by: 7 · Year: 2022
Interpretable symbolic regression for data science: Analysis of the 2022 competition
FO de França, M Virgolin, M Kommenda, MS Majumder, M Cranmer, ...
arXiv preprint arXiv:2304.01117, 2023
Cited by: 6 · Year: 2023
Deep neural network-based surrogate model for optimal component sizing of power converters using deep reinforcement learning
VH Bui, F Chang, W Su, M Wang, YL Murphey, FL Da Silva, C Huang, ...
IEEE Access 10, 78702-78712, 2022
Cited by: 6 · Year: 2022
A study on efficient reinforcement learning through knowledge transfer
R Glatt, FL da Silva, RA da Costa Bianchi, AHR Costa
Federated and Transfer Learning, 329-356, 2022
Cited by: 4 · Year: 2022
A Framework to Discover and Reuse Object-Oriented Options in Reinforcement Learning
RC Bonini, FL da Silva, R Glatt, E Spina, AHR Costa
7th Brazilian Conference on Intelligent Systems (BRACIS), 2018
Cited by: 4 · Year: 2018
Leveraging Language Models to Efficiently Learn Symbolic Optimization Solutions
FL da Silva, A Goncalves, S Nguyen, D Vashchenko, R Glatt, T Desautels, ...
Adaptive and Learning Agents Workshop 2022, 2022
Cited by: 3 · Year: 2022
Articles 1–20