Carlo D'Eramo
Independent Research Group Leader - LiteRL @ Hessian.AI | TU Darmstadt
Verified email at robot-learning.de · Homepage
Title
Cited by
Year
Sharing Knowledge in Multi-Task Deep Reinforcement Learning
C D'Eramo, D Tateo, A Bonarini, M Restelli, J Peters
International Conference on Learning Representations (ICLR), 2020
36 · 2020
Boosted Fitted Q-Iteration
S Tosatto, M Pirotta, C D'Eramo, M Restelli
Proceedings of The 34th International Conference on Machine Learning, 3434-3443, 2017
32 · 2017
Estimating the Maximum Expected Value through Gaussian Approximation
C D'Eramo, A Nuara, M Restelli
Proceedings of The 33rd International Conference on Machine Learning, 1032-1040, 2016
32 · 2016
MushroomRL: Simplifying reinforcement learning research
C D'Eramo, D Tateo, A Bonarini, M Restelli, J Peters
Journal of Machine Learning Research (JMLR) 22, 1-5, 2020
16 · 2020
Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems
C D'Eramo, A Nuara, M Pirotta, M Restelli
AAAI Conference on Artificial Intelligence, 1840-1846, 2017
16 · 2017
Self-Paced Deep Reinforcement Learning
P Klink, C D'Eramo, J Peters, J Pajarinen
Advances in Neural Information Processing Systems (NeurIPS), 2020
9 · 2020
Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning
AS Morgan, D Nandha, G Chalvatzaki, C D'Eramo, AM Dollar, J Peters
International Conference on Robotics and Automation (ICRA), 2021
5 · 2021
Exploiting Action-Value Uncertainty to Drive Exploration in Reinforcement Learning
C D'Eramo, A Cini, M Restelli
2019 International Joint Conference on Neural Networks (IJCNN), 1-8, 2019
5 · 2019
Composable Energy Policies for Reactive Motion Generation and Reinforcement Learning
J Urain, A Li, P Liu, C D'Eramo, J Peters
Robotics: Science and Systems (RSS), 2021
3 · 2021
Deep Reinforcement Learning with Weighted Q-Learning
A Cini, C D'Eramo, J Peters, C Alippi
arXiv preprint arXiv:2003.09280, 2020
3 · 2020
Long-term visitation value for deep exploration in sparse reward reinforcement learning
S Parisi, D Tateo, M Hensel, C D'Eramo, J Peters, J Pajarinen
arXiv preprint arXiv:2001.00119, 2020
3 · 2020
A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning
P Klink, H Abdulsamad, B Belousov, C D'Eramo, J Peters, J Pajarinen
Journal of Machine Learning Research (JMLR) 22, 1-52, 2021
2 · 2021
Generalized Mean Estimation in Monte-Carlo Tree Search
T Dam, P Klink, C D'Eramo, J Peters, J Pajarinen
International Joint Conference on Artificial Intelligence (IJCAI), 2020
2 · 2020
Multi-Channel Interactive Reinforcement Learning for Sequential Tasks
D Koert, M Kircher, V Salikutluk, C D'Eramo, J Peters
Frontiers in Robotics and AI 7, 97, 2020
1 · 2020
Exploration Driven by an Optimistic Bellman Equation
S Tosatto, C D'Eramo, J Pajarinen, M Restelli, J Peters
2019 International Joint Conference on Neural Networks (IJCNN), 1-8, 2019
1 · 2019
Exploiting structure and uncertainty of Bellman updates in Markov decision processes
D Tateo, C D'Eramo, A Nuara, M Restelli, A Bonarini
Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE …), 2017
1 · 2017
Convex Regularization in Monte-Carlo Tree Search
TQ Dam, C D'Eramo, J Peters, J Pajarinen
International Conference on Machine Learning, 2365-2375, 2021
2021
Gaussian Approximation for Bias Reduction in Q-Learning
C D'Eramo, A Cini, A Nuara, M Pirotta, C Alippi, J Peters, M Restelli
Journal of Machine Learning Research 22 (277), 1-51, 2021
2021
On the exploitation of uncertainty to improve Bellman updates and exploration in Reinforcement Learning
C D'Eramo
Italy, 2019
2019
On the use of deep Boltzmann machines for road signs classification
C D'Eramo
University of Illinois at Chicago, 2015
2015
Articles 1–20