Kavosh Asadi
Research Scientist, Amazon Web Services
Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning
JD Williams, K Asadi, G Zweig
Annual Meeting of the Association for Computational Linguistics, 665-677, 2017
An Alternative Softmax Operator for Reinforcement Learning
K Asadi, ML Littman
Proceedings of the 34th International Conference on Machine Learning, 243-252, 2017
Lipschitz Continuity in Model-based Reinforcement Learning
K Asadi, D Misra, ML Littman
Proceedings of the 35th International Conference on Machine Learning, 2018
DeepMellow: Removing the Need for a Target Network in Deep Q-Learning
S Kim, K Asadi, M Littman, G Konidaris
Proceedings of the Twenty Eighth International Joint Conference on …, 2019
State abstraction as compression in apprenticeship learning
D Abel, D Arumugam, K Asadi, Y Jinnai, ML Littman, LLS Wong
Proceedings of the AAAI Conference on Artificial Intelligence 33, 3134-3142, 2019
Mean Actor Critic
K Asadi, C Allen, M Roderick, A Mohamed, G Konidaris, M Littman
arXiv preprint arXiv:1709.00503, 2017
Sample-efficient Reinforcement Learning for Dialog Control
K Asadi, JD Williams
arXiv preprint arXiv:1612.06000, 2016
Combating the Compounding-Error Problem with a Multi-step Model
K Asadi, D Misra, S Kim, ML Littman
arXiv preprint arXiv:1905.13320, 2019
Equivalence between Wasserstein and value-aware model-based reinforcement learning
K Asadi, E Cater, D Misra, ML Littman
FAIM Workshop on Prediction and Generative Modeling in Reinforcement Learning 3, 2018
Strengths, weaknesses, and combinations of model-based and model-free reinforcement learning
K Asadi
Department of Computing Science, University of Alberta, 2015
Deep radial-basis value functions for continuous control
K Asadi, N Parikh, RE Parr, GD Konidaris, ML Littman
Proceedings of the AAAI Conference on Artificial Intelligence, 2021
Learning State Abstractions for Transfer in Continuous Control
K Asadi, D Abel, ML Littman
arXiv preprint arXiv:2002.05518, 2020
Lipschitz Lifelong Reinforcement Learning
E Lecarpentier, D Abel, K Asadi, Y Jinnai, E Rachelson, ML Littman
arXiv preprint arXiv:2001.05411, 2020
Mitigating Planner Overfitting in Model-Based Reinforcement Learning
D Arumugam, D Abel, K Asadi, N Gopalan, C Grimm, JK Lee, L Lehnert, ...
arXiv preprint arXiv:1812.01129, 2018
Continuous Doubly Constrained Batch Reinforcement Learning
R Fakoor, J Mueller, K Asadi, P Chaudhari, AJ Smola
arXiv preprint arXiv:2102.09225, 2021
Towards a Simple Approach to Multi-step Model-based Reinforcement Learning
K Asadi, E Cater, D Misra, ML Littman
arXiv preprint arXiv:1811.00128, 2018
Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback
I Shah, D Halpern, K Asadi, ML Littman
arXiv preprint arXiv:2109.07054, 2021