Thomas M. Moerland

According to our database, Thomas M. Moerland authored at least 32 papers between 2005 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
World Models Increase Autonomy in Reinforcement Learning.
CoRR, 2024

Reinforcement Learning for Sustainable Energy: A Survey.
CoRR, 2024

Towards General Negotiation Strategies with End-to-End Reinforcement Learning.
CoRR, 2024

Slot Structured World Models.
CoRR, 2024

Explicitly Disentangled Representations in Object-Centric Learning.
CoRR, 2024

What Model Does MuZero Learn?
Proceedings of the ECAI 2024 - 27th European Conference on Artificial Intelligence, 19-24 October 2024, Santiago de Compostela, Spain, 2024

2023
Are LSTMs good few-shot learners?
Mach. Learn., November, 2023

Model-based Reinforcement Learning: A Survey.
Found. Trends Mach. Learn., 2023

EduGym: An Environment Suite for Reinforcement Learning Education.
CoRR, 2023

What model does MuZero learn?
CoRR, 2023

First Go, then Post-Explore: The Benefits of Post-Exploration in Intrinsic Motivation.
Proceedings of the 15th International Conference on Agents and Artificial Intelligence, 2023

Two-Memory Reinforcement Learning.
Proceedings of the IEEE Conference on Games, 2023

Continuous Episodic Control.
Proceedings of the IEEE Conference on Games, 2023

2022
A Unifying Framework for Reinforcement Learning and Planning.
Frontiers Artif. Intell., 2022

When to Go, and When to Explore: The Benefit of Post-Exploration in Intrinsic Motivation.
CoRR, 2022

On Credit Assignment in Hierarchical Reinforcement Learning.
CoRR, 2022

2021
The Intersection of Planning and Learning.
PhD thesis, 2021

Visualizing MuZero Models.
CoRR, 2021

2020
Model-based Reinforcement Learning: A Survey.
CoRR, 2020

A Framework for Reinforcement Learning and Planning.
CoRR, 2020

The Second Type of Uncertainty in Monte Carlo Tree Search.
CoRR, 2020

Think Too Fast Nor Too Slow: The Computational Trade-off Between Planning And Reinforcement Learning.
CoRR, 2020

2018
RRT-CoLearn: Towards Kinodynamic Planning Without Numerical Trajectory Optimization.
IEEE Robotics Autom. Lett., 2018

Emotion in reinforcement learning agents and robots: a survey.
Mach. Learn., 2018

The Potential of the Return Distribution for Exploration in RL.
CoRR, 2018

A0C: Alpha Zero in Continuous Action Space.
CoRR, 2018

Monte Carlo Tree Search for Asymmetric Trees.
CoRR, 2018

2017
Efficient exploration with Double Uncertain Value Networks.
CoRR, 2017

Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning.
CoRR, 2017

2016
Knowing What You Don't Know - Novelty Detection for Action Recognition in Personal Robots.
Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016), 2016

Fear and Hope Emerge from Anticipation in Model-Based Reinforcement Learning.
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016

2005
Fate of unattended fearful faces in the amygdala is determined by both attentional resources and cognitive modulation.
NeuroImage, 2005
