Mirco Mutti

According to our database, Mirco Mutti authored at least 21 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
How to Scale Inverse RL to Large State Spaces? A Provably Efficient Approach.
CoRR, 2024

A Framework for Partially Observed Reward-States in RLHF.
CoRR, 2024

The Limits of Pure Exploration in POMDPs: When the Observation Entropy is Enough.
RLJ, 2024

How to Explore with Belief: State Entropy Maximization in POMDPs.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Test-Time Regret Minimization in Meta Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Unsupervised reinforcement learning via state entropy maximization.
PhD thesis, 2023

Convex Reinforcement Learning in Finite Trials.
J. Mach. Learn. Res., 2023

Persuading Farsighted Receivers in MDPs: the Power of Honesty.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

A Tale of Sampling and Estimation in Discounted Reinforcement Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Challenging Common Assumptions in Convex Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The Importance of Non-Markovianity in Maximum State Entropy Exploration.
Proceedings of the International Conference on Machine Learning, 2022

Reward-Free Policy Space Compression for Reinforcement Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

Unsupervised Reinforcement Learning in Multiple Environments.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
A Policy Gradient Method for Task-Agnostic Exploration.
CoRR, 2020

An Intrinsically-Motivated Approach for Learning Highly Exploring and Fast Mixing Policies.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2018
Configurable Markov Decision Processes.
Proceedings of the 35th International Conference on Machine Learning, 2018
