Phillip Swazinna

Orcid: 0000-0003-4667-9584

According to our database, Phillip Swazinna authored at least 8 papers between 2021 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2023
Policy Regularization for Model-Based Offline Reinforcement Learning.
PhD thesis, 2023

Learning Control Policies for Variable Objectives from Offline Data.
Proceedings of the IEEE Symposium Series on Computational Intelligence, 2023

User-Interactive Offline Reinforcement Learning.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Automatic Trade-off Adaptation in Offline RL.
Proceedings of the 31st European Symposium on Artificial Neural Networks, 2023

2022
Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning.
CoRR, 2022

2021
Overcoming model bias for robust offline deep reinforcement learning.
Eng. Appl. Artif. Intell., 2021

Measuring Data Quality for Dataset Selection in Offline Reinforcement Learning.
Proceedings of the IEEE Symposium Series on Computational Intelligence, 2021

Behavior Constraining in Weight Space for Offline Reinforcement Learning.
Proceedings of the 29th European Symposium on Artificial Neural Networks, 2021
