Noah Y. Siegel

ORCID: 0000-0002-5746-117X

According to our database, Noah Y. Siegel authored at least 19 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Learning agile soccer skills for a bipedal robot with deep reinforcement learning.
Sci. Robotics, 2024

On scalable oversight with weak LLMs judging strong LLMs.
CoRR, 2024

The Effect of Model Size on LLM Post-hoc Explainability via LIME.
CoRR, 2024

The Probabilities Also Matter: A More Faithful Metric for Faithfulness of Free-Text Explanations in Large Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 2024

2022
From motor control to team play in simulated humanoid football.
Sci. Robotics, 2022

Solving math word problems with process- and outcome-based feedback.
CoRR, 2022

Imitate and Repurpose: Learning Reusable Robot Movement Skills From Human and Animal Behaviors.
CoRR, 2022

2021
Data-efficient Hindsight Off-policy Option Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

Towards Real Robot Learning in the Wild: A Case Study in Bipedal Locomotion.
Proceedings of the Conference on Robot Learning, 8-11 November 2021, London, UK, 2021

2020
"What, not how": Solving an under-actuated insertion task from scratch.
CoRR, 2020

Simple Sensor Intentions for Exploration.
CoRR, 2020

Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning.
CoRR, 2020

Compositional Transfer in Hierarchical Reinforcement Learning.
Proceedings of the Robotics: Science and Systems XVI, 2020

Critic Regularized Regression.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models.
CoRR, 2019

Regularized Hierarchical Policies for Compositional Transfer in Robotics.
CoRR, 2019

Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models.
Proceedings of the 3rd Annual Conference on Robot Learning, 2019

