Dilip Arumugam

According to our database, Dilip Arumugam authored at least 30 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Satisficing Exploration for Deep Reinforcement Learning.
CoRR, 2024

Exploration Unbound.
CoRR, 2024

2023
Social Contract AI: Aligning AI Assistants with Implicit Group Norms.
CoRR, 2023

Hindsight-DICE: Stable Credit Assignment for Deep Reinforcement Learning.
CoRR, 2023

Shattering the Agent-Environment Interface for Fine-Tuning Inclusive Language Models.
CoRR, 2023

Bayesian Reinforcement Learning with Limited Cognitive Load.
CoRR, 2023

Cultural reinforcement learning: a framework for modeling cumulative culture on a limited channel.
Proceedings of the 45th Annual Meeting of the Cognitive Science Society, 2023

2022
Inclusive Artificial Intelligence.
CoRR, 2022

On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning.
CoRR, 2022

Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning.
CoRR, 2022

Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Planning to the Information Horizon of BAMDPs via Epistemic State Abstraction.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Bad-Policy Density: A Measure of Reinforcement Learning Hardness.
CoRR, 2021

An Information-Theoretic Perspective on Credit Assignment in Reinforcement Learning.
CoRR, 2021

The Value of Information When Deciding What to Learn.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Deciding What to Learn: A Rate-Distortion Approach.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Randomized Value Functions via Posterior State-Abstraction Sampling.
CoRR, 2020

Reparameterized Variational Divergence Minimization for Stable Imitation.
CoRR, 2020

Flexible and Efficient Long-Range Planning Through Curious Exploration.
Proceedings of the 37th International Conference on Machine Learning, 2020

Value Preserving State-Action Abstractions.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Deep Reinforcement Learning from Policy-Dependent Human Feedback.
CoRR, 2019

Grounding natural language instructions to semantic goal representations for abstraction and generalization.
Auton. Robots, 2019

State Abstraction as Compression in Apprenticeship Learning.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
Mitigating Planner Overfitting in Model-Based Reinforcement Learning.
CoRR, 2018

Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications.
Proceedings of the Robotics: Science and Systems XIV, 2018

State Abstractions for Lifelong Reinforcement Learning.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Latent Attention Networks.
CoRR, 2017

Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities.
Proceedings of the Robotics: Science and Systems XIII, 2017

A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions.
Proceedings of the First Workshop on Language Grounding for Robotics, 2017

2015
Grounding English Commands to Reward Functions.
Proceedings of the Robotics: Science and Systems XI, Sapienza University of Rome, 2015

