Amir-massoud Farahmand

Affiliations:
  • Vector Institute, Toronto, ON, Canada
  • University of Toronto, ON, Canada
  • Mitsubishi Electric Research Laboratories (MERL) (former affiliation)


According to our database, Amir-massoud Farahmand authored at least 63 papers between 2004 and 2024.

Bibliography

2024
Deflated Dynamics Value Iteration.
CoRR, 2024

PID Accelerated Temporal Difference Algorithms.
CoRR, 2024

When does Self-Prediction help? Understanding Auxiliary Tasks in Reinforcement Learning.
CoRR, 2024

Dissecting Deep RL with High Update Ratios: Combatting Value Overestimation and Divergence.
CoRR, 2024

Maximum Entropy Model Correction in Reinforcement Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods.
Trans. Mach. Learn. Res., 2023

Improving Adversarial Transferability via Model Alignment.
CoRR, 2023

Efficient and Accurate Optimal Transport with Mirror Descent and Conjugate Gradients.
CoRR, 2023

λ-AC: Learning latent decision-aware models for reinforcement learning in continuous state-spaces.
CoRR, 2023

Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Understanding and mitigating the limitations of prioritized experience replay.
Proceedings of the Uncertainty in Artificial Intelligence, 2022

Operator Splitting Value Iteration.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Value Gradient weighted Model-Based Reinforcement Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Learning Object-Oriented Dynamics for Planning from Text.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
Deep Reinforcement Learning for Online Control of Stochastic Partial Differential Equations.
CoRR, 2021

PID Accelerated Value Iteration Algorithm.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
The act of remembering: a study in partially observable reinforcement learning.
CoRR, 2020

Beyond Prioritized Replay: Sampling States in Model-Based RL via Simulated Priorities.
CoRR, 2020

Adversarial Robustness through Regularization: A Second-Order Approach.
CoRR, 2020

Policy-Aware Model Learning for Policy Gradient Methods.
CoRR, 2020

An implicit function learning approach for parametric modal regression.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Frequency-based Search-control in Dyna.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
Value Function in Frequency Domain and the Characteristic Value Iteration Algorithm.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Improving Skin Condition Classification with a Visual Symptom Checker Trained Using Reinforcement Learning.
Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2019, 2019

Hill Climbing on Value Estimates for Search-control in Dyna.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019

Dimensionality Reduction for Representing the Knowledge of Probabilistic Models.
Proceedings of the 7th International Conference on Learning Representations, 2019

2018
Improving Skin Condition Classification with a Question Answering Model.
CoRR, 2018

Iterative Value-Aware Model Learning.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Reinforcement Learning with Function-Valued Action Spaces for Partial Differential Equation Control.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Attentional Network for Visual Object Detection.
CoRR, 2017

Learning to regulate rolling ball motion.
Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, 2017

Random Projection Filter Bank for Time Series Data.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Deep reinforcement learning for partial differential equation control.
Proceedings of the 2017 American Control Conference, 2017

Value-Aware Loss Function for Model-based Reinforcement Learning.
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017

2016
Regularized Policy Iteration with Nonparametric Function Spaces.
J. Mach. Learn. Res., 2016

Learning to control partial differential equations: Regularized Fitted Q-Iteration approach.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016

Learning-based modular indirect adaptive control for a class of nonlinear systems.
Proceedings of the 2016 American Control Conference, 2016

Truncated Approximate Dynamic Programming with Task-Dependent Terminal Value.
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016

2015
Classification-Based Approximate Policy Iteration.
IEEE Trans. Autom. Control., 2015

Reports of the AAAI 2014 Conference Workshops.
AI Mag., 2015

Approximate MaxEnt Inverse Optimal Control and Its Application for Mental Simulation of Human Interactions.
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015

2014
Classification-based Approximate Policy Iteration: Experiments and Extended Discussions.
CoRR, 2014

Sample-based approximate regularization.
Proceedings of the 31st International Conference on Machine Learning, 2014

2013
Learning from Limited Demonstrations.
Proceedings of the Advances in Neural Information Processing Systems 26: Annual Conference on Neural Information Processing Systems 2013, 2013

Bellman Error Based Feature Generation using Random Projections on Sparse Spaces.
Proceedings of the Advances in Neural Information Processing Systems 26: Annual Conference on Neural Information Processing Systems 2013, 2013

2012
Value Pursuit Iteration.
Proceedings of the Advances in Neural Information Processing Systems 25: Annual Conference on Neural Information Processing Systems 2012, 2012

2011
Model selection in reinforcement learning.
Mach. Learn., 2011

Action-Gap Phenomenon in Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 24: Annual Conference on Neural Information Processing Systems 2011, 2011

2010
Interaction of Culture-Based Learning and Cooperative Co-Evolution and its Application to Automatic Behavior-Based System Design.
IEEE Trans. Evol. Comput., 2010

Error Propagation for Approximate Policy and Value Iteration.
Proceedings of the Advances in Neural Information Processing Systems 23: Annual Conference on Neural Information Processing Systems 2010, 2010

Robust Jacobian estimation for uncalibrated visual servoing.
Proceedings of the IEEE International Conference on Robotics and Automation, 2010

2009
Model-based and model-free reinforcement learning for visual servoing.
Proceedings of the 2009 IEEE International Conference on Robotics and Automation, 2009

Towards Learning Robotic Reaching and Pointing: An Uncalibrated Visual Servoing Approach.
Proceedings of the Sixth Canadian Conference on Computer and Robot Vision, 2009

Regularized Fitted Q-Iteration for planning in continuous-space Markovian decision problems.
Proceedings of the 2009 American Control Conference, 2009

2008
Regularized Policy Iteration.
Proceedings of the Advances in Neural Information Processing Systems 21, 2008

Regularized Fitted Q-Iteration: Application to Planning.
Proceedings of the Recent Advances in Reinforcement Learning, 8th European Workshop, 2008

2007
Global visual-motor estimation for uncalibrated visual servoing.
Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007

Manifold-adaptive dimension estimation.
Proceedings of the 24th International Conference on Machine Learning, 2007

2006
Channel Assignment using Chaotic Simulated Annealing Enhanced Hopfield Neural Network.
Proceedings of the International Joint Conference on Neural Networks, 2006

Learning to Coordinate Behaviors in Soft Behavior-Based Systems Using Reinforcement Learning.
Proceedings of the International Joint Conference on Neural Networks, 2006

Hybrid Behavior Co-evolution and Structure Learning in Behavior-based Systems.
Proceedings of the IEEE International Conference on Evolutionary Computation, 2006

2005
Locally Optimal Takagi-Sugeno Fuzzy Controllers.
Proceedings of the 44th IEEE Conference on Decision and Control and the 8th European Control Conference, 2005

2004
Behavior hierarchy learning in a behavior-based system using reinforcement learning.
Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004

