Bo Liu

ORCID: 0000-0003-2519-6196

Affiliations:
  • Auburn University, AL, USA
  • University of Massachusetts, Amherst, MA, USA (former)


According to our database, Bo Liu authored at least 56 papers between 2010 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Robust Mobile Two-Factor Authentication Leveraging Acoustic Fingerprinting.
IEEE Trans. Mob. Comput., December 2024

A Critical Review of Inductive Logic Programming Techniques for Explainable AI.
IEEE Trans. Neural Networks Learn. Syst., August 2024

From Past to Future: Rethinking Eligibility Traces.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2022
TDM: Trustworthy Decision-Making Via Interpretability Enhancement.
IEEE Trans. Emerg. Top. Comput. Intell., 2022

Model credibility revisited: Concepts and considerations for appropriate trust.
J. Simulation, 2022

PRIMA: Planner-Reasoner Inside a Multi-task Reasoning Agent.
CoRR, 2022

TOPS: Transition-based VOlatility-controlled Policy Search and its Global Convergence.
CoRR, 2022

TOPS: Transition-Based Volatility-Reduced Policy Search.
Proceedings of the Autonomous Agents and Multiagent Systems. Best and Visionary Papers, 2022

2021
Explainable Neuro-Symbolic Hierarchical Reinforcement Learning.
Proceedings of the Neuro-Symbolic Artificial Intelligence: The State of the Art, 2021

Ensemble single image deraining network via progressive structural boosting constraints.
Signal Process. Image Commun., 2021

Crowd understanding and analysis.
IET Image Process., 2021

A Critical Review of Inductive Logic Programming Techniques for Explainable AI.
CoRR, 2021

Mean-Variance Policy Iteration for Risk-Averse Reinforcement Learning.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Variance-Reduced Off-Policy Memory-Efficient Policy Search.
CoRR, 2020

Finite-Sample Analysis of GTD Algorithms.
CoRR, 2020

Per-Step Reward: A New Perspective for Risk-Averse Reinforcement Learning.
CoRR, 2020

Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation.
Proceedings of the 37th International Conference on Machine Learning, 2020

GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Hierarchical Feature Selection for Random Projection.
IEEE Trans. Neural Networks Learn. Syst., 2019

Stable and Efficient Policy Evaluation.
IEEE Trans. Neural Networks Learn. Syst., 2019

Restoration algorithm for noisy complex illumination.
IET Comput. Vis., 2019

Provably Convergent Off-Policy Actor-Critic with Function Approximation.
CoRR, 2019

A Human-Centered Data-Driven Planner-Actor-Critic Architecture via Logic Programming.
Proceedings of the 35th International Conference on Logic Programming (Technical Communications), 2019

Optimal Control of Complex Systems through Variational Inference with a Discrete Event Decision Process.
Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019

Logic-Based Sequential Decision-Making.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

SDRL: Interpretable and Data-Efficient Deep Reinforcement Learning Leveraging Symbolic Planning.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
Proximal Gradient Temporal Difference Learning: Stable Reinforcement Learning with Polynomial Sample Complexity.
J. Artif. Intell. Res., 2018

QUOTA: The Quantile Option Architecture for Reinforcement Learning.
CoRR, 2018

Dantzig Selector with an Approximately Optimal Denoising Matrix and its Application to Reinforcement Learning.
CoRR, 2018

A Block Coordinate Ascent Algorithm for Mean-Variance Optimization.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-Making.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018

Program Search for Machine Learning Pipelines Leveraging Symbolic Planning and Reinforcement Learning.
Proceedings of the Genetic Programming Theory and Practice XVI, 2018

R2PG: Risk-Sensitive and Reliable Policy Gradient.
Proceedings of the Workshops of the The Thirty-Second AAAI Conference on Artificial Intelligence, 2018

2017
Deep Multimodal Reinforcement Network with Contextually Guided Recurrent Attention for Image Question Answering.
J. Comput. Sci. Technol., 2017

O²TD: (Near)-Optimal Off-Policy TD Learning.
CoRR, 2017

2016
Dantzig Selector with an Approximately Optimal Denoising Matrix and its Application in Sparse Reinforcement Learning.
Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, 2016

Proximal Gradient Temporal Difference Learning Algorithms.
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016

Neural Clinical Paraphrase Generation with Attention.
Proceedings of the Clinical Natural Language Processing Workshop, 2016

Uncorrelated Group LASSO.
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016

2015
Finite-Sample Analysis of Proximal Gradient TD Algorithms.
Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 2015

Solving Large Sustainable Supply Chain Networks Using Variational Inequalities.
Proceedings of the Computational Sustainability, 2015

2014
Bluetooth aided mobile phone localization: A nonlinear neural circuit approach.
ACM Trans. Embed. Comput. Syst., 2014

Proximal Reinforcement Learning: A New Theory of Sequential Decision Making in Primal-Dual Spaces.
CoRR, 2014

2013
Selective Positive-Negative Feedback Produces the Winner-Take-All Competition in Recurrent Neural Networks.
IEEE Trans. Neural Networks Learn. Syst., 2013

Accelerating a Recurrent Neural Network to Finite-Time Convergence for Solving Time-Varying Sylvester Equation by Using a Sign-Bi-power Activation Function.
Neural Process. Lett., 2013

Neural network based mobile phone localization using Bluetooth connectivity.
Neural Comput. Appl., 2013

Decentralized control of collaborative redundant manipulators with partial command coverage via locally connected recurrent neural networks.
Neural Comput. Appl., 2013

A nonlinear model to generate the winner-take-all competition.
Commun. Nonlinear Sci. Numer. Simul., 2013

2012
Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems.
Sensors, 2012

Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration.
Sensors, 2012

Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks.
Neurocomputing, 2012

Sparse Q-learning with Mirror Descent.
Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, 2012

Regularized Off-Policy TD-Learning.
Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012

2010
Basis Construction from Power Series Expansions of Value Functions.
Proceedings of the Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, 2010

Two-time-scale online actor-critic paradigm driven by POMDP.
Proceedings of the IEEE International Conference on Networking, Sensing and Control, 2010

A hierarchical learning architecture with multiple-goal representations based on adaptive dynamic programming.
Proceedings of the IEEE International Conference on Networking, Sensing and Control, 2010

