Adolfo Perrusquía

Orcid: 0000-0003-2290-1160

According to our database, Adolfo Perrusquía authored at least 56 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Trajectory Intent Prediction of Autonomous Systems Using Dynamic Mode Decomposition.
IEEE Trans. Syst. Man Cybern. Syst., December, 2024

Reservoir Computing for Drone Trajectory Intent Prediction: A Physics Informed Approach.
IEEE Trans. Cybern., September, 2024

Trajectory Inference of Unknown Linear Systems Based on Partial States Measurements.
IEEE Trans. Syst. Man Cybern. Syst., April, 2024

Control Layer Security: A New Security Paradigm for Cooperative Autonomous Systems.
IEEE Veh. Technol. Mag., March, 2024

An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments.
Future Internet, March, 2024

Prescribed Time Interception of Moving Objects' Trajectories Using Robot Manipulators.
Robotics, 2024

Explainable data-driven Q-learning control for a class of discrete-time linear autonomous systems.
Inf. Sci., 2024

Wildfire and smoke early detection for drone applications: A light-weight deep learning approach.
Eng. Appl. Artif. Intell., 2024

Selective Exploration and Information Gathering in Search and Rescue Using Hierarchical Learning Guided by Natural Language Input.
CoRR, 2024

Explainable Interface for Human-Autonomy Teaming: A Survey.
CoRR, 2024

A Novel Distributed Authentication of Blockchain Technology Integration in IoT Services.
IEEE Access, 2024

Towards bio-inspired control of aerial vehicle: Distributed aerodynamic parameters for state prediction.
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024

A Novel Physics-Informed Recurrent Neural Network Approach for State Estimation of Autonomous Platforms.
Proceedings of the International Joint Conference on Neural Networks, 2024

Explaining Data-Driven Control in Autonomous Systems: A Reinforcement Learning Case Study.
Proceedings of the 10th International Conference on Control, 2024

2023
Physics Informed Trajectory Inference of a Class of Nonlinear Systems Using a Closed-Loop Output Error Technique.
IEEE Trans. Syst. Man Cybern. Syst., December, 2023

Closed-Loop Output Error Approaches for Drone's Physics Informed Trajectory Inference.
IEEE Trans. Autom. Control., December, 2023

A Closed-Loop Output Error Approach for Physics-Informed Trajectory Inference Using Online Data.
IEEE Trans. Cybern., March, 2023

Reward inference of discrete-time expert's controllers: A complementary learning approach.
Inf. Sci., 2023

Optimal Control of Nonlinear Systems Using Experience Inference Human-Behavior Learning.
IEEE CAA J. Autom. Sinica, 2023

A Deep Mixture of Experts Network for Drone Trajectory Intent Classification and Prediction using Non-Cooperative Radar Data.
Proceedings of the IEEE Symposium Series on Computational Intelligence, 2023

Robust Control of Linear Systems: A Min-Max Reinforcement Learning Formulation.
Proceedings of the 20th International Conference on Electrical Engineering, 2023

Multi-Spectral Fusion using Generative Adversarial Networks for UAV Detection of Wild Fires.
Proceedings of the International Conference on Artificial Intelligence in Information and Communication, 2023

A Two-Stages Unsupervised/Supervised Statistical Learning Approach for Drone Behaviour Prediction.
Proceedings of the 9th International Conference on Control, 2023

2022
Neural H₂ Control Using Continuous-Time Reinforcement Learning.
IEEE Trans. Cybern., 2022

A complementary learning approach for expertise transference of human-optimized controllers.
Neural Networks, 2022

Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning.
Inf. Sci., 2022

Human-behavior learning: A new complementary learning perspective for optimal decision making controllers.
Neurocomputing, 2022

Robust state/output feedback linearization of direct drive robot manipulators: A controllability and observability analysis.
Eur. J. Control, 2022

Stable robot manipulator parameter identification: A closed-loop input error approach.
Autom., 2022

Model-free reinforcement learning from expert demonstrations: a survey.
Artif. Intell. Rev., 2022

Mechanical Advantage Assurance Control of Quick-return Mechanisms in Task Space.
Proceedings of the 19th International Conference on Electrical Engineering, 2022

Cost Inference of Discrete-time Linear Quadratic Control Policies using Human-Behaviour Learning.
Proceedings of the 8th International Conference on Control, 2022

Performance Objective Extraction of Optimal Controllers: A Hippocampal Learning Approach.
Proceedings of the 18th IEEE International Conference on Automation Science and Engineering, 2022

2021
Discrete-Time H₂ Neural Control Using Reinforcement Learning.
IEEE Trans. Neural Networks Learn. Syst., 2021

Multi-agent reinforcement learning for redundant robot control in task-space.
Int. J. Mach. Learn. Cybern., 2021

Nonlinear control using human behavior learning.
Inf. Sci., 2021

Continuous-time reinforcement learning for robust control under worst-case uncertainty.
Int. J. Syst. Sci., 2021

Identification and optimal control of nonlinear systems using recurrent neural networks and reinforcement learning: An overview.
Neurocomputing, 2021

Constant Speed Control of Slider-Crank Mechanisms: A Joint-Task Space Hybrid Control Approach.
IEEE Access, 2021

An Input Error Method for Parameter Identification of a Class of Euler-Lagrange Systems.
Proceedings of the 18th International Conference on Electrical Engineering, 2021

Human-Behavior Learning for Infinite-Horizon Optimal Tracking Problems of Robot Manipulators.
Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), 2021

2020
Human-in-the-Loop Control Using Euler Angles.
J. Intell. Robotic Syst., 2020

Simplified Stable Admittance Control Using End-Effector Orientations.
Int. J. Soc. Robotics, 2020

Robot Position/Force Control in Unknown Environment Using Hybrid Reinforcement Learning.
Cybern. Syst., 2020

Task Space Position Control of Slider-Crank Mechanisms Using Simple Tuning Techniques Without Linearization Methods.
IEEE Access, 2020

A Novel Tuning Method of PD With Gravity Compensation Controller for Robot Manipulators.
IEEE Access, 2020

Robust Control in the Worst Case Using Continuous Time Reinforcement Learning.
Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics, 2020

Neural H₂ Control Using Reinforcement Learning for Unknown Nonlinear Systems.
Proceedings of the 2020 International Joint Conference on Neural Networks, 2020

Redundant Robot Control Using Multi Agent Reinforcement Learning.
Proceedings of the 16th IEEE International Conference on Automation Science and Engineering, 2020

2019
Slider position control for slider-crank mechanisms with Jacobian compensator.
J. Syst. Control. Eng., 2019

Position/force control of robot manipulators using reinforcement learning.
Ind. Robot, 2019

Task space human-robot interaction using angular velocity Jacobian.
Proceedings of the International Symposium on Medical Robotics, 2019

Simple Optimal Tracking Control for a Class of Closed-Chain Mechanisms in Task Space.
Proceedings of the 16th International Conference on Electrical Engineering, 2019

Optimal contact force of Robots in Unknown Environments using Reinforcement Learning and Model-free controllers.
Proceedings of the 16th International Conference on Electrical Engineering, 2019

Large space dimension Reinforcement Learning for Robot Position/Force Discrete Control.
Proceedings of the 6th International Conference on Control, 2019

2016
Robust controller for aircraft roll control system using data flight parameters.
Proceedings of the 13th International Conference on Electrical Engineering, 2016
