Aleksandr I. Panov

ORCID: 0000-0002-9747-3837

Affiliations:
  • Russian Academy of Sciences, Federal Research Center Computer Science and Control, Moscow


According to our database, Aleksandr I. Panov authored at least 76 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Interactive Semantic Map Representation for Skill-Based Visual Object Navigation.
IEEE Access, 2024

Decentralized Monte Carlo Tree Search for Partially Observable Multi-Agent Pathfinding.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

Learn to Follow: Decentralized Lifelong Multi-Agent Pathfinding via Planning and Learning.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Skill Fusion in Hybrid Robotic Framework for Visual Object Goal Navigation.
Robotics, June 2023

Policy Optimization to Learn Adaptive Motion Primitives in Path Planning With Dynamic Obstacles.
IEEE Robotics Autom. Lett., 2023

Gradual Optimization Learning for Conformational Energy Minimization.
CoRR, 2023

Object-Centric Learning with Slot Mixture Module.
CoRR, 2023

Graphical Object-Centric Actor-Critic.
CoRR, 2023

Neural Potential Field for Obstacle-Aware Local Motion Planning.
CoRR, 2023

Learning Successor Representations with Distributed Hebbian Temporal Memory.
CoRR, 2023

SegmATRon: Embodied Adaptive Semantic Segmentation for Indoor Environment.
CoRR, 2023

Evaluation of Safety Constraints in Autonomous Navigation with Deep Reinforcement Learning.
CoRR, 2023

Recurrent Memory Decision Transformer.
CoRR, 2023

Intrinsic Motivation in Model-based Reinforcement Learning: A Brief Review.
CoRR, 2023

Fine-Tuning Multimodal Transformer Models for Generating Actions in Virtual and Real Environments.
IEEE Access, 2023

Quantized Disentangled Representations for Object-Centric Visual Tasks.
Proceedings of the Pattern Recognition and Machine Intelligence, 2023

Model-Based Policy Optimization with Neural Differential Equations for Robotic Arm Control.
Proceedings of the Interactive Collaborative Robotics - 8th International Conference, 2023

Interpreting Decision Process in Offline Reinforcement Learning for Interactive Recommendation Systems.
Proceedings of the Neural Information Processing - 30th International Conference, 2023

Monte-Carlo Tree Search for Multi-agent Pathfinding: Preliminary Results.
Proceedings of the Hybrid Artificial Intelligent Systems - 18th International Conference, 2023

The Problem of Concept Learning and Goals of Reasoning in Large Language Models.
Proceedings of the Hybrid Artificial Intelligent Systems - 18th International Conference, 2023

Stabilize Sequential Data Representation via Attraction Module.
Proceedings of the Brain Informatics - 16th International Conference, 2023

Evaluation of Pretrained Large Language Models in Embodied Planning Tasks.
Proceedings of the Artificial General Intelligence - 16th International Conference, 2023

TransPath: Learning Heuristics for Grid-Based Pathfinding via Transformers.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Pathfinding in stochastic environments: learning vs planning.
PeerJ Comput. Sci., 2022

Collecting Interactive Multi-modal Datasets for Grounded Language Understanding.
CoRR, 2022

Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural Language Instructions.
CoRR, 2022

POGEMA: Partially Observable Grid Environment for Multiple Agents.
CoRR, 2022

IGLU Gridworld: Simple and Fast Environment for Embodied Dialog Agents.
CoRR, 2022

IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022.
CoRR, 2022

Vector Semiotic Model for Visual Question Answering.
Cogn. Syst. Res., 2022

Hierarchical intrinsically motivated agent planning behavior with dreaming in grid environments.
Brain Informatics, 2022

Hierarchical Landmark Policy Optimization for Visual Indoor Navigation.
IEEE Access, 2022

Simultaneous Learning and Planning in a Hierarchical Control System for a Cognitive Agent.
Autom. Remote. Control., 2022

Reinforcement Learning with Success Induced Task Prioritization.
Proceedings of the Advances in Computational Intelligence, 2022

Vector Symbolic Scene Representation for Semantic Place Recognition.
Proceedings of the International Joint Conference on Neural Networks, 2022

HPointLoc: Point-Based Indoor Place Recognition Using Synthetic RGB-D Images.
Proceedings of the Neural Information Processing - 29th International Conference, 2022

Stability and Similarity Detection for the Biologically Inspired Temporal Pooler Algorithms.
Proceedings of the 2022 Annual International Conference on Brain-Inspired Cognitive Architectures for Artificial Intelligence, 2022

Graph Strategy for Interpretable Visual Question Answering.
Proceedings of the Artificial General Intelligence - 15th International Conference, 2022

2021
Forgetful experience replay in hierarchical reinforcement learning from expert demonstrations.
Knowl. Based Syst., 2021

Multitask Adaptation by Retrospective Exploration with Learned World Models.
CoRR, 2021

NeurIPS 2021 Competition IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
CoRR, 2021

Landmark Policy Optimization for Object Navigation Task.
CoRR, 2021

Hierarchical Deep Q-Network from imperfect demonstrations in Minecraft.
Cogn. Syst. Res., 2021

Hybrid Policy Learning for Multi-Agent Pathfinding.
IEEE Access, 2021

Adaptive Maneuver Planning for Autonomous Vehicles Using Behavior Tree on Apollo Platform.
Proceedings of the Artificial Intelligence XXXVIII, 2021

Q-Mixing Network for Multi-agent Pathfinding in Partially Observable Grid Environments.
Proceedings of the Artificial Intelligence - 19th Russian Conference, 2021

Long-Term Exploration in Persistent MDPs.
Proceedings of the Advances in Computational Intelligence, 2021

Question Answering for Visual Navigation in Human-Centered Environments.
Proceedings of the Advances in Soft Computing, 2021

Model Predictive Control with Torque Constraints for Velocity-Driven Robotic Manipulator.
Proceedings of the 20th International Conference on Advanced Robotics, 2021

Flexible Data Augmentation in Off-Policy Reinforcement Learning.
Proceedings of the Artificial Intelligence and Soft Computing, 2021

Planning with Hierarchical Temporal Memory for Deterministic Markov Decision Problem.
Proceedings of the 13th International Conference on Agents and Artificial Intelligence, 2021

Applying Vector Symbolic Architecture and Semiotic Approach to Visual Dialog.
Proceedings of the Hybrid Artificial Intelligent Systems - 16th International Conference, 2021

Intrinsic Motivation to Learn Action-State Representation with Hierarchical Temporal Memory.
Proceedings of the Brain Informatics - 14th International Conference, 2021

Case-Based Task Generalization in Model-Based Reinforcement Learning.
Proceedings of the Artificial General Intelligence - 14th International Conference, 2021

2020
Forgetful Experience Replay in Hierarchical Reinforcement Learning from Demonstrations.
CoRR, 2020

Real-Time Object Navigation With Deep Neural Networks and Hierarchical Reinforcement Learning.
IEEE Access, 2020

Navigating Autonomous Vehicle at the Road Intersection Simulator with Reinforcement Learning.
Proceedings of the Artificial Intelligence - 18th Russian Conference, 2020

Q-Learning of Spatial Actions for Hierarchical Planner of Cognitive Agents.
Proceedings of the Interactive Collaborative Robotics - 5th International Conference, 2020

Hyperdimensional Representations in Semiotic Approach to AGI.
Proceedings of the Artificial General Intelligence - 13th International Conference, 2020

Delta Schema Network in Model-Based Reinforcement Learning.
Proceedings of the Artificial General Intelligence - 13th International Conference, 2020

2019
Hierarchical Deep Q-Network with Forgetting from Imperfect Demonstrations in Minecraft.
CoRR, 2019

Toward Faster Reinforcement Learning for Robotics: Using Gaussian Processes.
Proceedings of the Artificial Intelligence, 2019

Hierarchical Psychologically Inspired Planning for Human-Robot Interaction Tasks.
Proceedings of the Interactive Collaborative Robotics - 4th International Conference, 2019

Hierarchical Reinforcement Learning Approach for the Road Intersection Task.
Proceedings of the Biologically Inspired Cognitive Architectures 2019, 2019

Mental Actions and Modelling of Reasoning in Semiotic Approach to AGI.
Proceedings of the Artificial General Intelligence - 12th International Conference, 2019

2018
Automatic formation of the structure of abstract machines in hierarchical reinforcement learning with state clustering.
CoRR, 2018

Task and Spatial Planning by the Cognitive Agent with Human-Like Knowledge Representation.
Proceedings of the Interactive Collaborative Robotics - Third International Conference, 2018

2017
Synthesis of the Behavior Plan for Group of Robots with Sign Based World Model.
Proceedings of the Interactive Collaborative Robotics - Second International Conference, 2017

Grid Path Planning with Deep Reinforcement Learning: Preliminary Results.
Proceedings of the 8th Annual International Conference on Biologically Inspired Cognitive Architectures, 2017

2016
Multilayer cognitive architecture for UAV control.
Cogn. Syst. Res., 2016

A framework for automated meta-analysis: Dendritic cell therapy case study.
Proceedings of the 8th IEEE International Conference on Intelligent Systems, 2016

Psychologically Inspired Planning Method for Smart Relocation Task.
Proceedings of the 7th Annual International Conference on Biologically Inspired Cognitive Architectures, 2016

2015
Behavior and Path Planning for the Coalition of Cognitive Robots in Smart Relocation Tasks.
Proceedings of the Robot Intelligence Technology and Applications 4, 2015

Assessment of Dendritic Cell Therapy Effectiveness Based on the Feature Extraction from Scientific Publications.
Proceedings of the ICPRAM 2015, 2015
