Ernesto C. Martínez

Orcid: 0000-0002-2622-1579

According to our database, Ernesto C. Martínez authored at least 29 papers between 2005 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Learning and adaptation of strategies in automated negotiations between context-aware agents.
Inteligencia Artif., January, 2024

A workflow management system for reproducible and interoperable high-throughput self-driving experiments.
Comput. Chem. Eng., 2024

Context-Aware Cognitive Agents using Knowledge Graphs for Automated Negotiation.
Proceedings of the L (50th) Latin American Computer Conference, 2024

2023
Artificial Theory of Mind in contextual automated negotiations within peer-to-peer markets.
Eng. Appl. Artif. Intell., April, 2023

2022
A peer-to-peer market for utility exchanges in Eco-Industrial Parks using automated negotiations.
Expert Syst. Appl., 2022

When Bioprocess Engineering Meets Machine Learning: A Survey from the Perspective of Automated Bioprocess Development.
CoRR, 2022

2021
Automatic tuning of hyper-parameters of reinforcement learning algorithms using Bayesian optimization with behavioral cloning.
CoRR, 2021

A context-aware approach to automated negotiation using reinforcement learning.
Adv. Eng. Informatics, 2021

2020
A repeated-negotiation game approach to distributed (re)scheduling of multiple projects using decoupled learning.
Simul. Model. Pract. Theory, 2020

2019
A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning.
CoRR, 2019

The importance of context-dependent learning in negotiation agents.
Inteligencia Artif., 2019

2018
Multi-agent Learning by Trial and Error for Resource Leveling during Multi-Project (Re)scheduling.
J. Comput. Sci. Technol., 2018

Generating Rescheduling Knowledge using Reinforcement Learning in a Cognitive Architecture.
CoRR, 2018

A Cognitive Approach to Real-time Rescheduling using SOAR-RL.
CoRR, 2018

Towards Autonomous Reinforcement Learning: Automatic Setting of Hyper-parameters using Bayesian Optimization.
CLEI Electron. J., 2018

Robust insulin estimation under glycemic variability using Bayesian filtering and Gaussian process models.
Biomed. Signal Process. Control., 2018

2017
Iterative modeling and optimization of biomass production using experimental feedback.
Comput. Chem. Eng., 2017

2015
On-line policy learning and adaptation for real-time personalization of an artificial pancreas.
Expert Syst. Appl., 2015

Controlling blood glucose variability under uncertainty using reinforcement learning and Gaussian processes.
Appl. Soft Comput., 2015

An active inference approach to on-line agent monitoring in safety-critical systems.
Adv. Eng. Informatics, 2015

2014
Behavior monitoring under uncertainty using Bayesian surprise and optimal action selection.
Expert Syst. Appl., 2014

2013
Dynamic optimization of bioreactors using probabilistic tendency models and Bayesian active learning.
Comput. Chem. Eng., 2013

2012
SmartGantt - An intelligent system for real time rescheduling based on relational reinforcement learning.
Expert Syst. Appl., 2012

SmartGantt - An interactive system for generating and updating rescheduling knowledge using relational abstractions.
Comput. Chem. Eng., 2012

Task Rescheduling using Relational Reinforcement Learning.
Inteligencia Artif., 2012

2011
Model-free control based on reinforcement learning for a wastewater treatment problem.
Appl. Soft Comput., 2011

2007
Model-free learning control of neutralization processes using reinforcement learning.
Eng. Appl. Artif. Intell., 2007

Performance monitoring of industrial controllers based on the predictability of controller behavior.
Comput. Chem. Eng., 2007

2005
Macro-Actions in Model-Free Intelligent Control with Application to pH Control.
Proceedings of the 44th IEEE Conference on Decision and Control and the 8th European Control Conference, 2005
