Xavier Renard

According to our database, Xavier Renard authored at least 21 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.
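A collaborative distance is the length of the shortest chain of coauthorships linking two researchers in the coauthorship graph. As a rough illustration (toy data with hypothetical author names, not the actual coauthorship records behind the numbers above), a breadth-first search computes such a distance:

```python
from collections import deque

def collaboration_distance(coauthors, source, target):
    """Shortest coauthorship-chain length between two authors,
    found by breadth-first search over an adjacency-list graph.
    Returns None if no chain connects them."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        author, dist = queue.popleft()
        for peer in coauthors.get(author, ()):
            if peer == target:
                return dist + 1
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, dist + 1))
    return None  # disconnected: no coauthorship chain exists

# Toy graph: A coauthored with B, B with C, C with D,
# so A's "D number" is 3.
graph = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}
```

An Erdős number of four means this search, run on the real coauthorship graph with Paul Erdős as the target, finds a shortest chain of length four.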



Bibliography

2024
Understanding prediction discrepancies in classification.
Mach. Learn., October 2024

Post-processing fairness with minimal changes.
CoRR, 2024

2023
Dynamic Interpretability for Model Comparison via Decision Rules.
CoRR, 2023

2022
On the Granularity of Explanations in Model Agnostic NLP Interpretability.
Proceedings of the Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022

2021
Understanding surrogate explanations: the interplay between complexity, fidelity and coverage.
CoRR, 2021

On the overlooked issue of defining explanation objectives for local-surrogate explainers.
CoRR, 2021

Understanding Prediction Discrepancies in Machine Learning Classifiers.
CoRR, 2021

How to Choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice.
Proceedings of the Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021

2020
QUACKIE: A NLP Classification Task With Ground Truth Explanations.
CoRR, 2020

Sentence-Based Model Agnostic NLP Interpretability.
CoRR, 2020

2019
Imperceptible Adversarial Attacks on Tabular Data.
CoRR, 2019

Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees.
CoRR, 2019

Unjustified Classification Regions and Counterfactual Explanations in Machine Learning.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2019

Localized Random Shapelets.
Proceedings of the Advanced Analytics and Learning on Temporal Data, 2019

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019

2018
Defining Locality for Surrogates in Post-hoc Interpretablity.
CoRR, 2018

Detecting Potential Local Adversarial Examples for Human-Interpretable Defense.
Proceedings of the ECML PKDD 2018 Workshops, 2018

Comparison-Based Inverse Classification for Interpretability in Machine Learning.
Proceedings of the Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations, 2018

2017
Time series representation for classification: a motif-based approach. (Représentation de séries temporelles pour la classification: une approche basée sur la découverte automatique de motifs).
PhD thesis, 2017

Inverse Classification for Comparison-based Interpretability in Machine Learning.
CoRR, 2017

2015
Random-shapelet: An algorithm for fast shapelet discovery.
Proceedings of the 2015 IEEE International Conference on Data Science and Advanced Analytics, 2015
