Thomas Fel

ORCID: 0000-0002-2202-4615

According to our database, Thomas Fel authored at least 37 papers between 2020 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
Sparks of Explainability: Recent Advancements in Explaining Large Vision Models.
CoRR, February 2025

An Adaptive Orthogonal Convolution Scheme for Efficient and Flexible CNN Architectures.
CoRR, January 2025

2024
Conformal prediction for trustworthy detection of railway signals.
AI Ethics, February 2024

Sparks of Explainability: Recent Advancements in Explaining Large Vision Models (original French title: Lueurs d'Explicabilité : Avancées Récentes dans l'Explication des Grands Modèles de Vision).
PhD thesis, 2024

Local vs distributed representations: What is the right basis for interpretability?
CoRR, 2024

Unearthing Skill-Level Insights for Understanding Trade-Offs of Foundation Models.
CoRR, 2024

One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability.
CoRR, 2024

Feature Accentuation: Revealing 'What' Features Respond to in Natural Images.
CoRR, 2024

Influenciæ: A Library for Tracing the Influence Back to the Data-Points.
Proceedings of the Explainable Artificial Intelligence, 2024

Understanding Visual Feature Reliance through the Lens of Complexity.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Saliency strikes back: How filtering out high frequencies improves white-box explanations.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

On the Foundations of Shortcut Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Gradient strikes back: How filtering out high frequencies improves explanations.
CoRR, 2023

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization.
CoRR, 2023

Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex.
CoRR, 2023

Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception.
CoRR, 2023

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks.
CoRR, 2023

On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
Proceedings of the International Conference on Machine Learning, 2023

CRAFT: Concept Recursive Activation FacTorization for Explainability.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling.
Proceedings of the Conformal and Probabilistic Prediction with Applications, 2023

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Harmonizing the object recognition strategies of deep neural networks with humans.
CoRR, 2022

Conviformers: Convolutionally guided Vision Transformer.
CoRR, 2022

When adversarial attacks become interpretable counterfactual explanations.
CoRR, 2022

Xplique: A Deep Learning Explainability Toolbox.
CoRR, 2022

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022

Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Harmonizing the object recognition strategies of deep neural networks with humans.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2020
Representativity and Consistency Measures for Deep Neural Network Explanations.
CoRR, 2020

