Kacper Sokol
ORCID: 0000-0002-9869-5896
Affiliations:
- ETH Zurich, Switzerland
- University of Bristol, Intelligent Systems Laboratory, UK
- RMIT University, ARC Centre of Excellence for Automated Decision-Making and Society, Australia (former)
According to our database, Kacper Sokol authored at least 44 papers between 2016 and 2025.
Bibliography
2025
Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations.
Int. J. Hum. Comput. Stud., 2025
2024
Data Min. Knowl. Discov., September, 2024
Leveraging Simulation Data to Understand Bias in Predictive Models of Infectious Disease Spread.
ACM Trans. Spatial Algorithms Syst., June, 2024
Perfect Counterfactuals in Imperfect Worlds: Modelling Noisy Implementation of Actions in Sequential Algorithmic Recourse.
CoRR, 2024
Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness.
Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2024
What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks.
Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 2024
2023
Proceedings of the Prolog: The Next 50 Years, 2023
Can Users Correctly Interpret Machine Learning Explanations and Simultaneously Identify Their Limitations?
CoRR, 2023
CoRR, 2023
(Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Explainability.
CoRR, 2023
Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations.
CoRR, 2023
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication.
CoRR, 2023
2022
FAT Forensics: A Python toolbox for algorithmic fairness, accountability and transparency.
Softw. Impacts, December, 2022
What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components.
Dataset, October, 2022
Simply Logical - Intelligent Reasoning by Example (Fully Interactive Online Edition).
Dataset, August, 2022
Analysing Donors' Behaviour in Non-profit Organisations for Disaster Resilience: The 2019-2020 Australian Bushfires Case Study.
CoRR, 2022
What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components.
CoRR, 2022
Simply Logical - Intelligent Reasoning by Example (Fully Interactive Online Edition).
CoRR, 2022
How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies.
CoRR, 2022
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022
2021
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence.
CoRR, 2021
You Only Write Thrice: Creating Documents, Computational Notebooks and Presentations From a Single Source.
CoRR, 2021
2020
FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems.
Dataset, May, 2020
FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems.
J. Open Source Softw., 2020
LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees.
CoRR, 2020
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency.
CoRR, 2020
Explainability fact sheets: a framework for systematic assessment of explainable approaches.
Proceedings of the FAT* '20: Conference on Fairness, Accountability, and Transparency, 2020
Proceedings of the AIES '20: AAAI/ACM Conference on AI, Ethics, and Society, 2020
2019
Fairness, Accountability and Transparency in Artificial Intelligence: A Case Study of Logical Predictive Models.
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019
Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019
Counterfactual Explanations of Machine Learning Predictions: Opportunities and Challenges for AI Safety.
Proceedings of the Workshop on Artificial Intelligence Safety 2019 co-located with the Thirty-Third AAAI Conference on Artificial Intelligence 2019 (AAAI-19), 2019
2018
Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018
Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
2017
The Role of Textualisation and Argumentation in Understanding the Machine Learning Process.
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017
2016
Proceedings of the 26th International Conference on Inductive Logic Programming (Short papers), 2016