Gesina Schwalbe

Orcid: 0000-0003-2690-2478

According to our database, Gesina Schwalbe authored at least 19 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts.
Data Min. Knowl. Discov., September, 2024

Unveiling Ontological Commitment in Multi-Modal Foundation Models.
CoRR, 2024

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
CoRR, 2024

The Anatomy of Adversarial Attacks: Concept-based XAI Dissection.
CoRR, 2024

Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs.
Proceedings of the Explainable Artificial Intelligence, 2024

Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024

Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces.
CoRR, 2023

Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes.
CoRR, 2023

Quantified Semantic Comparison of Convolutional Neural Networks.
CoRR, 2023

Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability.
Proceedings of the Explainable Artificial Intelligence, 2023

Interpretable Model-Agnostic Plausibility Verification for 2D Object Detectors Using Domain-Invariant Concept Bottleneck Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Concept Embedding Analysis: A Review.
CoRR, 2022

Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks in Perception Tasks.
CoRR, 2022

2021
XAI Method Properties: A (Meta-)study.
CoRR, 2021

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety.
CoRR, 2021

Verification of Size Invariance in DNN Activations Using Concept Embeddings.
Proceedings of the Artificial Intelligence Applications and Innovations, 2021

2020
Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications.
Proceedings of the Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops, 2020

Expressive Explanations of DNNs by Combining Concept Analysis with ILP.
Proceedings of the KI 2020: Advances in Artificial Intelligence, 2020
