A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations.
CoRR, 2025
VERB: Visualizing and Interpreting Bias Mitigation Techniques Geometrically for Word Representations.
ACM Trans. Interact. Intell. Syst., 2024
GeniL: A Multilingual Dataset on Generalizing Language.
CoRR, 2024
SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes.
CoRR, 2024
MiTTenS: A Dataset for Evaluating Misgendering in Translation.
CoRR, 2024
Beyond the Surface: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation.
CoRR, 2024
MisgenderMender: A Community-Informed Approach to Interventions for Misgendering.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024
MiTTenS: A Dataset for Evaluating Gender Mistranslation.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata.
Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24), 2024
ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
Building Socio-culturally Inclusive Stereotype Resources with Community Engagement.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
"I wouldn't say offensive but...": Disability-Centered Perspectives on Large Language Models.
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023
The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023
SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023
MISGENDERED: Limits of Large Language Models in Understanding Pronouns.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023
Cultural Re-contextualization of Fairness Research in Language Technologies in India.
CoRR, 2022
Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN.
CoRR, 2022
DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation.
CoRR, 2022
Socially Aware Bias Measurements for Hindi Language Representations.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
On Measures of Biases and Harms in NLP.
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, 2022
Re-contextualizing Fairness in NLP: The Case of India.
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, 2022
Representation Learning for Resource-Constrained Keyphrase Generation.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022
Closed form word embedding alignment.
Knowl. Inf. Syst., 2021
What do Bias Measures Measure?
CoRR, 2021
VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations.
CoRR, 2021
An Interactive Visual Demo of Bias Mitigation Techniques for Word Representations From a Geometric Perspective.
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, 2021
A Visual Tour of Bias Mitigation Techniques for Word Representations.
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), 2021
Measures and Best Practices for Responsible AI.
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), 2021
Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021
OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021
The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability.
PhD thesis, 2020
The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability.
CoRR, 2020
On Measuring and Mitigating Biased Inferences of Word Embeddings.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
Attenuating Bias in Word Vectors.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019
Absolute Orientation for Word Embedding Alignment.
CoRR, 2018