Bhavya Ghai

ORCID: 0000-0003-3932-1525

According to our database, Bhavya Ghai authored at least 16 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing with Interactive Visualization.
Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024

2023
D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias.
IEEE Trans. Vis. Comput. Graph., 2023

The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing through Interactive Visualization.
CoRR, 2023

Towards Fair and Explainable AI using a Human-Centered AI Approach.
CoRR, 2023

Portrayal: Leveraging NLP and Visualization for Analyzing Fictional Characters.
Proceedings of the 2023 ACM Designing Interactive Systems Conference, 2023

2022
Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions.
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022

DramatVis Personae: Visual Text Analytics for Identifying Social Biases in Creative Writing.
Proceedings of the DIS '22: Designing Interactive Systems Conference, 2022

2021
WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings.
Proceedings of the CHI '21: CHI Conference on Human Factors in Computing Systems, 2021

Fluent: An AI Augmented Writing Tool for People who Stutter.
Proceedings of the ASSETS '21: The 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021

2020
Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers.
Proc. ACM Hum. Comput. Interact., 2020

Measuring Social Biases of Crowd Workers using Counterfactual Queries.
CoRR, 2020

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience.
CoRR, 2020

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation.
Proceedings of the 1st Workshop on Data Science with Human in the Loop, 2020

Does Speech Enhancement of Publicly Available Data Help Build Robust Speech Recognition Systems? (Student Abstract).
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Does Speech Enhancement of Publicly Available Data Help Build Robust Speech Recognition Systems?
CoRR, 2019

2017
Using data science as a community advocacy tool to promote equity in urban renewal programs: An analysis of Atlanta's Anti-Displacement Tax Fund.
CoRR, 2017
