Renee Shelby

ORCID: 0000-0003-4720-3844

According to our database, Renee Shelby authored at least 22 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Creative ML Assemblages: The Interactive Politics of People, Processes, and Products.
Proc. ACM Hum. Comput. Interact., 2024

The Ethics of Advanced AI Assistants.
CoRR, 2024

Harm Amplification in Text-to-Image Models.
CoRR, 2024

Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse.
Proceedings of the 33rd USENIX Security Symposium, 2024

Debiasing Text Safety Classifiers through a Fairness-Aware Ensemble.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: EMNLP 2024, 2024

"What is Safety?": Building Bridges Across Approaches to Digital Risks and Harms.
Proceedings of the Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 2024

How Knowledge Workers Think Generative AI Will (Not) Transform Their Industries.
Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024

Generative AI in Creative Practice: ML-Artist Folk Theories of T2I Use, Harm, and Harm-Reduction.
Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024

Painting with Cameras and Drawing with Text: AI Use in Accessible Creativity.
Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 2024

In Whose Voice?: Examining AI Agent Representation of People in Social Interaction through Generative Speech.
Proceedings of the Designing Interactive Systems Conference, 2024

2023
Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm.
Big Data Soc., July, 2023

Safety and Fairness for Content Moderation in Generative Models.
CoRR, 2023

AI's Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia.
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023

Infrastructuring Care: How Trans and Non-Binary People Meet Health and Well-Being Needs through Technology.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023

From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction.
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023

Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development.
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023

2022
Situating questions of data, power, and racial formation.
Big Data Soc., January, 2022

Sociotechnical Harms: Scoping a Taxonomy for Harm Reduction.
CoRR, 2022

Terms-we-Serve-with: a feminist-inspired social imaginary for improved transparency and engagement in AI.
CoRR, 2022

2021
The Datafication of #MeToo: Whiteness, Racial Capitalism, and Anti-Violence Technologies.
Big Data Soc., July, 2021

Whiteness in and through data protection: an intersectional approach to anti-violence apps and #MeToo bots.
Internet Policy Rev., 2021
