Michael Soprano

ORCID: 0000-0002-7337-7592

According to our database, Michael Soprano authored at least 22 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Cognitive Biases in Fact-Checking and Their Countermeasures: A Review.
Inf. Process. Manag., March 2024

How Many Crowd Workers Do I Need? On Statistical Power when Crowdsourcing Relevance Judgments.
ACM Trans. Inf. Syst., January 2024

Crowdsourced Fact-checking: Does It Actually Work?
Inf. Process. Manag., 2024

Crowdsourcing Statement Classification to Enhance Information Quality Prediction.
Proceedings of the Disinformation in Open Online Media, 2024

Enhancing Fact-Checking: From Crowdsourced Validation to Integration with Large Language Models.
Proceedings of the 14th Italian Information Retrieval Workshop, 2024

2023
A Neural Model to Jointly Predict and Explain Truthfulness of Statements.
ACM J. Data Inf. Qual., March 2023

Can the crowd judge truthfulness? A longitudinal study on recent misinformation about COVID-19.
Pers. Ubiquitous Comput., 2023

Towards a Conversational-Based Agent for Health Services.
Proceedings of the Italia Intelligenza Artificiale, 2023

Fact-Checking at Scale with Crowdsourcing: Experiments and Lessons Learned.
Proceedings of the 13th Italian Information Retrieval Workshop (IIR 2023), 2023

2022
Transparent assessment of information quality of online reviews using formal argumentation theory.
Inf. Syst., 2022

Crowd_Frame: A Simple and Complete Framework to Deploy Complex Crowdsourcing Tasks Off-the-shelf.
Proceedings of the WSDM '22: The Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event / Tempe, AZ, USA, February 21, 2022

The Effects of Crowd Worker Biases in Fact-Checking Tasks.
Proceedings of the FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21, 2022

2021
The many dimensions of truthfulness: Crowdsourcing misinformation assessments on a multidimensional scale.
Inf. Process. Manag., 2021

E-BART: Jointly Predicting and Explaining Truthfulness.
Proceedings of the 2021 Truth and Trust Online Conference (TTO 2021), 2021

Assessing the Quality of Online Reviews Using Formal Argumentation Theory.
Proceedings of the Web Engineering - 21st International Conference, 2021

2020
Can The Crowd Identify Misinformation Objectively?: The Effects of Judgment Scale and Assessor's Background.
Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, 2020

The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation Objectively?
Proceedings of the CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, 2020

2019
HITS Hits Readersourcing: Validating Peer Review Alternatives Using Network Analysis.
Proceedings of the 4th Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2019) co-located with the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019), 2019

Crowdsourcing Peer Review: As We May Do.
Proceedings of the Digital Libraries: Supporting Open Science, 2019

Bias and Fairness in Effectiveness Evaluation by Means of Network Analysis and Mixture Models.
Proceedings of the 10th Italian Information Retrieval Workshop, 2019

2018
Reproduce and Improve: An Evolutionary Approach to Select a Few Good Topics for Information Retrieval Evaluation.
ACM J. Data Inf. Qual., 2018

Effectiveness Evaluation with a Subset of Topics: A Practical Approach.
Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 2018
