Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance
"We are at a critical inflection point. GenAI has reduced the cost of producing disinformation to near-zero. Your expertise matters."
We are conducting a longitudinal study of how experts perceive the evolving threat landscape of AI-generated disinformation. This survey is Wave 2 of that study; your responses will directly shape policy recommendations and reproducible mitigation frameworks.
We are seeking domain experts with professional experience in:
| Domain | Examples |
|---|---|
| AI/ML Research & Engineering | Researchers working on LLMs, diffusion models, synthetic media detection |
| Cybersecurity | Threat intelligence, adversarial ML, platform security |
| Digital Policy & Regulation | Policymakers, regulators, governance specialists (EU AI Act, DSA, etc.) |
| Journalism & Fact-Checking | Investigative journalists, fact-checkers, media analysts |
| Computational Social Science | Researchers studying online behavior, misinformation dynamics |
| Ethics & Law | AI ethicists, legal scholars focused on synthetic media |
Why participate:
- Shape the research: Your insights directly inform academic findings and policy recommendations
- GDPR-compliant & anonymous: Responses are anonymized (see the sketch after this list); optional attribution in the acknowledgments
- Time commitment: Approximately 15 minutes
- Stay informed: Receive a summary of findings upon publication
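To make the anonymization point above concrete, here is a minimal sketch of one common building block, keyed pseudonymization, in which identifiers are replaced by stable, non-reversible tokens before analysis. This is an illustration under stated assumptions, not a description of our actual pipeline; `STUDY_KEY` and `pseudonymize` are hypothetical names.

```python
# Illustrative sketch only: keyed pseudonymization of participant identifiers.
# The key and function names are hypothetical, not the study's actual pipeline.
import hashlib
import hmac

# A secret, per-study key kept separate from response data (assumption: in
# practice this lives in a secrets manager, never in source code).
STUDY_KEY = b"replace-with-a-randomly-generated-secret"

def pseudonymize(identifier: str) -> str:
    """Map an identifier (e.g., an email) to a stable, non-reversible token."""
    return hmac.new(STUDY_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymize("expert@example.org"))  # stable token; no raw identifier stored
```

A keyed HMAC rather than a bare hash matters here: identifiers such as emails are low-entropy and an unkeyed hash can be reversed by enumeration, whereas destroying the key at the end of the study severs any remaining linkage.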
Key concepts from the study:

| Concept | Definition |
|---|---|
| Verification Crisis | The structural shift where GenAI reduces the cost of producing high-fidelity disinformation toward zero, risking the erosion of shared factual basis in democratic deliberation |
| Epistemic Fragmentation | The breakdown of a shared reality as personalized synthetic content creates isolated information bubbles |
| Synthetic Consensus | The artificial manufacture of apparent agreement through AI-generated content simulating public opinion |
| Reproducible Provenance | Transparent, standardized infrastructure for verifying information origins—treating information integrity as infrastructure |
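Because reproducible provenance is the load-bearing concept in this work, a small sketch may help fix intuitions. The snippet below binds content to a signed manifest and lets anyone with the public key verify origin and integrity later, roughly the pattern that standards such as C2PA elaborate with richer manifests and key infrastructure. The manifest fields and helper names are illustrative assumptions, and the example uses the third-party `cryptography` package.

```python
# Minimal sketch of signed content provenance: hash the content, wrap the hash
# in a manifest, sign the manifest, verify both later. Field names and helpers
# are illustrative; real standards (e.g., C2PA) define much richer formats.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(content: bytes, publisher: str) -> bytes:
    """Build a canonical (sorted-key) JSON manifest binding content to its origin."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "publisher": publisher,
    }
    return json.dumps(manifest, sort_keys=True).encode("utf-8")

def verify(content: bytes, manifest: bytes, signature: bytes, public_key) -> bool:
    """Check the signature covers the manifest and the manifest covers the content."""
    try:
        public_key.verify(signature, manifest)  # raises InvalidSignature on tamper
    except InvalidSignature:
        return False
    claimed = json.loads(manifest)["sha256"]
    return claimed == hashlib.sha256(content).hexdigest()

# Publisher side: sign at creation time.
key = Ed25519PrivateKey.generate()
article = b"GenAI has reduced the cost of producing disinformation to near-zero."
manifest = make_manifest(article, publisher="example-newsroom")
signature = key.sign(manifest)

# Verifier side: anyone holding the public key can check origin and integrity.
print(verify(article, manifest, signature, key.public_key()))         # True
print(verify(article + b"!", manifest, signature, key.public_key()))  # False
```

Read this way, treating information integrity as infrastructure means the keys, manifests, and verification endpoints become shared plumbing, maintained like roads and power grids, rather than per-platform features.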
If you use this work in your research, please cite:
```bibtex
@inproceedings{loth2026verification,
  author    = {Loth, Alexander and Kappes, Martin and Pahl, Marc-Oliver},
  title     = {The Verification Crisis: Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance},
  booktitle = {Companion Proceedings of the ACM Web Conference 2026 (WWW '26 Companion)},
  year      = {2026},
  month     = apr,
  publisher = {ACM},
  address   = {New York, NY, USA},
  location  = {Dubai, United Arab Emirates},
  doi       = {10.1145/3774905.3795484},
  note      = {To appear. Also available as arXiv:2602.02100}
}
```

Note: This paper presents findings from Wave 1 of our longitudinal study. We are actively collecting expert responses for Wave 2 and plan a follow-up publication with expanded insights. Participate now to contribute to future research.
- Alexander Loth — Frankfurt University of Applied Sciences, Germany
- Martin Kappes — Frankfurt University of Applied Sciences, Germany
- Marc-Oliver Pahl — IMT Atlantique, UMR IRISA, Chaire Cyber CNI, France
This research builds on the JudgeGPT research platform—open-source infrastructure for studying human perception of AI-generated content.
This project is licensed under the MIT License; see the LICENSE file for details.
We thank the R2CASS workshop organizers—Momeni, Bleier, Dessì, and Khan—for establishing the reproducibility frameworks that inform this research.
"We must treat information integrity as infrastructure. Just as we build roads and power grids, we must build the protocols for truth verification."
— Survey Respondent (Policy Advisor)