Trust the messenger? The role of AI transparency in policy research communication

By Michael Keenan, Jawoo Koo, Christine Mwangi, Naureen Karachiwalla, Clemens Breisinger, and MinAh Kim
Published on 31.12.24

A key objective of research is to ensure that audiences, including scientists, policymakers, and lay readers, find it valuable and apply it effectively. Yet communicating research to these audiences is rarely straightforward. Readers are inundated with information, making accessible, understandable communication critical. Trust in the credibility of the content and its authors is equally essential. Generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT can instantly summarize and interpret research findings using clear language, charts, and other formats, though they also make mistakes. They also create a particular dilemma: if a blog post summarizing research is written with AI, does it lose credibility? Might readers trust it less because less effort may have gone into producing it, making its value uncertain, or because they doubt AI’s capabilities?