A new study, conducted as part of the Real Pork Trust Consortium and led by Dr. Alexa Lamm in collaboration with Dr. Kevan Lamm, Dr. Allison Byrd, Dr. Masoud Yazdanpanah, Dr. Nicholas Gabler, Dr. Anna Johnson, Dr. Catherine Sanders, Dr. Fallys Masambuka-Kanchewa and Dr. Michael Retallick, examined how consumers perceive the transparency of agricultural science messages on social media when they learn the content was generated by AI versus by a scientist. You can read the entire article, “When Machines Speak Science: Testing Consumers’ Perceptions of AI-Generated Science Communication Messages,” in the Journal of Applied Communications.

Effective science communication is essential for addressing societal challenges, shaping policy, and building trust between experts and the public. Abilities once considered exclusive to humans, such as intelligence, language processing, knowledge representation, and reasoning, are now exhibited by AI, and many people rely on it to produce content on virtually any subject. As a result, written language is no longer an exclusively human domain. Despite this potential, many people remain skeptical: studies show some individuals experience mild to moderate aversion to AI-generated content.

In this study, the research team used Facebook, the most widely used social media platform in the United States, to share posts based on recent scientific journal articles on pig welfare and pig nutrition.

Respondents in the study were randomly assigned to view one of four posts:

  • Animal nutrition message (AI-generated)
  • Animal nutrition message (scientist-generated)
  • Animal welfare message (AI-generated)
  • Animal welfare message (scientist-generated)

Respondents first read the assigned post and rated its perceived transparency. They were then told how the post had actually been generated (by AI or by a scientist) and asked to rate its transparency again.

Key Findings

  • Before disclosure, posts on animal welfare were seen as more transparent than those on animal nutrition, regardless of whether they were AI- or scientist-generated.
  • After disclosure, transparency ratings shifted depending on the source. Respondents rated AI-generated animal welfare posts significantly lower in transparency than scientist-generated animal nutrition posts.
  • Overall, once respondents learned the source, transparency dropped across both topics when posts were identified as AI-generated, but increased when identified as scientist-generated.

The study concluded that when communicating with public audiences about animal science topics, regardless of whether the topics are contentious, social media science messages created by scientists are viewed as more trustworthy than those generated by AI. This reflects ongoing concerns about AI’s role in fields where audiences are hesitant to trust AI-produced content. The researchers suggest that using AI in communication could damage the credibility of research organizations if the public perceives messages as authored by humans and later learns they were AI-generated. They therefore recommend that science communicators limit their use of AI for content creation until AI becomes more widely accepted, or until trust in AI across different subjects is better understood.