An experiment involving AI-generated messages deployed on the Reddit subforum r/ChangeMyView (CMV) has ignited significant criticism, primarily because the unwitting participants never gave informed consent. The University of Zurich, which oversaw the controversial research, has defended its approval of the study, although it has issued a warning to the principal investigator.
The subreddit r/ChangeMyView is designed to foster discussion by encouraging users to share their viewpoints and engage with differing opinions. It maintains extensive rules aimed at ensuring civil discourse. However, early Saturday morning, CMV moderators issued a detailed message regarding an unauthorized experiment carried out by researchers from the University of Zurich. This study aimed to determine whether large language models (LLMs) could effectively influence people’s views.
The moderators revealed that the researchers had used multiple accounts to post AI-generated comments on CMV. Their stated goal was to assess the persuasiveness of LLMs in a scenario where individuals seek arguments against their own beliefs. The researchers did not disclose that an AI was generating the comments, claiming that doing so would undermine the feasibility of the study. They asserted that while they did not write any comments directly, they manually reviewed each submission to ensure that no harmful statements were made.
In their apology, the researchers acknowledged that their actions violated CMV's community rules prohibiting AI-generated comments, but argued that the societal importance of the research justified the breach despite its ethical implications. Accounts involved in the experiment impersonated numerous identities, including a victim of sexual assault and a trauma counselor. In response to the controversy, all accounts associated with the study have since been suspended.
Experts in the field have expressed outrage. Casey Fiesler, an information scientist at the University of Colorado, labeled the experiment as “one of the worst violations of research ethics I’ve ever seen.” On the platform Bluesky, Fiesler emphasized that manipulating online communities through deception without consent poses significant risks and has already resulted in harm.
Sara Gilbert, the research director of the Citizens and Technology Lab at Cornell University, highlighted the potential damage to the CMV community itself, which has been a vital platform for public discussion and debate. She raised concerns about whether users would trust their interactions in the subreddit if they suspected they were engaging with bots rather than real individuals.
In a follow-up response, CMV moderator u/DuhChappers stated that the experiment clearly violated Reddit's rules against impersonation, noting that the accounts created to post AI-generated comments misrepresented themselves in a deceptive manner. The incident has drawn comparisons to previous research by OpenAI, which used data from r/ChangeMyView but did not experiment on non-consenting human subjects.
The moderators of CMV have filed a formal complaint with the University of Zurich's institutional review board. The university's Faculty of Arts and Sciences Ethics Commission investigated the matter and issued a formal warning to the principal investigator. Although the moderators requested that the university prevent the research from being published, they were informed that this decision falls outside the university’s jurisdiction.
The university's response emphasized that the project offers valuable insights and that the associated risks are minimal, characterizing the suppression of publication as disproportionate to the importance of the study’s findings. In a follow-up communication, moderator u/DuhChappers reiterated the subreddit’s willingness to collaborate with researchers who approach them transparently, contrasting this with the covert nature of the Zurich study.
The fallout from this experiment raises fundamental questions about research ethics, particularly the use of AI in online settings without participants' knowledge. As the CMV community grapples with the incident, trust remains the central theme of user discussions, and the broader impact of such studies on online discourse and community engagement is likely to remain a live concern.