At the TechCrunch Disrupt 2024 conference, held October 28–30 at Moscone West in San Francisco, three leading experts debated how to curb misinformation created by artificial intelligence (AI).
Imran Ahmed, chief executive of the Center for Countering Digital Hate (CCDH), emphasized that AI has fundamentally distorted the information landscape. At virtually zero cost, AI can create and distribute large volumes of inaccurate information, producing a “bullshit machine” that runs constantly. Ahmed compared the scale of the problem to an information “arms race,” one that has pushed the reach of influence operations to unprecedented heights.
Brandie Nonnecke, director of the CITRIS Policy Lab at UC Berkeley, argued that social platforms have not taken effective action. She pointed out that self-regulation and transparency reports, such as announcements of mass removals of harmful content, can create a false sense that the problem is being addressed.
Nonnecke added that platforms need to improve their review processes so they do not miss harmful content that is already spreading widely.
Pamela San Martin, co-chair of Meta’s Oversight Board, agreed that social media companies have not done enough, but cautioned against writing off AI entirely, given its many potential benefits.
San Martin said AI could bring particular benefits to election communications, but warned that if extreme measures are taken to block it out of fear, society risks missing out on the technology’s positive value.
The discussion at TechCrunch Disrupt 2024 highlighted the urgent need for strong measures to regulate AI-generated content, and recommended that social media platforms reconsider their approach to the issue.
The experts emphasized that, alongside measures to prevent misinformation, it is necessary to ensure that AI applications can still benefit society in a safe and transparent manner.