Apple's AI System Sparks Controversy Over False BBC News Attribution

Reporters Without Borders (RSF) has called on Apple to remove its recently launched Apple Intelligence feature after it falsely attributed fabricated news to the BBC, raising serious concerns about AI's impact on journalistic integrity.

The incident occurred just two days after the feature's UK launch on December 11, when Apple's generative AI tool created a completely false news alert, attributed to the BBC, claiming that Luigi Mangione, the suspect in the murder of a healthcare executive, had killed himself.

The BBC promptly filed a complaint with Apple over the misattribution, which has sparked broader discussion about the reliability of AI systems in handling news content.

Vincent Berthier, who heads RSF's Technology and Journalism Desk, emphasized that AI systems operating on probability models are fundamentally incompatible with fact-based journalism. "AIs are probability machines, and facts can't be decided by a roll of the dice," Berthier stated.

RSF pointed out a critical gap in current regulations, noting that even the European AI Act, considered the world's most advanced AI legislation, does not classify information-generating AI systems as high-risk, leaving them largely unregulated.

The organization, which recently introduced the Paris Charter on AI and Journalism to protect the integrity of journalism in the AI era, warns that such AI-generated misinformation poses a direct threat to media credibility and the public's access to reliable news.

The incident highlights growing concerns about the premature deployment of AI tools in news dissemination, particularly when they can generate false information and attribute it to respected news organizations without proper safeguards.