‘Are We Waiting for S**t to Hit The Fan?’: Former Google Safety Lead Warns of AI Chatbots Writing News

SmartNews Trust and Safety Lead Arjun Narayan says news organizations must develop "first principles" and ensure transparency before using AI writers.

In a few short months, the idea of convincing news articles written entirely by computers has evolved from a perceived absurdity into a reality that's already confusing some readers. Now, writers, editors, and policymakers are scrambling to develop standards to maintain trust in a world where AI-generated text will increasingly appear scattered across news feeds.

Major tech publications like CNET have already been caught with their hand in the generative AI cookie jar and have had to issue corrections to articles written by ChatGPT-style chatbots, which are prone to factual errors. Other mainstream institutions, like Insider, are exploring the use of AI in news articles with notably more restraint, for now at least. On the more dystopian end of the spectrum, low-quality content farms are already using chatbots to churn out news stories, some of which contain potentially dangerous factual falsehoods. These efforts are admittedly crude, but that could quickly change as the technology matures.