Media calls for regulation of AI use to protect their work
The conversation about Artificial Intelligence (AI) is taking hold in newsrooms with as many certainties as questions. Experts in this technology and renowned journalists who have spoken out in recent weeks agree that AI, on the one hand, opens a window of opportunity.
In their view, AI “helps eliminate repetitive tasks that do not add value and take up a lot of time” and “is able to offload routine work from newsrooms, so that journalists can concentrate on developing their stories”. These were some of the messages delivered yesterday in Madrid by specialists from the software company Protecmedia, which uses AI to help speed up work in newsrooms, and by recognised players in the communications sector.
However, amid this unprecedented technological disruption, newsrooms also face challenges that call for regulation capable of protecting the interests of news companies, the work of their journalists and the quality of the information they offer. Although the European institutions have this issue on the table, the media remain sceptical.
AMI CEO Irene Lanzaco, who describes the technology as “a very big danger for the media”, laments the stark contrast in liability between technology companies developing generative AI tools – such as ChatGPT – and media groups.
Lanzaco argues, on the one hand, that when media outlets use AI tools to help carry out assignments, “we will be subject to full liability for all information published under our news brands”. On the other hand, the technological tools, which “feed on the work of editors and the talent of journalists”, not only fail to pay the media for that work, but also “do not guarantee the reliability of the answers to the queries posed by citizens, who are exposed to erroneous information presented as true”.
The newspaper industry is demanding greater accountability and transparency from technology companies and warns that AI will lead to a loss of digital traffic.
“Technology companies will reap the rewards of generative AI without the responsibility associated with generating untruthful content, because unlike the media, platforms are exempt from that responsibility,” she said.
In addition to the lack of accountability on the part of these companies and the rise of misinformation in society, the press association’s spokeswoman pointed to other problems AI is causing, such as the “loss of traffic on our websites, and of opportunities for advertising monetisation, because citizens will get their first answers through this technology”.