Publishers advocate ethical use of ChatGPT to inspire “new ideas and approaches”
The debate about ChatGPT, OpenAI’s language model that responds to user questions and comments with contextualised answers, is taking hold in newsrooms. The speed at which the tool is advancing is forcing media groups to decide quickly whether to use it and, if so, how.
This was the question that several editors addressed last week at the conference “Experiences with Artificial Intelligence in the Media”, organised by CLABE and held at the CEOE headquarters in Madrid. Pilar Bernat, editor of Zona Movilidad, defended the usefulness of the virtual assistant for journalists, even while acknowledging how difficult it is for newsrooms to invest in technology, and stressed that it should be understood “as a tool and not an end”. According to the experts, that statement dismantles the thesis that artificial intelligence will replace journalistic work.
When ChatGPT is asked about its advantages and disadvantages for journalists’ work, it replies, in short, that “as a language model based on artificial intelligence, it can be useful for journalists in generating ideas, writing and editing content, and accessing and analysing large amounts of data. However, there are also drawbacks, such as data bias, lack of context and editorial liability, which journalists should consider when using the tool”. It also emphasises the importance of journalists verifying the accuracy and quality of the information generated by the model before publishing it.
Editors defended an “ethical use” of the technology to inspire “new ideas and approaches”, understanding its opportunities but also its limitations. Adrián Beloki, content director of the communication group Peldaño, said the model is already being adopted responsibly across the group’s 13 titles, since ChatGPT is “incapable of detecting bias in information and lacks discernment, and sometimes picks up totally erroneous information. It does not verify facts. For the journalist, the creation process may be faster, but their role as content gatekeepers remains intact”.
Another drawback identified is that the tool does not cite sources, and that it makes it easier for journalists to engage in malpractice by asking for versions of news stories already published by other media. Speakers at the event raised, for example, the possibility that ChatGPT could translate and restructure an article from The New York Times without a reader who had also read the original being able to tell that the version was produced by this technology. In such a case, the experts recommended, it would be essential to cite the source.
This technological revolution continues to attract supporters, detractors and sceptics within the profession. Among the doubts raised were how to handle the intellectual property debate and whether other forms of regulation will be applied, given how far the pace of legislation lags behind that of technology.
Benefits
Among the benefits of OpenAI’s technology, the professionals highlighted its capacity to generate ideas “at times when they are faced with a blank page”, to create headlines or headings from supplied texts, to suggest social media copy from news stories, and to produce new versions of press releases, complementing them with other information in order to stand out.