OpenAI, the company behind the popular AI chatbot ChatGPT, has signed another news licensing deal in Europe. London's Financial Times is the latest publication being paid by OpenAI for access to its content.
As with OpenAI's previous publisher licensing deals, the financial terms of this one are not being disclosed.
The latest agreement between OpenAI and a publisher sounds a little cozier than other recent ones, such as those with Germany's Axel Springer, the AP, France's Le Monde, and Spain's Prisa Media. The two companies are calling it a “strategic partnership and licensing agreement.” (Then again, the CEO of Le Monde also called the “partnership” it announced with OpenAI in March a “strategic move.”)
We do know, though, that it’s not an exclusive license, and OpenAI is not investing in the FT Group in any way.
On the content licensing side, the two companies said OpenAI can use the FT’s material to train AI models and, where appropriate, to surface in generative AI responses produced by tools like ChatGPT. This mirrors OpenAI’s other publisher deals.
The strategic part appears to center on the FT learning more about generative AI, particularly as a content-discovery channel. The partnership is said to involve developing “new AI products and features for FT readers,” suggesting the news organization intends to put the technology to broader use.
The FT said in a press release, “Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes, and rich links to FT journalism in response to relevant queries.”
The publisher also said it became a customer of OpenAI’s ChatGPT Enterprise tool earlier this year, and that it wants to explore further uses of AI, while remaining wary of the reliability of automated output and the damage it could do to reader trust.
In a statement, FT Group CEO John Ridding said, “This is an important agreement in a number of ways,” adding that the deal recognizes the value of the FT’s award-winning journalism and will give the publisher early insight into how AI surfaces content.
He went on to say that, beyond the benefits to the FT, the deal has broader implications for the industry: publishers should be paid when AI tools use their work, and OpenAI understands the importance of transparency, attribution, and fair compensation, all of which matter to the FT. At the same time, he argued, it is clearly in users’ interest that these products draw on trustworthy sources.
Large language models (LLMs) such as OpenAI’s GPT, which powers the ChatGPT chatbot, are well known for fabricating information or “hallucinating.” That is the antithesis of journalism, where reporters work hard to verify the information they publish.
It’s no surprise, then, that journalism has been the focus of OpenAI’s first moves toward licensing training material. The AI giant may be hoping these deals help it rein in the “hallucination” problem: one part of the press release says the partnership will “help improve [OpenAI’s] models’ usefulness by learning from FT journalism.”
But there’s another big motivation at play: the risk of legal trouble over copyright.
The New York Times announced in December that it is suing OpenAI, alleging the AI company used its copyrighted content to train models without a license. Paying news publishers whose content was likely scraped from the Internet (or otherwise collected) to aid the development of LLMs is one way to head off further claims. (OpenAI, for its part, disputes the NYT’s allegations.)
Content licensing, in turn, brings in revenue for the publishers.
OpenAI told TechCrunch it has signed, or is close to signing, “about a dozen” publisher deals, with “many” more in the works.
Publishers might also pick up some new readers if ChatGPT users choose to click on the citations linking back to their material. On the other hand, generative AI could eventually reduce the use of search engines, diverting traffic away from news sites. Some outlets may therefore see closer cooperation with companies like OpenAI as a hedge against that risk.
Working with “Big AI” also carries reputational risks for publishers.
Last year, tech publisher CNET rushed to adopt generative AI as a content-production tool without clearly disclosing its use to readers. Journalists at Futurism then found numerous errors in machine-written articles CNET had published, further denting the company’s reputation.
The Financial Times has a long track record of quality journalism, so it will be interesting to see how it folds more generative AI into its products and/or newsroom workflows.
Last month it announced a GenAI tool for subscribers that essentially adds a natural-language search option across 20 years of FT content, a value-add designed to drive subscriptions to its human-produced journalism.
In Europe, too, privacy law compliance remains a legal headache for tools like ChatGPT, since it is still unclear how the rules apply.