As political leaders and technology bosses gathered at the UK’s AI Safety Summit at Bletchley Park outside London on Wednesday, representatives of the creative industries debated the implications of generative AI technology for their sectors at the parallel AI Fringe in central London.
While the Bletchley Park agenda was focused on the existential threats posed by AI technology, the creative industries panel tackled the more immediate impact that generative programs, such as ChatGPT, DALL-E and Stable Diffusion, are having on people who make their living from human creativity.
A key talking point was how original works have been used to train generative AI tools without the creators’ permission or any payment for the value created by their use.
“We recently found that 183,000 books have been used to train some of these large language models,” said Nicola Solomon, chair of the Creators’ Rights Alliance, which represents 22 creator groups in the UK with more than 500,000 members between them.
“The authors weren’t asked if those books could be fed in, they weren’t paid anything and yet their works are being used to train these models and to create the new and exciting works that come out of them,” she continued.
“We know that millions and millions of works of visual artists, of performers, of musicians and of other creators have been used to create these models… with the possibility that they will compete with their own work, but they won’t get payment, or credit, and don’t have the possibility to say no.”
Solomon also cited a recent KPMG study estimating that 43% of tasks performed in writing-based professions will be automated by generative AI.
Liam Budd, industrial official at UK performers’ union Equity, said the union was trying to strengthen members’ rights around synthetic media productions, which are a growing source of work.
“Our members are often just engaged on a one-off fee… For them to have their likeness cloned, for a digital double to then be used forever in perpetuity… almost makes their work potentially redundant going forward and limits their ability to make a living,” explained Budd.
He said Equity was devising new ethical contract standards that would put members in a stronger negotiating position when offered work on a synthetic media production.
“It’s a really difficult landscape and we’re at the start of this journey but we’re trying to empower members,” he said.
He added that Equity is also lobbying for changes to UK law and that the union wants the UK to introduce “right to own image” rights, similar to those that exist in the U.S. states of New York and California.
Isabelle Doran, CEO of the Association of Photographers, said it was not easy in practice for creators to opt out of having their work used by generative AI tools, or to license its use.
“From a technical side, I don’t think we’ve yet arrived at a complete solution with regards to being able to protect work that is being scraped because it’s being scraped on such a massive scale,” she said.
Doran cited the example of the image-generating AI tools Midjourney and Stable Diffusion, which were trained on the open, large-scale dataset LAION-5B.
“That dataset is five billion images. That is pretty much the entire internet,” she said, noting that since Midjourney and Stable Diffusion launched a year ago, they had generated 15 billion synthetic images between them.
“That’s about the same sum as humans have been able to create in history,” she said.
Doran continued that even when creators signaled that they were not giving permission for their work to be “scraped”, it was near impossible for them to find out whether their content had been fed into generative AI programs.
Gianluca Sergi, a professor of film industries, said governments needed to look five to ten years into the future to build policies that protect the human creative industries.
He suggested, as an example, that the UK government could tie the country’s tax incentive for incoming productions to guarantees that local creatives would be employed on a production rather than replaced by AI.
“There are immediate concerns for the individual… but there are also much bigger conversations around policymaking. Unless people begin to think about it now, we may end up getting locked in. This is why it’s important to look at it as a crisis much more widely. It’s not just the one person, it’s everybody.”
Moiya McTier, an advisor to the Washington and Austin-based Human Artistry Campaign, a worldwide consortium of more than 170 organizations representing the rights of creators and sportspeople across a variety of disciplines, also joined the discussions.
Officially launched at SXSW last March, the body has drawn up seven core principles aimed at encouraging the development of ethical AI, hinging on artist permission, data transparency and fair payment for the value created from the use of original works.
Revealing that the consortium has its roots in the music industry, she said the wider creative community could apply lessons learned by the sector when it dealt with the disruption of streaming.
“They have a model that could be applied across all types of media: the licensing model that music currently has in place so that you can sample someone else’s work, or use someone else’s lyrics if you want to cover a song, for example,” she said.
“That is a really effective model of getting another artist’s permission before you use their work and then having a system to fairly compensate everyone involved.”
McTier said that without effective frameworks in place, both jobs and art would be in peril.
“Art is really important to our function as a thriving society,” said McTier. “Our society is better when there’s art around, and generative AI is already kind of deteriorating the art landscape. We can see that with voice clones and image clones; audiences can lose faith in the content and the art that they consume.”