Craig Peters is bullish on AI.
But the CEO of Getty Images, who is taking part in an innovation summit on artificial intelligence being held Wednesday at Cannes’ international TV market MIPTV, remains clear-eyed about the dangers generative AI could pose to the creative industries.
“What concerns me is that not everyone wants there to be more creators, some want the creators to be automated away. You saw that play out in the LA strikes last year,” says Peters, speaking to The Hollywood Reporter ahead of his MIPTV session. “Not everybody wants to eliminate the societal issues that can come from this technology, there are people who want to exploit this technology…That’s what keeps me up at night.”
Earlier this year, Getty filed suit in London against Stability AI claiming the open-source generative AI company unlawfully copied and processed millions of Getty’s copyright-protected images.
But the stock images giant is trying to be proactive as well, signing a deal with AI giant Nvidia to create AI text-to-image and text-to-video services with a generative model trained on Getty’s copyright-protected library of stock images. Peters argues the deal will both protect creators by ensuring compensation for use of their work, and guard against abuse. “This model can’t produce deepfakes, because it was trained on only a creative universe. It doesn’t know who Taylor Swift is. It doesn’t know who Joe Biden is,” he says.
In a wide-ranging discussion, Peters outlined his hopes for the future of AI within the creative industries while warning about the dangers if legislation does not keep pace with the speed of technological development. “We need to develop some standards [quickly] because there were more images created through AI last year than there were created through lens-based technologies.”
This might be an odd place to start, but I’d like to get your take on what happened around the doctored Kate Middleton photo, Getty and other photo agencies’ reaction to it in issuing a kill notice for the image and the debate it sparked among the general public. What lessons did Getty take from that incident?
Well, I won’t comment on the general public. I think that’s well-documented. In terms of us, one thing we learned was our processes work. We did identify the image as being doctored and enacted processes in order to pull that image from our service as well as to alert customers. That’s the good side. I think the other learning is that these established relationships between the media and organizations, in this case, the royal family, the monarchy, will I think require higher scrutiny going forward. We take handout imagery from the likes of the royal family. We take it from NASA, we take it from other organizations, we distribute it, and it is clearly labeled as such. But I think there needs to be more scrutiny on that.
In many cases, we need to slow down the path from them to our website and put on more scrutiny upfront. I think we got it right, I just think we could have probably got it right earlier. I think we and other outlets are going through that learning process, and I’ve already made adjustments to our processes.
Getty is collaborating with AI company Nvidia. You’ve launched a service to provide AI text-to-image generation of stock photos with an AI engine trained on the Getty archive. This could seem to be a kind of Faustian bargain. Why did you do it? Aside from initial profits for Getty from the deal, what benefits do you see for the whole ecosystem that will come from this?
The why is we didn’t view it as a Faustian bargain. We believe AI is something that’s here. It’s real and it’s going to be transformative. It is not a technology in search of an application, something like Bitcoin. This is real. It’s gonna be transformative along the lines of the PC, you know, or mobile devices, smartphones. I don’t think it’s something you can put in a box. We don’t think it’s something that you can ignore. And we think it can be highly beneficial to creators. It can be very negative for creators as well. I think we’ve yet to figure out where this is going to fall.
But ultimately, we want this technology to be beneficial to creators. And so with Nvidia, we found a partner that was willing to respect the intellectual property of Getty Images, and our creators to jointly create a service that was trained on licensed data, most notably our content, giving an ongoing share of revenues that are generated by this service back to those creators whose work it was trained on. So ultimately the utilization of AI benefits the creators and their IP that it was trained on. It’s a commercial-first service and commercially-safe service. This model can’t produce deepfakes, because it was trained on only a creative universe. It doesn’t know who Taylor Swift is. It doesn’t know who Joe Biden is. It doesn’t know who the Pope is. It can only produce commercial outcomes that can be utilized by businesses in their sales, marketing, etc.
We thought it was important to demonstrate that these services could be developed off of licensed material, that they could be high quality, and that they don’t have to come with all of the social harms, the collateral harms that can come with releasing this technology into the world. We thought it was commercially safe and socially responsible, and that it helped creators rather than harming them.
That’s the reason we jumped in with Nvidia on this. So we don’t view it as a Faustian bargain at all. We actually think it stands as a pretty unique model relative to the other models out in the marketplace that didn’t license training data, don’t compensate content creators for the use of that data, and don’t put necessary controls around the use of their tools or what those tools can do. Those represent breaches of third-party intellectual property, breaches of privacy rights, breaches, ultimately, of social and legal norms.
Can you take me through how the compensation process will work?
They’re based on two proxies. There’s not great technology to follow it pixel by pixel at this point. Maybe that is something that will develop over time. We basically compensate on the quantum that you have within the training set. So if you’re one of a hundred items, you get 1/100th. And then there’s also the performance of your imagery generally, which is kind of a quality proxy for us. So if your imagery is licensed more off of our platform, that’s a good proxy for quality in our view, and you’ll share more in those profits.
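As a rough illustration of the two proxies Peters describes, a creator's share of the revenue pool could be computed from their quantum of the training set weighted by how often their imagery licenses. This is a hypothetical sketch; the function name, the multiplicative weighting, and the inputs are assumptions, not Getty's actual formula:

```python
def creator_payouts(pool, contributions):
    """Split a revenue pool across creators using two proxies:
    - quantum: how many training-set items are theirs
    - performance: how often their imagery is licensed (quality proxy)
    `contributions` maps creator -> (item_count, license_count).
    Weighting scheme is an assumption for illustration only.
    """
    # Weight each creator by items contributed times licensing performance.
    weights = {
        name: items * licenses
        for name, (items, licenses) in contributions.items()
    }
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

# Two creators contribute equal quantums, but alice's imagery
# licenses twice as often, so she earns a larger share.
payouts = creator_payouts(1000.0, {"alice": (50, 2), "bob": (50, 1)})
```

Here alice receives two-thirds of the pool and bob one-third; with equal licensing performance, the split would reduce to the pure 1/100th-per-item quantum Peters mentions.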
Are the images being created through this system, the original AI-generated images, copyright-protected?
That’s an open question. We don’t circulate these images that were created back into our library. If you’re prompting the images, you are actually a participant in the creation of them, and therefore we believe you have some level of ownership in that content. But we also want our library to be clean. As for whether this content can be copyrighted: right now, at least in the U.S., the U.K. and the EU, the answer is no. I think over time we are going to get to some level where human endeavor involved in AI creation will be compensated in some way. Right now copyright can only be assigned to a human creation, not an artificial creation. I think there’ll be some blurred line that gets drawn in the future, but right now the line is pretty hard that you can’t copyright this material.
Will all the material created by this system be labeled as created by AI? How will it be watermarked?
The watermarking technology is still kind of being developed. It’s one of the concerns we had with technology firms that were just racing out with these technologies and didn’t think through the requirements of that. We put in the metadata that was used to create it. We’re trying to get standards developed to have a better watermark, a kind of immutable, permanent way to identify this content. Right now some are putting on watermarks but most are putting no labeling at all. Some are putting just a visual kind of cue in the lower right that can be easily cropped out. So we need to develop some standards around that. Because there were more images created through AI last year than there were created through lens-based technologies. It’s going to be pretty important for the public and others to be able to discern true images from AI.
How confident are you that you’ll win this race? AI is already outpacing you in terms of production.
I think there are people that care about this issue, in government and on the regulatory side of things. I think there are people in the technology industry that care about this. And I think there are people in the media industry that care about it. I think if we come together with good intent we can solve it. If we can create these technologies to produce this type of content, we can create technologies that can identify this type of content. I’m pretty sure about it. We just need the rewards and incentives, the structure in place to do so.
But that’s the rub, isn’t it? It doesn’t seem to be purely a technology problem, but more a political problem.
I think it’s a political/regulatory problem. In the political case, you’ve got the EU’s AI Act that actually puts some requirements in place, though many still need to be defined. There are discussions in other jurisdictions around the globe, but legislation lags behind the technology. So am I confident? Not 100 percent. But do I have a drive to achieve this? Yes, and that’s where Getty is trying to lead the way within the creative industry and within the media industry, trying to lead in terms of setting the parameters by which we think this technology can be beneficial to the creative industries and also beneficial to society as a whole.
Where do you see the real benefits in the particular service that you’re offering?
Well, it fits into our core value proposition that we’ve always delivered to our customers. Our content has been used in theatrical releases and series, and it allows creators to create at a high level. In some cases they can rely on our documentary footage or other things to tell stories in ways that they couldn’t do otherwise. In some cases, it saves them time, or it’s a lot cheaper and easier and quicker to rely on a preset library than to go off and do production. So we can be time efficient. We can be budget efficient. And we avoid intellectual property risk. Intellectual property rights vary around the world and can be quite complex, and we can eliminate that risk for our customers.
And this tool allows people to ideate and create in new ways. Not everything you can imagine can be shot through a camera. In some cases, these tools can allow you to imagine new images. In other cases, they allow you to do things more quickly or easily than existing tools. A lot of this can be done in software like Photoshop or other editing platforms. But it can take time working pixel by pixel. You can automate that and save time. And because we free you from intellectual property risk, you don’t have to worry about what this system was trained on, who it was trained on, about the names and likenesses of the people and their private data. It allows you to create more freely. It’s a tool.
You know, originally there was a big tool named Avid, where you’d spend hundreds of thousands of dollars on a system to edit video. That moved into Final Cut Pro, which allowed you to do the same thing on a richer basis at a much lower cost. This is a similar kind of democratization that enables people to create. Ultimately that’s what we’re hoping for by putting this service into the world, that it will enable more creativity and allow people to do their work more efficiently.
What about your original creators, your photographers and videographers, the creators that are the basis of Getty’s entire business? Doesn’t this tool risk undermining their whole profession?
I don’t think so. I think our creators have a tremendous amount of creativity. I think they have a tremendous amount of experience and understanding of how to create content that really resonates with an audience. There’s a lot of expertise that goes into that. It’s not easy to do. Fifteen or sixteen years ago, when the smartphone came out, everyone could take a picture, but not everyone can take a meaningful picture, not just a high-quality picture, but a meaningful one, one that you’d want to use with your brand or on your website to promote your products and services. That’s the scarcity that still continues to exist. If anything, I think AI makes the creators more important, because when everybody can create as much imagery as they want, it becomes harder and harder to stand out.
How concerned are you that at the moment we don’t have regulations or agreements around the big platforms which are the ones that are actually disseminating this material, and in many cases are funding AI programs that are producing it en masse?
I think clearly they are going to be regulated by the EU’s AI Act, and in the United States, they’ve made commitments to the White House that they will implement some of those technologies and standards. The specifics still need to be worked out. But Meta started adjusting some of their editorial policies and takedown policies with respect to generative imagery and modified imagery. I think those are steps in the right direction. But there are other companies that have done nothing and haven’t made any changes. It will take regulators stepping in and making noncompliance detrimental to their bank accounts and their ability to do business in certain territories. This is not something that is going to go away. Where it gets more dangerous is in some of the open-source models, and some of the smaller companies like Stability AI, that have put this technology out there with no controls around it and no commitments to respect copyright. We still need more regulatory and legislative action. But I think the large tech platforms understand this and are moving in that direction. We probably just need a firmer push, with more specifics.
What personally scares you about this technology and its potential for harm?
I wouldn’t say scares, but what concerns me is that not everyone wants there to be more creators, some want the creators to be automated away. You saw that play out in the LA strikes last year and in the negotiations. Not everybody wants to eliminate the societal issues that can come from this technology. There are people who want to exploit this technology. Those are my concerns. I think this technology can be incredibly beneficial to society, but if it’s not harnessed correctly, if it’s not managed correctly, it could be quite detrimental. Those are the things that keep me up at night. The unknown, the uncertainty, and the broader social issues that come from this technology.