Google’s New Tool Aims to Make AI-Generated Images More Traceable
For years, the Google DeepMind team has held that building powerful generative AI tools requires building equally strong capabilities for detecting what AI has made. The reasons are apparent and high-stakes, according to Google DeepMind CEO Demis Hassabis. “Every time we talk about it and other systems, the question of deepfakes comes up.” With another heated election season coming in both the US and the UK in 2024, Hassabis believes that developing systems to recognize and detect AI imagery is becoming increasingly vital.
Hassabis and his team have been working on a tool for several years, which Google is now making public. It’s called SynthID, and it’s intended to watermark an AI-generated image in a way that is undetectable to the human eye but detectable by a dedicated AI detection tool.
The watermark is embedded in the image’s pixels, although Hassabis says it has no discernible effect on the image itself. “It doesn’t change the image, its quality, or the experience of it,” he explains. “However, it’s resistant to various transformations — cropping, resizing, and all of the other things you might do to try to avoid normal, traditional, simple watermarks.” According to Hassabis, as SynthID’s underlying models improve, the watermark will become even less noticeable to humans but more easily identified by DeepMind’s detection tools.
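Google isn’t disclosing how SynthID actually works, but the general idea of a watermark hidden in pixel values, invisible to people yet recoverable by software that knows where to look, can be illustrated with a deliberately simplified toy. The sketch below hides a bit pattern in the least significant bits of pseudo-randomly chosen pixels. This is an illustrative assumption, not Google’s method: unlike SynthID, it is not robust to cropping, resizing, or recompression, and the function names are invented for this example.

```python
import numpy as np

def embed_watermark(image, bits, seed=0):
    """Toy watermark: hide `bits` in the least significant bits of
    pseudo-randomly chosen pixels. NOT SynthID's actual algorithm."""
    rng = np.random.default_rng(seed)  # shared secret: the seed
    flat = image.flatten().copy()
    # Pick one pixel position per payload bit, deterministically from the seed.
    positions = rng.choice(flat.size, size=len(bits), replace=False)
    # Clear each chosen pixel's lowest bit, then write the payload bit into it.
    flat[positions] = (flat[positions] & 0xFE) | np.array(bits, dtype=flat.dtype)
    return flat.reshape(image.shape)

def detect_watermark(image, n_bits, seed=0):
    """Recover the hidden bits using the same seed the embedder used."""
    rng = np.random.default_rng(seed)
    flat = image.flatten()
    positions = rng.choice(flat.size, size=n_bits, replace=False)
    return (flat[positions] & 1).tolist()

image = np.zeros((8, 8), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 1]
watermarked = embed_watermark(image, bits)
print(detect_watermark(watermarked, len(bits)))  # recovers the payload
```

Because each pixel value changes by at most 1 on a 0–255 scale, the mark is imperceptible to the eye, which captures the “detectable by a tool, not by humans” property Hassabis describes, even though a production system robust to the transformations he lists would work very differently.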
That’s as technical as Hassabis and Google DeepMind will get for the time being. Because SynthID is still new, even the launch blog post is light on details. “The more you reveal about how it works, the easier it will be for hackers and other bad actors to get around it,” Hassabis argues. SynthID will initially be available to Google Cloud customers who use the company’s Vertex AI platform and its Imagen image generator. Hassabis expects that once the system is put through more real-world testing, it will improve. Then Google will be able to use it in more places, share more information about how it works, and collect even more data.
Hassabis appears to be hoping that SynthID will eventually become an internet-wide standard. The fundamental concepts might even be applied to other forms of media such as video and text. After Google has validated the technology, “the question is scaling it up, sharing it with other partners who want it, scaling up the consumer solution — and then having that debate with civil society about where we want to take this.” He repeatedly emphasizes that this is a beta test, a first attempt at something new, “and not a silver bullet to the deepfake problem.” But he plainly believes it has the potential to be massive.
Of course, Google is not the only company with this goal. Far from it. Just last month, Meta, OpenAI, Google, and numerous other major names in AI vowed to build more safeguards and safety procedures into their AI. A number of companies are also collaborating on the C2PA protocol, which uses cryptographically signed metadata to tag AI-generated content. In many ways, Google is playing catch-up with all of its AI tools, including detection. And it appears that we will have far too many AI-detection standards before we find ones that actually work. But Hassabis is certain that watermarking will be part of the solution across the web.
SynthID will be unveiled at Google’s Cloud Next conference, where the company will brief business clients on new capabilities in Google’s Cloud and Workspace products. According to Thomas Kurian, CEO of Google Cloud, use of the Vertex AI platform is exploding: “The models are getting more and more sophisticated, and we’ve had a huge, huge ramp in the number of people using the models.” That growth, along with improvements to the SynthID system, convinced Kurian and Hassabis that now was the time to launch.
Customers are concerned about deepfakes, but they also have much more commonplace AI-detection needs, according to Kurian. “We have a lot of customers who use these tools to create images for ad copy,” he explains, “and they want to verify the original image because many times the marketing department has a central team that actually creates the original blueprint image.” Retail is another significant one: some shops are using AI tools to produce descriptions for their massive product catalogs, and they need to ensure that the product photos they’re uploading don’t get mixed up with the generated images they’re using for brainstorming and iteration. (You may have previously seen DeepMind-created descriptions like these on shopping websites and in places like YouTube Shorts.) These uses may not be as shocking as fake Trump mugshot images or a strutting Pope, but they’re the ways AI is already being used in business.
Beyond whether the system works, Kurian says he’s interested in how and where people will want to use SynthID once it rolls out. For example, he believes Slides and Docs will need SynthID integration. “When using Slides, you want to know where you’re getting your images from.” But where else could it go? SynthID, according to Hassabis, might eventually be offered as a Chrome extension or even built into the browser to recognize generated images all across the web. But supposing that happens: should the tool flag everything that might be generated, or wait for a query from the user? Is a big red triangle the proper way to convey “this was made with AI,” or should it be more subtle?
Kurian argues that there may eventually be a plethora of user experience alternatives. He believes that as long as the underlying technology functions consistently, people will be able to choose how it appears. It could even differ by topic: you may not care whether the Slides background you’re using was created by humans or AI, but “if you’re in hospitals scanning tumors, you really want to make sure that wasn’t a synthetically generated image.”
The release of any AI-detection technology inevitably sparks an arms race. In some cases, the race has already been lost: OpenAI has already given up on a tool designed to recognize text authored by its own ChatGPT chatbot. If SynthID becomes popular, it will simply inspire hackers and developers to find inventive ways around the system, forcing Google DeepMind to improve the system, and so on. Hassabis responds, with only a hint of resignation, that the team is prepared. “It will probably have to be a live solution that we have to update,” he says, “more like antivirus or something like that. You’ll always have to be on the lookout for new types of attacks and transforms.”
For the time being, that remains a distant worry, because Google controls the entire initial pipeline of AI image creation, use, and detection. However, DeepMind designed this with the whole internet in mind, and Hassabis says he’s prepared for the lengthy process of getting SynthID everywhere it needs to be. But then he stops himself: “One thing at a time. It would be premature to think about scaling and civil society debates until we’ve proven that the foundational piece of technology works.” That is the first task, and the reason SynthID is launching today. If and when SynthID or something like it works, we can start to figure out what it means for life online.