According to reports from TechCrunch, Google plans to roll out new features in Search, Google Lens, and Circle to Search that flag AI-generated and AI-edited images, marking a significant step toward bringing transparency to AI-made visual content.
AI image generation has taken center stage in the past few years of AI development, met initially with awe and eventually with outrage. The ability to create striking photos and artworks by typing a few well-chosen words, or by uploading existing pictures as references, has blurred the definition of real art. Now anyone with a device and access to an AI tool can conjure a masterpiece from scratch without any artistic training. Countless artists and photographers have complained that artificial intelligence scrapes their work to form amalgamations that others then pass off as their own. The lack of regulations against AI use in most countries enables this blatant copyright infringement, and cases are swept under the rug more often than not. With businesses and clients adopting AI images instead of hiring skilled professionals for creative roles, the problem has grown more severe: AI is also taking jobs from actual creators.
Aside from the flak it has received from art communities, AI has also come under scrutiny from the general public after the onslaught of deepfakes. The technology, which can swap one individual's face with another's, seemed like fun at first but has become genuinely alarming in recent years. It started off harmlessly enough, powering silly photos and videos in which you switch faces with friends, family members, or beloved celebrities. It took a drastic turn, though, when scammers began impersonating well-known personalities to spread misinformation and propaganda. Since 2023, AI-driven content scams have reportedly risen by 245%, costing as much as $12.3 billion; if the trend continues, losses are projected to reach $40 billion within four years.
Google has finally decided to step up and listen to the overwhelming grievances against AI. The tech giant announced that it will distinguish AI-generated photos from authentic images created by humans. It will hunt for photos that are either fully generated by AI or edited with an AI tool, and it will mark every identified image as AI-made in the "About this image" window, which is visible in Google Search, Google Lens, and Android's Circle to Search. There is talk that the labeling will eventually extend to more of Google's platforms, such as YouTube, but that will have to wait for future updates.
Unfortunately, this protective measure, which should ease creators' worries, has its limitations. Search can only recognize that an image was made with AI if the image carries C2PA metadata. The Coalition for Content Provenance and Authenticity (C2PA) defines technical standards that record a picture's history and the tools used to create it. Google can read these details and tell whether AI software was involved in producing a given image. Without that metadata, however, Search has no way of knowing whether a photo was truly created by AI.
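To make the mechanism concrete: the C2PA specification embeds provenance manifests in JPEG files as JUMBF boxes inside APP11 (0xFFEB) marker segments. The sketch below (the function name `has_app11_segment` is illustrative, not Google's actual pipeline) shows how a crawler might check whether an image even carries such a segment; it does not parse or cryptographically verify the manifest itself.

```python
import struct

# Sketch only: per the C2PA spec, JPEG files carry C2PA manifests in
# APP11 (0xFFEB) marker segments as JUMBF boxes. This checks for the
# *presence* of such a segment; it does not validate the manifest.

def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments; return True if an APP11 segment exists."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI: not a JPEG stream
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost sync with markers
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                      # SOS: image data follows
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == 0xEB:                      # APP11 found
            return True
        i += 2 + length                         # skip marker + payload
    return False

# Synthetic example: SOI followed by a tiny APP11 segment.
sample = b"\xff\xd8" + b"\xff\xeb" + struct.pack(">H", 6) + b"JUMB"
print(has_app11_segment(sample))  # True
```

An image that never had the segment written, or whose metadata was stripped in transit, fails this check, which is exactly the blind spot described above.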
Amazon, Microsoft, OpenAI, and Adobe have adopted C2PA as well. Yet even with major companies beyond Google backing the standard, the fact remains that it is not widely used. Only some cameras and AI tools have integrated C2PA into their systems, owing to technical and operational hurdles.
Suppose an AI-generated image does carry C2PA metadata. Will it automatically be flagged? Yes, if the metadata is intact; no, if someone went to the trouble of removing the part recording that the photo came from an AI image generator. Tech-savvy users can slip past Google's watchful eye as long as they know how to tamper with metadata: scrub the information out, or render it unreadable, and Search will no longer label the photo as AI-generated.
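The scrubbing point is easy to demonstrate. Assuming (per the C2PA spec) that the provenance manifest lives in JPEG APP11 (0xFFEB) segments, copying every marker segment except APP11 yields a file with identical pixels and no provenance record. A minimal sketch, with `strip_app11_segments` as a hypothetical name rather than any real tool:

```python
import struct

# Sketch of why metadata scrubbing defeats C2PA-based detection:
# assuming (per the C2PA spec) the manifest lives in JPEG APP11 (0xFFEB)
# segments, copying every segment *except* APP11 yields a stream with
# identical image data but no provenance record.

def strip_app11_segments(jpeg_bytes: bytes) -> bytes:
    """Return a copy of the JPEG stream with all APP11 segments removed."""
    out = bytearray(jpeg_bytes[:2])             # keep the SOI marker
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost sync with markers
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                      # SOS: stop segment parsing
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker != 0xEB:                      # copy everything but APP11
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    out += jpeg_bytes[i:]                       # image data passes through
    return bytes(out)

# A stream holding only an APP11 segment collapses to a bare SOI marker.
sample = b"\xff\xd8" + b"\xff\xeb" + struct.pack(">H", 6) + b"JUMB"
print(strip_app11_segments(sample) == b"\xff\xd8")  # True
```

Because nothing in the pixel data changes, the scrubbed copy is visually indistinguishable from the original, which is why detection that relies solely on attached metadata is so easy to defeat.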
Having a system in place to detect AI images is great, but there are still plenty of loopholes for persistent users to slip through. One can argue this is better than nothing, yet Google has the resources, technology, and engineering talent to build stronger defenses against such fraud. AI has had largely free rein until now; more regulations should be enforced so the technology is put to good use rather than abused. The public's concerns about AI are valid and must be addressed, and large corporations should weigh people's sentiments, not just their commercial interests. This may be the first step, so we hope to see Google and other leading AI companies take the next one.