“We've had Photoshop for 35 years” is a common response to concerns about generative AI, and you're probably here because you made that argument in a comment thread or on social media.
There are countless reasons to be concerned about how AI image editing and generation tools will affect our trust in photographs, and how that trust (or lack thereof) could be used to manipulate us. And we know it's already happening. So, to save us all time and energy – and to save our fingers from wearing out by typing the same responses over and over again – we're putting all the arguments together in one list in this post.
Sharing it will be much more efficient – just like AI! Isn't that neat?
The argument: “You can already manipulate images like this in Photoshop”
If you've never gone through the process of manually editing photos in an app like Adobe Photoshop, this argument is easy to make, but it's a hopelessly oversimplified comparison. Let's say a bad actor wants to manipulate an image to make it look like someone has a drug problem – here's what they'd need to do:
- Have access to (potentially expensive) desktop software. Sure, there are mobile editing apps, but they're not really useful for much more than minor tweaks like skin smoothing and color adjustments. So, you'll need a computer for this work – a pricey upfront investment. And while some desktop editing apps are free (GIMP, Photopea, etc.), most professional-level tools aren't. Adobe's Creative Cloud apps are the most popular, and the recurring subscription ($263.88 per year for Photoshop alone) is notoriously difficult to cancel.
- Find suitable images of drug-related paraphernalia. Even if you have some on hand, you can't just slap any old image on top and expect it to look right. You have to account for the lighting and perspective of the photo they're being added to, so everything matches. For example, reflections on a bottle should hit at the same angle as the rest of the scene, and objects photographed at eye level will look obviously fake if they're inserted into an image shot from a higher angle.
- Understand and use complex editing tools. Any inserted object must be cut out from its original background and then blended seamlessly into its new environment. That may mean adjusting colour balance, tone, and exposure levels, smoothing edges, or adding new shadows or reflections. It takes both time and experience to make the results look convincing, let alone natural.
Photoshop has some genuinely useful AI tools that make this process easier, such as automatic object selection and background removal. But even with them, convincingly manipulating an image still takes a lot of time and energy. By contrast, here's everything The Verge editor Chris Welch had to do to achieve the same results using the “Reimagine” feature on a Google Pixel 9:
- Launch the Google Photos app on your smartphone. Tap an area, and ask it to add “a medical syringe filled with red liquid,” “thin lines of crumpled chalk,” alcohol, and rubber tubing.
That's it. A similarly easy process exists on Samsung's newest phones. The skill and time barriers haven't just been lowered – they're gone. This Google tool is also unnervingly good at blending generated content into images: lighting, shadows, opacity, and even focal points are all taken into account. Photoshop itself now includes an AI image generator, too, and its results often aren't nearly as convincing as what this free Google app on an Android phone spits out.
Image manipulation techniques and other methods of fakery have existed for around 200 years – almost as long as photography itself. (See: 19th-century spirit photography and the Cottingley Fairies.) But we don't scrutinise every photo we see, precisely because of the skill and time investment those manipulations required. Throughout the history of photography, manipulated photos have been rare outliers. The simplicity and scale of AI on smartphones, though, means any fool can churn out manipulated photos at a frequency and scale we've never experienced before. It should be obvious why that's worrying.
The argument: “People will adapt to it and it will become the new normal”
Just because you supposedly have the ability to spot a fake image doesn't mean everyone can. Not everyone hangs out on tech forums (we love you all, fellow lurkers), so the typical indicators of AI that seem obvious to us might be easy to overlook for those who don't know what signs to look for – if those signs are even there at all. AI is getting better and better at producing natural-looking images that don't have seven fingers or Cronenberg-esque distortions.
In a world where everything can be fake, it is very hard to prove something is real
Maybe it was easier to spot when the occasional deepfake was dropped into our feeds, but the scale of production has shifted drastically in just the last two years. This stuff is incredibly easy to make, so now it's everywhere. We're at risk of living in a world in which we must be on guard against being deceived by every single image put in front of us.
And when anything can be faked, it's much harder to prove that something is real. That suspicion is easy to exploit, paving the way for people like former President Donald Trump to falsely accuse Kamala Harris of manipulating her rally crowd sizes.
The argument: “Photoshop was also a huge, disruptive technology – but we did just fine”
It's true: even though AI is far easier to use than Photoshop, Photoshop was still a technological revolution that forced people to reckon with a whole new world of fakery. But Photoshop and other pre-AI editing tools did create social problems – problems that persist today and still cause significant harm. The ability to digitally retouch photographs in magazines and on billboards promoted impossible beauty standards for both men and women, with women disproportionately impacted. For example, in 2003, then-27-year-old Kate Winslet was unrealistically slimmed down on the cover of GQ – and the British magazine's editor, Dylan Jones, justified it by saying her appearance was altered “just like any other cover star”.
Edits like these were widespread and rarely disclosed, despite scandals like when the early internet blog Jezebel published unretouched photos of celebrities from fashion magazine covers. (France even passed a law requiring airbrushing disclosures.) And as easy-to-use tools like Facetune emerged, the practice spread even more insidiously across social media platforms.
One 2020 study found that 71 percent of Instagram users edit their selfies with Facetune before posting them, and another found that media images caused the same decline in women's and girls' body image whether or not they were labeled as digitally altered. There's a direct pipeline from social media to real-life plastic surgery, sometimes in pursuit of physically impossible results. And men aren't exempt – social media has a real and measurable impact on boys and their self-image, too.
Impossible beauty standards aren't the only issue. Staged photos and photo editing can mislead viewers, undermine trust in photojournalism, and even push racist narratives – as was the case with a 1994 photo edit that darkened O.J. Simpson's face in a mugshot.
Generative AI image editing doesn't just exacerbate these problems by lowering the barriers – it sometimes does so without any explicit instruction. AI tools and apps have been accused of making photos of women more sexualized and revealing without being asked to. Forget viewers not being able to trust what they see – now photographers can't trust their own tools!
The argument: “I'm sure laws will be passed to protect us”
First of all, crafting good speech laws – and, let's be clear, these would likely be speech laws – is incredibly difficult. Regulating how people can produce and release edited images requires separating uses that are overwhelmingly harmful from ones that many people consider valuable, like art, commentary, and parody. Lawmakers and regulators will also have to reckon with existing laws around free speech and access to information, including the US First Amendment.
Tech giants entered the AI era full speed ahead without considering the possibility of regulation
Tech giants also rushed full speed ahead into the AI era, seemingly without considering the possibility of regulation. Governments around the world are still struggling to pass laws that can rein in those who abuse generative AI tech (including the companies that build it), and the development of systems that can reliably distinguish real photos from manipulated ones is proving slow and woefully inadequate.
Meanwhile, easy-to-use AI tools have already been used to influence voters, digitally undress photos of children, and create explicit images of celebrities like Taylor Swift. That was all just in the last year, and the technology is only getting better.
In an ideal world, adequate safeguards would have been put in place before a free, idiot-proof tool capable of adding bombs, car crashes, and other awful things to photos in seconds found its way into our pockets. But instead, we're in a mess. Optimism and willful ignorance can't fix it, and it's not clear how – or even if – it can be fixed at this stage.