Smarter AI tools could spell the death of trust on the internet

Among the most popular images to emerge from Monday's Met Gala were those of Selena Gomez on the red carpet — except she wasn't actually there.

The pictures were digitally altered shots of Lily James posing in a Versace gown at last year's Met Gala, with Gomez's face superimposed.

The fake picture received more likes than anyone’s real Met Gala look on Twitter, according to celebrity news fan account PopFaction.

Whoever altered the picture even changed the colour of the dress from lavender to blue. So even if you had seen the original look, it would have been hard to tell the difference at first glance. Maybe she was just wearing a different version of the Versace dress?

Sure, a closer look gives away the inconsistencies: the photographers in the background don't quite fit, the colour of this year's Met Gala carpet is wrong, the cropping is off, and the longer you look, the more obvious it becomes.

Except, who has the time for a closer look when you’re on a mile-a-minute doomscroll?

Once we see something that falls even remotely within the overlap of our own interests and our followers' likely reactions, we hit the share button. Convincing fakes and reckless sharing make for a dangerous pairing.

Many people were creeped out by how real the images looked and pointed to AI for the deception.

But even in the age of AI, it seems we all fell for a good ol' Photoshop gag. This was not the work of generative AI, which creates images from prompts.

To be fair, Photoshop has gotten a lot smarter, and AI has something to do with it.

Last year, Adobe improved the AI capabilities in its software, enabling higher-quality object selections. With that, lifelike editing became possible with much less time and skill.

Zendaya was another celebrity Photoshopped into this year's Met Gala, with her face edited onto Rita Ora's pictures from the same night.

And it happened quickly: the original images of Ora were taken only hours before the fakes were shared. The ease and speed with which this can be done is as unsettling as it is impressive.

As these technologies get smarter, the incident is a reminder that trusting things on the internet will only get harder.

These posts went viral because of knee-jerk reactions: people hit the share button without checking whether the images were real.

The internet has always been a breeding ground for misinformation, but AI continues to blur the line between reality and fiction.

Many companies are embracing AI as a tool that can amplify what humans can do. That also means amplifying the wrongs of the humans behind the prompts.

There are several ethical concerns raised by deepfakes. For example, celebrities or influencers could pass themselves off as being somewhere, doing something, or wearing something when they weren't. This could have devastating effects on the authenticity of influencer marketing that brands and consumers have come to rely on.

Still, the burden of falling for a very well-made deepfake cannot fall on the viewer alone. Platforms have to do their part in placing checks on the creation and indiscriminate sharing of these images.

In the case of the fake images of Gomez, Elon Musk might have finally made a single positive contribution to Twitter.

The Community Notes feature allowed people to collaboratively add context to the original tweet with the fake images of Gomez, flagging them with the note, ‘Selena Gomez did not attend the 2023 Met Gala and has not attended the Met Gala since 2018. These are altered images of Lily James at the 2022 Met Gala.’

It also provided context by adding links to articles about James’ original dress and why Gomez wouldn’t be attending this year’s Met Gala.

When it comes to AI deepfakes, we might have to fight fire with fire. Just as AI can be used to make strikingly convincing edits, tools also exist that can identify whether images have been doctored.

Recently, Within Health, a digital service for people suffering from eating disorders, used an AI-powered tool known as FAL Detector on magazine photos of Jennifer Aniston and Angelina Jolie to reveal exactly how they had been altered.
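As a toy illustration of how such detectors can work (not a description of FAL Detector itself, whose internals aren't public), here is a minimal sketch of the idea behind error level analysis, a classic forensic technique: lossy formats like JPEG snap pixel data onto a coarser grid, so pixels that were already compressed once survive a re-save almost unchanged, while a freshly pasted-in edit does not. The `quantize` function below is a hypothetical stand-in for compression, chosen only to make the principle visible.

```python
def quantize(pixels, step=8):
    """Stand-in for lossy compression: snap each value to a multiple of step."""
    return [step * round(p / step) for p in pixels]

# A region that has already been "compressed" once (values sit on the grid)...
original_region = quantize(list(range(0, 160, 10)))
# ...and a freshly pasted-in edit whose values have no compression history.
pasted_region = [5, 13, 27, 41, 59, 66, 78, 91]

frame = original_region + pasted_region
resaved = quantize(frame)  # simulate re-saving the composite image
error = [abs(a - b) for a, b in zip(frame, resaved)]

print(error[:16])  # already-compressed pixels: all zeros
print(error[16:])  # pasted region: nonzero errors flag the likely edit
```

Real detectors work on two-dimensional JPEG blocks and learn far subtler statistics, but the principle is the same: a region with a different compression history from its surroundings stands out.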

This is only the beginning of technology-enabled fake images, audio and video that we fall for. Our tendency to trust things we see on the internet is set to be tested. We can only hope that the trust issues stemming from this aren’t too severe.
