The Hidden Dangers in Assuming Artificial Minds Around Us - Navigating AI Recognition Pitfalls
Key Takeaways
- With no obvious tells, AI-generated media can easily fool viewers.
- People claiming to spot AI often rely on pre-existing beliefs, not true detection.
- To accurately identify AI-generated content, rely on third-party confirmation.
I see a lot of confidence from people on social media that they can tell whether a video or image has been generated by AI. It’s understandable why some people might feel this way, but it’s a potentially disastrous delusion.
You Can’t Spot AI, Even if You Think You Can
It’s one thing to say that there can be obvious tells in a piece of media that it’s been generated by AI. This could be as simple as counting fingers, or noting that some stuff in the image doesn’t make sense. However, these are examples of AI generation going wrong, and it’s entirely possible for generative AI to create images and video that do not have any signs of being AI-generated. At least not the sort of thing you’d be able to see with the naked eye.
You’ll notice that a lot of people on social media who claim to know when something is AI-generated only say so after it has already been disclosed. In other words, this is most likely what psychologists refer to as a “priming” effect, where you see “signs” in the image based on your pre-existing belief that it was AI-generated. The real proof comes from trying to identify images blind.
(Image credit: Sydney Louw Butler / How-To Geek / MidJourney)
For example, I got 80% correct in the Britannica Education “Real or AI?” quiz. In the New York Times quiz with AI faces, I only scored 30%! In Bloomberg’s quiz, I only got 40% right! Even if you’re some sort of AI-detecting savant with 99% accuracy, the 1% of AI content that slips through is still a major issue.
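To put that last point in perspective, here is a rough back-of-the-envelope sketch in Python. This is my own illustration, not something taken from any of the quizzes, and every number in it is an assumed placeholder: even a hypothetical 99% detection rate lets a steady trickle of fakes through once the volume of content is large.

```python
# Back-of-the-envelope sketch: how many AI images slip past even a very
# accurate human "detector". All numbers below are illustrative assumptions.

detection_accuracy = 0.99       # assumed hit rate for spotting AI images
ai_images_seen_per_day = 200    # assumed number of AI-generated images in a daily feed

missed_per_day = ai_images_seen_per_day * (1 - detection_accuracy)

print(f"Missed AI images per day:  {missed_per_day:.0f}")         # ~2
print(f"Missed AI images per year: {missed_per_day * 365:.0f}")   # ~730
```

Even under these generous assumptions, hundreds of convincing fakes would get past you in a year, and my real quiz scores were nowhere near 99%.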
The truth of the matter is that as long as it’s possible for AI to generate content that’s close enough to non-AI content to pass muster, it’s dangerous to assume you can tell just by looking.
The Risk of Labeling Non-AI As AI
There’s a flipside to this as well. There’s always the chance that someone will look at a non-AI image and declare that it “looks like AI” to them. This is already an issue for artists, who now have to record time lapses of their work to prove that it wasn’t just generated with a prompt and some light editing. People’s careers are on the line when it comes to accusations like these, and just as there’s no way to definitively prove that something was written with AI, there is no foolproof way to perfectly separate AI images from non-AI images.
Take the Low-Risk Approach
The most logical approach to any imagery whose veracity matters is to avoid relying on intuition, or on the irrational belief that you can simply tell, as a way to evaluate its legitimacy.
Instead, if it really matters whether the image is fake or not, treat it as fake until you can prove otherwise. There has to be third-party confirmation, or some sort of official, transparent, and traceable chain of evidence before anyone can say for sure that something isn’t AI-generated.
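As one concrete illustration of what a “traceable chain of evidence” can look like, here is a minimal Python sketch. It is my own example under stated assumptions, not a method the article prescribes: it checks a downloaded image against a SHA-256 hash published by a trusted third party. The file name and published hash are hypothetical placeholders.

```python
# Minimal sketch of one form of traceable evidence: verifying a file against
# a hash published by the original, trusted source. The file name and the
# published hash below are hypothetical placeholders.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

published_hash = "0123abcd..."  # placeholder: hash published by the original source

if sha256_of_file("photo.jpg") == published_hash:
    print("File matches the published original.")
else:
    print("No match; treat the image as unverified.")
```

Real provenance schemes go much further than a single hash (signed metadata, for instance), but the principle is the same: trust comes from verification, not from eyeballing pixels.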
If you think about it, this has always been the case, but the likelihood that something was faked has become much higher over the years. At first, you’d have to be an expert in manipulating film photographs. Then you’d have to be a wizard in software like Adobe Photoshop, and now you just need to type the right words into a text box and keep rolling the dice until an image passes muster.
The truth is that it doesn’t even seem like AI-generated images have to be any good at all to fool people. The flood of cursed AI images on Facebook is an example of images you can clock at a glance, and yet the law of averages seems to be in effect as plenty of clueless people just accept them. As a How-To Geek reader, I doubt you’d be fooled by these obvious fakes, but the real danger is that these easy-to-spot images lull you into thinking all AI images will be just as obvious.
- Title: The Hidden Dangers in Assuming Artificial Minds Around Us - Navigating AI Recognition Pitfalls
- Author: Jeffrey
- Created at : 2024-08-30 16:52:32
- Updated at : 2024-08-31 16:52:32
- Link: https://eaxpv-info.techidaily.com/the-hidden-dangers-in-assuming-artificaminds-around-us-navigating-ai-recognition-pitfalls/
- License: This work is licensed under CC BY-NC-SA 4.0.