Wednesday, February 1

Deepfake Detectors Classify Fake Images as Real, Study Finds


In a recently published paper, researchers from Google and the University of California identified a flaw in even the best forensic classifiers.

Forensic classifiers are AI systems built to distinguish natural content from synthetic content. The study showed, however, that these systems can give faulty results: they are vulnerable to adversarial attacks, in which carefully crafted inputs cause the models to malfunction.

A team of researchers at the University of California, San Diego had previously published work in this field, demonstrating that fake videos could slip past detectors through adversarial manipulation of the input.

Adversarial information was introduced into each frame of videos doctored with readily available AI generation methods. This development is troubling news for organizations attempting to sell products built around fake-media detectors, and it spells more trouble ahead given the significant rise in deepfake content online.

Fake media is dangerous. In the wrong hands, it can be used to wage digital warfare: actors with political or terrorist motives could use it to sow chaos between countries or to influence public opinion during an election, and an innocent person could be implicated in a crime they did not commit.

Additionally, some of the most horrific criminal uses of deepfakes are found on the dark web, where they have become a medium for generating pornographic material, including child pornography. Deepfakes have also been used to create fake pornographic images of celebrities and actors, and a significant energy producer was defrauded with the help of deepfake technology.

In the new study, the researchers evaluated what they call a "white-box" threat model, in which the attacker has full access to the classifier, using a dataset of 94,036 sample images. By modifying images that were already synthetic, they made the system misclassify them as real, and vice versa. Most alarmingly, when the researchers distorted less than 7% of an image's pixels, the classifier labeled 50% of real images as fake.
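The attack described in the paper belongs to the broad family of gradient-based adversarial perturbations. As a rough illustration only, here is a minimal gradient-sign ("FGSM-style") perturbation against a toy logistic "detector"; the model, its random weights, and the perturbation budget `eps` are invented for this sketch and bear no relation to the classifiers the researchers actually tested:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "detector": random weights stand in for a trained model.
# A score above 0.5 means "this image is fake".
w = rng.normal(size=64)
b = 0.0

def detect(x):
    """Probability that the flattened image vector x is fake."""
    return sigmoid(w @ x + b)

# A synthetic "image" the toy detector confidently flags as fake.
x = w / np.linalg.norm(w)

# For a linear model, the gradient of the score with respect to the
# input is simply w, so stepping against its sign lowers the "fake"
# score fastest per unit of per-pixel change (the gradient-sign attack).
eps = 0.25  # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)

print(f"original score:  {detect(x):.3f}")      # well above 0.5
print(f"perturbed score: {detect(x_adv):.3f}")  # pushed below 0.5
```

The key point mirrors the study's finding: each pixel moves by at most `eps`, yet the classifier's verdict flips, because the perturbation is aligned with the model's own gradient rather than being random noise.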
