Deepfake Detectors Classify Fake Images as Authentic, Study Finds


In a recently published paper, researchers from Google and the University of California reported a fundamental weakness in even the best forensic classifiers.

Forensic classifiers are AI systems built to distinguish natural content from synthetic content. The study showed, however, that these systems can be compromised by adversarial attacks: carefully crafted inputs can cause the models to malfunction and return faulty results.

A team of researchers at the University of California, San Diego had previously published work in this area. They showed that fake videos could slip past detectors through the same loophole by adversarially injecting information into every frame of the video.

The researchers doctored the videos using publicly available AI generation methods. This development is troubling news for organizations attempting to sell products built around fake-media detectors, and it becomes more worrying given the significant rise in deepfake content circulating online.

Fake media is dangerous. In the wrong hands, it can be used to wage digital warfare: actors with political or terrorist motives could use it to sow chaos between countries, influence public opinion during an election, or implicate an innocent person in a crime they did not commit.

Additionally, the most horrific criminal uses of deepfakes appear on the dark web, where the technology has become a medium for generating pornographic material, including child pornography and fabricated images of celebrities and actors. Deepfakes have also been used for fraud; in one case, a significant energy producer was defrauded with the help of synthetic media.

In the study, the researchers worked under what they call a “white-box” threat model, in which the attacker has full access to the detection model. Using a data set of 94,036 sample images, they modified images that were already synthetic so that the system misclassified them as authentic, and vice versa. Most alarmingly, by distorting fewer than 7% of an image's pixels, they caused the classifier to label 50% of authentic images as fake.
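The core idea behind a white-box attack is that an attacker who can read the detector's gradients can nudge each pixel in exactly the direction that flips the classifier's decision, while keeping the change too small to notice. The sketch below illustrates this with a fast-gradient-sign-style perturbation in PyTorch; the generic `model`, the `epsilon` budget, and the attack method itself are illustrative assumptions for this article, not the paper's exact procedure.

```python
# Minimal sketch of a white-box adversarial perturbation (FGSM-style).
# Assumes an arbitrary differentiable real-vs-fake classifier; this is an
# illustration of the attack class, not the researchers' actual code.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Perturb `image` so the detector moves away from its current label.

    model:   any differentiable classifier returning logits (white-box access)
    image:   tensor of shape (1, 3, H, W), values in [0, 1]
    label:   the classifier's current prediction for the image, shape (1,)
    epsilon: maximum per-pixel change, kept small so the edit stays invisible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, pushing the
    # classifier away from its original real/fake decision.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because only the sign of the gradient is used and the step is bounded by `epsilon`, the perturbed image stays visually identical to the original even when the detector's output flips.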
