Deepfake Detectors Can Be Tricked Into Classifying Fake Images as Real, Study Finds

In a recently published paper, researchers from Google and the University of California reported a serious flaw in even the best forensic classifiers.

These forensic classifiers are AI systems built to distinguish natural content from synthetic content. The study showed, however, that they can produce faulty results: they are vulnerable to adversarial attacks, in which carefully crafted inputs cause the models to malfunction.

A team at the University of California, San Diego had previously published research in this area. They demonstrated that fake videos could slip past detectors through this technical loophole by adding adversarially crafted information.

The adversarial information was injected into every frame of a video that had been doctored with widely available AI generation methods. This development is both new and troubling for organizations that sell fake-media detection products, and it becomes even more worrying given the significant rise in deepfake content online.
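To make the idea concrete, here is a minimal sketch of this kind of per-frame adversarial perturbation, using the well-known Fast Gradient Sign Method (FGSM). The `detector` model, the label convention, and the function names are illustrative assumptions, not the researchers' actual code.

```python
# Sketch: nudge each frame of a fake video so a deepfake detector leans toward "real".
# Assumes `detector` is a PyTorch classifier returning logits of shape (batch, 2),
# where index 0 is the hypothetical "real" class.
import torch
import torch.nn.functional as F

def perturb_frame(detector, frame, epsilon=0.01, real_label=0):
    """Apply a small FGSM-style perturbation to one frame (tensor of shape C x H x W)."""
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame.unsqueeze(0))            # add batch dimension
    target = torch.tensor([real_label])              # class we want the detector to predict
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step against the gradient to pull the prediction toward the target class.
    adversarial = frame - epsilon * frame.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

def perturb_video(detector, frames, epsilon=0.01):
    """Apply the same perturbation procedure to every frame of the video."""
    return [perturb_frame(detector, f, epsilon) for f in frames]
```

Because the perturbation is tiny and applied frame by frame, the doctored video looks unchanged to a human viewer while the detector's output flips.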

Fake media is dangerous. In the wrong hands, it can be used to wage digital warfare. Actors with political or terrorist motives can use it to sow chaos between countries or to sway public opinion during an election. An innocent person could even be implicated in a crime they did not commit.

The most horrific criminal uses of deepfakes appear on the dark web, where they have become a medium for generating pornographic material, including child pornography. Deepfakes have also been used to create pornographic images of celebrities and actors, and a major energy producer was defrauded with the help of deepfake technology.

In one experiment, the researchers worked under what they called a "white-box" threat model, in which the attacker has full knowledge of the classifier being attacked, using a data set of 94,036 sample images. They modified images that were already synthetic so that the system misclassified them as real, and vice versa. Most alarmingly, after the researchers distorted under 7% of an image's pixels, the classifier labeled 50% of authentic images as fake.
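The sketch below illustrates one way such a pixel-budget, white-box attack could be framed: only a small fraction of pixels (here roughly 7%) is allowed to change, chosen by gradient magnitude. The `classifier`, the label convention, and the step size are hypothetical assumptions for illustration, not the method from the paper.

```python
# Sketch: white-box attack that modifies at most `budget` of an image's pixels.
# Assumes `classifier` is a PyTorch model returning logits of shape (batch, 2).
import torch
import torch.nn.functional as F

def sparse_whitebox_attack(classifier, image, target_label, budget=0.07, step=0.3):
    """Push the classifier toward `target_label` while touching only a small pixel budget."""
    image = image.clone().detach().requires_grad_(True)
    logits = classifier(image.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([target_label]))
    loss.backward()

    grad = image.grad
    # Rank pixels by gradient magnitude (summed over color channels) and keep
    # only the top `budget` fraction of them.
    per_pixel = grad.abs().sum(dim=0)                     # shape (H, W)
    k = max(1, int(budget * per_pixel.numel()))
    threshold = per_pixel.flatten().topk(k).values.min()
    mask = (per_pixel >= threshold).float().unsqueeze(0)  # shape (1, H, W), broadcasts over channels

    # Perturb only the selected pixels, stepping against the gradient.
    adversarial = image - step * grad.sign() * mask
    return adversarial.clamp(0.0, 1.0).detach()
```

Restricting the attack to a small set of high-influence pixels is what makes the result so striking: changes confined to a few percent of the image are enough to flip the detector's verdict.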