Deepfakes: What’s behind them and how to spot these fakes

It’s easy to fake images and videos with artificial intelligence (AI). These deepfakes, as they’re known, are becoming more and more convincing and often difficult to tell apart from the real thing. This is increasingly calling into question whether we can actually believe our eyes. Read on to learn how deepfakes work, what they’re used for, and how to spot them. Also discover how Avira Free Security can help boost your cybersecurity. 

 

What is a deepfake? 

The term deepfake is a portmanteau of “deep learning” and “fake”. It refers to media content such as photos, videos, or audio that’s been created or manipulated using machine learning, a type of artificial intelligence. This makes it possible to create media content that appears deceptively real, misleading internet users.

What’s the difference compared with Photoshop or a face swap?

There are a lot of fake or manipulated images on the internet, although most are harmless. For example, you can use programs like Photoshop to place your image in a different scene or tighten up your silhouette a little. Other apps allow you to create fun effects, such as swapping faces where you can superimpose your face onto someone else’s. You can also artificially age your face. 

Images and videos like these are usually manipulated for entertainment, and it’s easy to tell that what you’re looking at is fake and doesn’t reflect reality, so it’s just a bit of harmless fun. AI-generated deepfakes, though, are sometimes so convincing that people can’t tell they’re fake and believe what they see. This can pose cybersecurity risks and harm people’s reputations.

How do deepfakes work? 

Machine learning involves feeding an algorithm with information such as a person’s images, facial expressions, or voice. From this, the AI learns to produce new versions that resemble the examples it was trained on, resulting in images and videos that are deceptively similar to the original.

Among other things, you’ll find videos circulating on YouTube, Google, and other platforms of politicians and other famous faces saying things they never actually said. One such widely circulated deepfake video supposedly shows Barack Obama criticizing his successor Donald Trump. 

What technology do you need? 

Creating deepfakes requires a special type of AI: deep neural networks. These algorithms use what are known as hidden layers to convert input signals into output signals, and they’re designed to learn and imitate information in a way loosely modeled on the human brain. This makes them capable of turning genuine images into realistic fakes.
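The idea of hidden layers converting input signals into output signals can be sketched in a few lines. The example below is purely illustrative (random, untrained weights, not a real deepfake model): a tiny feed-forward network passes an input through one hidden layer to produce an output.

```python
import numpy as np

# Illustrative sketch only: a minimal feed-forward network with one
# hidden layer. The sizes and weights are arbitrary, not a trained model.
rng = np.random.default_rng(0)

x = rng.normal(size=4)            # input signal, e.g. 4 pixel values
W1 = rng.normal(size=(4, 8))      # weights into the hidden layer
W2 = rng.normal(size=(8, 2))      # weights from hidden layer to output

hidden = np.maximum(0, x @ W1)    # hidden layer with ReLU activation
output = hidden @ W2              # output signal, e.g. 2 predicted values

print(output.shape)               # a 2-value output vector
```

Real deepfake models work the same way in principle, just with many such layers and millions of weights learned from training data.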

Complex, deceptively real deepfakes usually involve two algorithms that together form what’s known as a generative adversarial network (GAN). One algorithm is trained to create the best possible fakes, while the second learns to detect manipulated images. Both iterate, in other words repeat their actions, and in doing so learn continuously from each other until the result is an AI model that cleverly produces fake images where faces are swapped. Often, a deepfake is so effective that people either fail entirely to spot that it’s a fake or only just manage to do so.
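The back-and-forth between the two algorithms can be illustrated with a deliberately simplified toy loop (this is a hypothetical sketch of the adversarial idea, not a real GAN): a fixed "discriminator" scores how real a sample looks, and a "generator" keeps any change that fools the discriminator a little more.

```python
import random

# Toy sketch of adversarial iteration (hypothetical, greatly simplified).
# "Real" data is assumed to cluster around REAL_MEAN; the generator starts
# far away and keeps nudging its output toward whatever scores as more real.

REAL_MEAN = 10.0

def discriminator(x):
    """Score how 'real' a sample looks: 1.0 = perfect, 0.0 = obvious fake."""
    return max(0.0, 1.0 - abs(x - REAL_MEAN) / 10.0)

def train_generator(steps=1000, step_size=0.1, seed=0):
    rng = random.Random(seed)
    guess = 0.0  # the generator's current output
    for _ in range(steps):
        # The generator proposes a small random change ...
        candidate = guess + rng.uniform(-step_size, step_size)
        # ... and keeps it only if it fools the discriminator better.
        if discriminator(candidate) > discriminator(guess):
            guess = candidate
    return guess

print(train_generator())  # converges close to REAL_MEAN
```

In a real GAN both sides are neural networks trained with gradients, and the discriminator improves alongside the generator; the principle of iterating until the fakes become hard to distinguish is the same.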

What can deepfakes be used for? 

Most deepfake examples are pornographic in nature, with the faces of famous people often superimposed onto porn stars. But even outside the celebrity world, women in particular are falling victim to pornographic deepfakes. Referred to as revenge porn or fake porn, such material can cause lasting damage to the reputation of the person concerned if fake, compromising images or videos are published. Deepfakes are also often used for sextortion (blackmailing someone with explicit content) or to cyberbully someone.

In addition to pornographic content, deepfakes are also used to spread fake news where speeches by politicians or influential personalities are manipulated. This poses the risk that many people will believe this widespread misinformation and their behavior will be influenced. Deepfakes can also be exploited for account hijacking or identity theft. That’s because cybercriminals can steal biometric data, like your face ID, and use AI to gain access to your user and bank accounts. 

Tip: If you’ve fallen victim to identity theft, Avira Identity Assistant (currently only available in Germany) can help you restore your identity. 

Can deepfakes be dangerous? 

Due to rapidly evolving technology, deepfakes are becoming more and more convincing. In addition to the risk of blackmail or identity theft, cybercriminals can also use fake media for other scams. These include creating personalized audio or video messages that appear to show a relative of the victim in danger and urgently needing money. Hackers can easily exploit gullible and unsuspecting victims using deepfakes.

However, the cyberthreat of fake content goes beyond individuals. That’s because deepfakes can spread rapidly over the internet and in doing so reach a huge audience. Fake videos showing fabricated statements by public figures or politicians can influence public opinion and even encourage acts of violence. 

How can I tell if content is fake?

AI is always improving, so much so that the results often look deceptively real. This makes it difficult to spot deepfakes. But if you look closely enough, you can spot some tell-tale signs of manipulation:

- Unnatural facial movements, rigid expressions, or infrequent blinking
- Blurring, distortions, or visible seams around the face, hairline, and neck
- Inconsistent lighting, shadows, or skin tones
- Lip movements that don’t quite match the audio
- A voice that sounds flat, metallic, or oddly paced

Are deepfakes illegal? 

In itself, it’s not illegal to create deepfakes — especially if you create fake images of yourself. However, if you create deepfakes of someone else, this can have legal consequences. By doing so, you may be violating the person’s right to their own image or their general personality rights and copyright. Likewise, a deepfake can constitute an offense of libel, defamation, or slander. 

Explicit images or videos created using AI can be interpreted as image-based sexualized violence if the person concerned has not consented to the creation of the deepfakes. This violates their right to sexual self-determination and their personality rights. If gendered power hierarchies are portrayed, image-based sexualized violence can also violate the right to non-discrimination. All of these offenses can be prosecuted under criminal law. 

By the way: In Europe, the EU Commission has proposed a legal framework, known as the AI Act, to regulate the use of AI more strictly. Under it, media content created using deepfake technology will have to be labeled as such.

What can victims of data manipulation do? 

If deepfakes of you appear, you should first report the incident to the platforms on which they were distributed, especially since social media operators are obliged to verify and remove illegal content. If you know the person who distributed your fake content, you can sue them and claim damages. But even if you don’t know the perpetrator, you should report the case to the police. 

How to surf more safely with an antivirus solution 

Cybercriminals can use deepfakes to gain access to your devices or user accounts — such as by mimicking your face ID. Hackers also often use other social engineering techniques like phishing or pharming to access their victims’ accounts. With an antivirus solution, you can protect yourself against cyberthreats and make your online activities more secure. 

Avira Free Security includes protection features that defend against viruses and other malware in real time — thanks to cloud-based detection of the latest threats. The tool also comes with a password manager, so you can easily generate unique passwords for each of your accounts. That means that if one account gets hacked using a deepfake, cybercriminals cannot access the others. What’s more, you can surf more anonymously thanks to the built-in VPN. 

 
