Deepfakes are a double-edged sword. They have the potential to be an educational tool that could, for example, bring historical figures to life in the classroom, but in the wrong hands, they’re a dangerous instrument for scams and deception. At a high level, deepfakes are a form of synthetic media that uses artificial intelligence to generate hyper-realistic images, videos, or audio.
In their infancy, deepfakes were easy enough to deal with, as most of the content had an ‘uncanny valley’ quality to it. As the technology has grown more sophisticated, however, it’s becoming increasingly difficult to distinguish synthetic content from reality. Whilst deepfakes are opening new opportunities in entertainment and education, they’re also raising serious concerns about security, privacy, and misinformation. Understanding deepfakes, and how to spot and protect against them, has become a critical part of cyber security.
What are deepfakes?
Deepfakes are an evolving form of synthetic media that imitates someone’s likeness. Using AI-driven techniques, deepfakes replicate a person’s face, voice, and mannerisms to create realistic content. These synthetic creations can appear astonishingly real, often making it difficult for the untrained eye to distinguish them from authentic media.
Deepfakes come in different forms. With a small recording, a scammer could replicate an employee’s voice and use it to ask for sensitive information about your organisation. Acquiring such an audio sample is surprisingly easy: recordings can often be lifted from social media or captured when your employee picks up the phone. Although deepfakes are used across a wide range of platforms, they’re fast becoming the go-to tool for malicious purposes, including scams.
How deepfake technology works
Deepfake technology relies on deep learning, specifically through Generative Adversarial Networks (GANs), which use two neural networks that “compete” to create more accurate media. One network, called the generator, attempts to produce realistic media, while the second network, the discriminator, critiques it, identifying any flaws. Through repeated cycles, the generator learns to create highly convincing synthetic media.
This method is enhanced by using real footage of a person to refine the digital imitation, enhancing its believability by mimicking fine details such as facial expressions and vocal intonations. Whilst this technology seems complicated, it has become readily available for anyone to take advantage of.
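The generator-versus-discriminator cycle described above can be sketched in miniature. The toy example below is an illustration only, not how production deepfake systems are built (those use deep convolutional networks on images or audio): here the “generator” is a single number trying to mimic a one-dimensional real data distribution, and the “discriminator” is a simple logistic classifier. All names and parameter values are assumptions chosen for clarity.

```python
import numpy as np

# Toy sketch of the adversarial (GAN) loop: a one-parameter generator tries
# to produce numbers that look like samples from the "real" distribution
# N(4, 0.5), while a logistic discriminator D(x) = sigmoid(w*x + b) tries
# to tell real samples from generated ones.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0        # generator parameter (shifts the fake samples)
w, b = 0.0, 0.0    # discriminator parameters
lr = 0.05          # learning rate for both players
batch = 64

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)   # samples from the real data
    z = rng.normal(0.0, 1.0, batch)      # generator's noise input
    fake = theta + 0.5 * z               # generated ("fake") samples

    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: push D(fake) towards 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr * grad_theta

print(f"generator mean after training: {theta:.2f} (real mean is 4.0)")
```

After repeated cycles the generator’s output drifts towards the real distribution, exactly the dynamic the paragraph above describes: each player’s improvement forces the other to improve.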
The impact of deepfakes
There is no question that the effects of deepfakes, whether positive or negative, are profound. On one hand, deepfakes are pushing boundaries in entertainment and educational experiences. On the other hand, their misuse is a major concern, particularly in spreading misinformation, compromising security, and affecting individuals’ reputations.
Deepfake scams are on the rise, targeting both individuals and organisations. Elderly members of the public are receiving scam phone calls that sound like their relatives and being extorted for money. Employees are receiving fake voicemails from their bosses and compromising security as a result. Meanwhile, social media users are unwittingly sharing misinformation in the form of fake videos that damage both individual and organisational reputations. As deepfake technology advances, awareness of its potential dangers and the need for greater digital literacy grows with it.
Are deepfakes illegal?
With all the controversy around deepfakes, one question keeps popping up: are deepfakes illegal, and if not, should they be? The legality of deepfakes depends on the context and jurisdiction. In some countries, non-consensual deepfake content, such as manipulated videos used on explicit sites, is illegal and may lead to criminal charges. However, legal frameworks around deepfakes are still developing worldwide.
While deepfakes for parody or creative uses might be legal, their use in scams, identity theft, or fraud can be prosecuted under other laws related to impersonation and fraud. In the face of these challenges, many regions are working to develop laws to address deepfake misuse specifically.
What are some creative uses of deepfakes?
Despite their controversial nature, deepfakes also have legitimate and creative applications. From revolutionising film and entertainment to enhancing training and educational experiences, deepfakes are opening up new ways of storytelling, engagement, and learning. Filmmakers can use deepfake technology to de-age actors at a reduced cost. In education, they can simulate realistic scenarios, providing students with immersive, interactive learning environments.
Deepfakes in entertainment
In the entertainment industry, deepfakes are predominantly used for innovative visual effects, including digitally de-ageing actors, recreating historical figures, or completing the performances of actors who have died, which is itself another controversy surrounding the technology.
For film producers, deepfakes save on production costs and allow filmmakers to explore imaginative possibilities without physical limitations. By replicating faces, voices, or body movements, deepfakes enable creators to deliver immersive experiences that would be difficult to achieve with traditional methods.
The role of deepfakes in education and training
In education and training, deepfakes have the potential to be a powerful tool for interactive learning. For example, historical figures can be digitally recreated to provide lectures or answer questions, offering next-level engagement in classes.
Similarly, deepfake simulations in professional training allow individuals to engage with realistic scenarios. In the customer service industry, training can simulate difficult customer complaints and employee interactions. In medical training, deepfakes can be invaluable for practising grief counselling. Overall, deepfakes provide real-time, realistic practice that can improve employee skills in a risk-free setting.
The potential risks of deepfakes
Whilst deepfakes can be used for good, there is no escaping the potential risks. In politics, they can be weaponised to spread misinformation and manipulate public opinion. In personal contexts, deepfakes can fuel harassment, with fabricated explicit videos or false representations damaging an individual’s reputation and mental health.
For businesses, deepfakes threaten cybersecurity by enabling sophisticated identity fraud and bypassing biometric verification systems. Whether deepfakes are manipulating the general public or a business, they blur the line between truth and fiction, undermining trust in authentic content.
Misinformation
Deepfakes can be used to create false narratives, mislead the public, or influence opinions. Scammers can simply find a pre-existing video of someone (for example a business leader on LinkedIn) and use deepfake technology to have them say whatever they like. When realistic but fake videos or images circulate widely, they can sow confusion and erode trust.
Fraud
Attackers can use fake media to impersonate CEOs or other officials and trick individuals into revealing sensitive information or transferring funds, leading to significant financial losses. In addition to financial damage, these scams can tarnish an organisation’s reputation and erode trust within professional relationships. Proactive measures, including employee training and enhanced cyber security, are essential to combat the rising threat of deepfake fraud.
Damage to reputation
Deepfakes can harm personal and professional reputations by creating realistic but damaging content. The key issue with deepfakes and reputation is that even after the deepfake content is debunked as fake, the reputational damage may still persist as the misinformation, and the opinions formed from that misinformation, are still circulating.
How to spot a deepfake
Spotting deepfakes requires attention to subtle inconsistencies. Some simple tips include examining facial movements and expressions closely: deepfakes often struggle with realistic blinking, smooth lip-syncing, or natural facial asymmetry. Look for unusual lighting, shadows, or mismatched skin tones that may indicate manipulation. Pay attention to the audio, as poorly synced or inconsistent speech patterns can be a giveaway.
High-quality deepfakes, however, may avoid these errors, so scrutinise the context. Consider whether the actions or statements align with the person’s known behaviour or beliefs. Check for video artefacts like blurriness around edges or distorted backgrounds, which are common signs of tampering.
Staying critical of shocking or unexpected content and relying on trusted sources for verification are essential steps to identifying and mitigating the impact of deepfakes. Here are some common indicators to look for:
Too good to be true
Before analysing the video or audio itself, you should first take a look at the context. If the media in question seems implausible or too perfect, it could be a deepfake. Always double-check by contacting the person in question either face-to-face or through a trusted channel to confirm.
Unnatural face and eye movements
One of the key ways to spot the difference between authentic and deepfake technology is through eye movement. Although deepfake technology continues to improve, it still often struggles with authentic eye and facial movements. Signs such as delayed blinking, unusual gaze, or a rigid expression can indicate deepfake manipulation.
Blurry low-quality videos
As most scammers don’t have access to high-quality deepfakes, much of their content lacks clarity or has blurred edges around facial features. Keep in mind that this only applies to lower-quality or hastily made deepfake videos and may not show up in more sophisticated attacks.
Tonal mismatch
If the deepfake in question combines video and audio, look for mismatched lip sync. If you’re dealing with an audio deepfake, the key is tonal inconsistency. Does the intonation of questions sound off? Are words emphasised strangely? These robotic qualities can reveal AI manipulation, and if the voice itself sounds disjointed, it could indicate synthesised content.
How to protect your team against deepfake attacks
Organisations can take several steps to protect against deepfake attacks and minimise their potential impact. Here are four key strategies to consider:
1. Educate and Train Staff: Provide regular training to employees on the risks and signs of deepfake technology. Familiarising your team with how to spot deepfakes can help prevent phishing and impersonation attacks.
2. Strengthen Verification Procedures: Implement strong verification processes for communication and requests, particularly for sensitive transactions. This could include using multifactor authentication, encrypted messaging, or secondary confirmation channels.
3. Invest in Detection Tools: AI-powered deepfake detection software is an effective way to guard against deepfake attacks. These tools use algorithms that can detect inconsistencies in facial movements, lighting, and audio, helping identify deepfake content.
4. Promote a Culture of Caution: Encourage team members to verify any unusual or high-stakes requests, especially those made through video or audio channels, and to question suspicious or “too good to be true” scenarios.
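To illustrate the secondary confirmation channel mentioned in step 2, here is a minimal sketch of one possible workflow. The class name, request identifier, and code format are assumptions for the example, not a prescribed tool: the idea is simply that a high-risk request (say, a wire transfer requested by voice call) is only approved once the requester repeats back a one-time code delivered over a separate, trusted channel.

```python
import hmac
import secrets

class OutOfBandVerifier:
    """Hypothetical sketch of a secondary-confirmation step: a one-time
    code is delivered over a trusted channel (e.g. a known phone number),
    and the original request is only approved once the requester repeats
    it back."""

    def __init__(self):
        self._pending = {}  # request_id -> expected one-time code

    def issue_code(self, request_id: str) -> str:
        # In practice this code is delivered out-of-band, never over the
        # same channel (email, voice call) that carried the request.
        code = secrets.token_hex(4)
        self._pending[request_id] = code
        return code

    def confirm(self, request_id: str, code: str) -> bool:
        expected = self._pending.get(request_id)
        if expected is None:
            return False
        # Constant-time comparison avoids leaking the code via timing.
        ok = hmac.compare_digest(expected, code)
        if ok:
            del self._pending[request_id]  # codes are single-use
        return ok

verifier = OutOfBandVerifier()
code = verifier.issue_code("wire-transfer-1042")
print(verifier.confirm("wire-transfer-1042", "wrong-code"))  # False
print(verifier.confirm("wire-transfer-1042", code))          # True
print(verifier.confirm("wire-transfer-1042", code))          # False: single-use
```

The important design point is independence of channels: even a convincing deepfake voice on the inbound call cannot produce a code it never received.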