
Combating Deepfakes: Can You Tell the Difference?

  • Glenys Gan
  • Aug 13
  • 3 min read

In a world increasingly disrupted by artificial intelligence (AI), seeing is no longer believing. Digital content can now be manipulated with ease – and often, it’s difficult to tell what’s real and what’s not. At the heart of this growing concern lies one of the most deceptive technologies of our time: deepfakes.


From fabricated videos of world leaders to synthetic pornographic material (which accounts for more than 90% of deepfake videos), the rise of deepfakes is blurring the line between fact and fiction – eroding public trust and fuelling the spread of misinformation. As this technology becomes more sophisticated, being able to verify the authenticity of online information is no longer a given.


Comparison between an original video and a deepfake of the 44th U.S. President, Barack Obama

Understanding Deepfakes

Deepfakes use AI to portray a reality that never existed. They can take the form of images, videos, or audio, producing realistic but entirely fabricated media. With enough video footage and audio recordings, scammers can train AI to mimic not just a person’s appearance but also their voice, synchronising lip movements with the generated speech.


While this technology is innovative, it has been misused and weaponised for harmful purposes, from blackmail to identity theft. The ease with which realistic fake content can be created has made it a powerful tool for deception. These growing threats underscore the urgent need for robust countermeasures to detect, regulate, and prevent the malicious use of synthetic media.


Signs You’re Watching a Deepfake

Even though it is getting harder to recognise deepfakes, there are still a few telltale signs to look out for:


  1. Unnatural Movements 

Deepfake algorithms may not be able to fully capture the nuances of natural human motion. There could be robotic gestures, stiff facial expressions, or odd blinking patterns. These awkward movements often serve as giveaways that a video has been fabricated.


  2. Audio-Lip Mismatch

Unsynchronised audio and lip movement is an obvious sign of a deepfake. Even if the voice sounds accurate, the lips may not form the correct shapes or may lag slightly behind the audio. When this happens, chances are the video has been tampered with.


  3. Inconsistent Lighting

AI-generated media often fails to replicate how light interacts with a person’s face and surroundings. Look closely at the shadows, highlights, and reflections in the video. For example, if the lighting on the person’s face doesn’t match the direction of the light source in the environment, that’s a red flag.
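
For readers who like to tinker, here is a minimal sketch of the lighting check just described, assuming Python with the opencv-python and numpy packages installed and a local file named clip.mp4. The face detector, the face-versus-background brightness ratio, and the 0.15 threshold are illustrative choices only, not a real deepfake detector.

    # Illustrative only: a crude lighting-consistency heuristic, not a real deepfake detector.
    # Assumes the opencv-python and numpy packages are installed and "clip.mp4" exists locally.
    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    cap = cv2.VideoCapture("clip.mp4")
    ratios = []  # brightness of the face region relative to the rest of the frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # skip frames where no face is found
        x, y, w, h = faces[0]
        face_mean = gray[y:y + h, x:x + w].mean()
        background = gray.astype(float)
        background[y:y + h, x:x + w] = np.nan  # mask out the face region
        ratios.append(face_mean / np.nanmean(background))

    cap.release()

    if ratios:
        # Large swings in this ratio can hint that the lighting on the face does not
        # track the rest of the scene. The 0.15 threshold is arbitrary.
        if np.std(ratios) > 0.15:
            print("Warning: face brightness varies a lot relative to the background.")
        else:
            print("Face lighting appears broadly consistent with the scene.")
    else:
        print("No faces detected in the clip.")

Real detection systems combine many such signals (motion, audio-lip synchronisation, compression artefacts) with learned models, so treat this purely as a way to build intuition for what “inconsistent lighting” means in practice.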


Countering the Threat of Deepfakes

The fight against deepfakes requires a multi-pronged approach involving public education, technology, collaboration, and policy. Here’s how governments, companies, and individuals can work together to stay ahead of the threat:


  1. Public Awareness

Educating the public about the dangers and detection of deepfakes is crucial in building a resilient society. Public awareness campaigns can highlight the risks of manipulated media, empowering citizens to critically evaluate content.

 

  2. Authentication Technology

Authentication technology helps verify the legitimacy of digital content. Solutions like blockchain-based tracking systems and advanced watermarking techniques can help establish media authenticity, and tools such as Microsoft’s Video Authenticator can analyse content and flag signs of manipulation. A simplified sketch of the underlying idea appears at the end of this section.


  3. Partnership

Collaboration between governments and technology firms is essential to counter deepfake threats. In Singapore, for instance, the Infocomm Media Development Authority (IMDA) partnered with Pindrop after a deepfake video of Senior Minister Lee Hsien Loong surfaced, to better detect and mitigate such manipulations.


  4. Legislation

Legislation plays a critical role in deterring the malicious use of deepfake technology. Singapore has implemented strong legal frameworks to address the issue, including the Protection from Online Falsehoods and Manipulation Act (POFMA), which combats deepfake misuse by mandating swift action against deceptive content.


On 10 January 2024, Parliament passed a $20 million initiative to fund advanced detection systems and public education, strengthening digital trust and security.
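
To make the authentication idea from point 2 more concrete, here is a simplified sketch of verifying content by hashing: it checks whether a downloaded media file still matches the cryptographic digest a publisher has recorded in a trusted place. This is not how Video Authenticator or any specific blockchain system works; the file name and expected hash below are placeholders.

    # A minimal sketch of content authentication by hashing: compare a file's SHA-256
    # digest with the value a trusted publisher has recorded elsewhere (for example,
    # in a signed manifest or on a ledger). The file name and EXPECTED value below are
    # placeholders; real provenance and watermarking systems are far more involved.
    import hashlib

    def sha256_of(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder: in practice this value comes from the publisher's own record.
    EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"

    if sha256_of("press_release.mp4") == EXPECTED:
        print("Hash matches the publisher's record: the file has not been altered.")
    else:
        print("Hash mismatch: this file differs from what the publisher released.")

If even a single byte of the video is changed, the digest changes completely, which is why hash comparison is a useful, if basic, building block for confirming that content has not been tampered with after publication.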

 

It Takes All of Us to Combat Deepfakes

Deepfakes represent a significant challenge in this digital era – but we’re not powerless. Singapore has taken proactive steps to address this issue, from public education initiatives to strong legislative frameworks. These efforts form a solid foundation in the ongoing fight against manipulated media and misinformation.


However, true resilience against deepfakes demands a united front. Every individual has a part to play – by staying informed, questioning what we see online, and sharing content responsibly. When we each take ownership of digital vigilance, we strengthen our collective defence.

 

Enjoyed the article? Follow us on LinkedIn for more updates and insights.



 


