Technology and Human Touch Are Required to Fight Deepfakes

By Enterprise Security Magazine | Friday, February 08, 2019

Deepfakes have gone viral because of their uncanny resemblance to real photos, videos, and audio, a resemblance achieved with machine learning (ML). They are drawing attention from the national security arena because of their ability to manipulate and deceive. Surprisingly, the most viable option for combating this challenge is machine learning itself. Nationally coordinated strategies and solutions, both technical and human-focused, are needed to counter the threats posed by machine learning.

In the early days of the internet, there were few security measures to protect individuals who were hacked. As the technology matured and the number of users grew, so did the volume of sensitive information online and the threats against it. The market was soon flooded with products and services from private corporations, such as easy-to-use security packages. These flourished and were marketed to consumers, IT managers, and executives on the claim that they protected against viruses and other threats. However, these attempts proved insufficient.

Human beings remain the weakest link in cybersecurity. A sophisticated intrusion detection system is irrelevant if an employee clicks on a phishing link and exposes an entire organization to a remote attacker. Cybersecurity experts developed technical countermeasures decades ago, but policymakers have been slow to realize that sophisticated software alone is not enough. An effective defense balances user education with technical countermeasures.

To tackle the deepfake issue, social media giants initially left the identification of fake news up to end users. Surprisingly, end users understood that they were being manipulated for profit. In response, the social media platforms stepped up their technical efforts to contextualize news with other sources; they built detection algorithms and tightened their rules around fake accounts and disinformation. Policymakers still believe that good AI can fight bad AI, countering emerging false-news threats by building better algorithms. Yet even developed countries lack coordinated strategies to counter fake news. Europe deals with the threat by building societal resilience: it educates political candidates and parties and informs the general public about fake news by issuing statements.