Vulnerability of Artificial Intelligence in the Cybersecurity Landscape

Enterprise Security Magazine | Tuesday, May 14, 2019

Artificial intelligence (AI) can play a prominent role in cybersecurity threat hunting and detection, but it is not flawless. Adversaries can exploit and manipulate AI data and algorithms to the point where recovery may not be possible. AI and its related disciplines, machine learning (ML) and deep learning (DL), are themselves vulnerable to attack and require industry-wide research to combat probable future attacks.

With the exponential rise in data, the dependence of cybersecurity on AI is bound to increase. Attackers use this opportunity to discover loopholes in AI-based security systems. Once a system is breached, malware can move undetected, and a data breach exposing an enterprise's sensitive information becomes another possibility. Enterprises must therefore address the vulnerabilities of AI as a proactive security measure.

Fortunately, researchers and data scientists are already working to ensure data security and counter these threats. They study probable attack patterns using methods such as the brute-force attack, the most common attack aimed at a security breach. They also focus on other AI-based security applications such as facial recognition. Often they simulate an attack and then implement countermeasures.

Various types of attacks can be initiated depending on the attacker’s knowledge of the system. In a “white box” attack, the adversary knows both the system’s model and its features. In a “grey box” attack, the adversary does not know the model but is aware of its features, while in a “black box” attack, the adversary knows neither the model nor its features. Even “black box” attackers can cause significant damage by persistently using methods like brute force. In separate research at UC Berkeley, researchers found that non-speech audio such as white noise can be crafted to trigger voice-assistant commands like “unlock the door.”
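To make the “black box” scenario concrete, the toy sketch below shows how an attacker who can only query a detector’s verdicts, with no view of its model or features, might brute-force an evasive input. The detector, its weights, and all function names here are illustrative assumptions, not any real vendor’s system.

```python
import random

# Hypothetical "black box" detector: the attacker can call predict()
# but cannot see the weights or threshold. All values are illustrative.
_WEIGHTS = [0.9, -0.4, 0.7, 0.2]
_THRESHOLD = 1.0

def predict(x):
    """Black-box interface: returns only a label, never scores or internals."""
    score = sum(w * v for w, v in zip(_WEIGHTS, x))
    return "malicious" if score > _THRESHOLD else "benign"

def black_box_evade(x, budget=5000, step=0.3, seed=0):
    """Brute-force evasion: randomly perturb x, observing only the
    returned labels, until the verdict flips or the query budget runs out."""
    rng = random.Random(seed)
    original = predict(x)
    for _ in range(budget):
        candidate = [v + rng.uniform(-step, step) for v in x]
        if predict(candidate) != original:
            return candidate
    return None  # budget exhausted without finding an evasive input

sample = [1.0, 0.0, 0.5, 0.0]      # flagged "malicious" by the toy model
evasion = black_box_evade(sample)  # a nearby input the detector misclassifies
```

Nothing here requires knowledge of the model: the attacker simply spends queries, which is why even a “black box” adversary with enough persistence remains dangerous.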

The above revelation seems intimidating. However, cybersecurity vendors are already working with researchers to develop simulation experiments that serve as platforms for testing their current AI-based cybersecurity systems. Investing proactively in such initiatives will give enterprises the upper hand against their adversaries.