Making AI-based Systems Foolproof
By Enterprise Security Magazine | Tuesday, May 14, 2019
With the exponential rise of adversaries targeting AI (Artificial Intelligence), government researchers are looking for ways to inspect artificial intelligence and machine learning systems to determine whether they have been tampered with.
Attacks on AI systems insert crafted information or images into the training data to trick the system into misclassifying what it is shown. For example, a system trained to recognize traffic signs would learn from hundreds of labeled pictures of stop signs and speed limit signs. An attacker could insert into the training database a few images of stop signs with yellow sticky notes attached, labeled as 35 mph speed limit signs. An autonomous driving system trained on that data would then interpret any stop sign bearing a sticky note as a speed limit sign and drive right through it.
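The sticky-note scenario above can be sketched in a few lines. This is a minimal, hypothetical simulation (the class ids, image sizes, and the `add_sticky_note` helper are illustrative, not from any real training pipeline): a bright square stands in for the sticky note, and a handful of triggered stop-sign images are relabeled as speed-limit signs.

```python
import numpy as np

STOP, SPEED_35 = 0, 1  # hypothetical class ids for the two sign types

def add_sticky_note(img, size=6, value=255):
    """Paste a bright square 'sticky note' trigger in the image's corner."""
    out = img.copy()
    out[:size, :size] = value
    return out

def poison_dataset(images, labels, n_poison, rng):
    """Trigger a few stop-sign images and relabel them as 35 mph signs."""
    images, labels = images.copy(), labels.copy()
    stop_idx = np.flatnonzero(labels == STOP)
    chosen = rng.choice(stop_idx, size=n_poison, replace=False)
    for i in chosen:
        images[i] = add_sticky_note(images[i])
        labels[i] = SPEED_35  # mislabeled on purpose -- this is the poison
    return images, labels

# Toy stand-in dataset: 100 random 32x32 grayscale "sign" images.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 32, 32), dtype=np.uint8)
labels = np.array([STOP] * 50 + [SPEED_35] * 50)

p_images, p_labels = poison_dataset(images, labels, n_poison=5, rng=rng)
print((p_labels != labels).sum())  # 5 labels were flipped
```

A model trained on the poisoned set can fit both the clean data and the trigger, which is what makes this class of attack hard to catch by accuracy checks alone.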
The Army Research Office (ARO) and the Intelligence Advanced Research Projects Activity (IARPA) are investigating techniques to spot and stop these trojans in AI systems. Given the impossibility of cleaning and securing the entire training data pipeline, the TrojAI program's broad agency announcement seeks to develop software that automatically inspects an AI and predicts whether it contains a trojan.
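One black-box heuristic such an inspector could use is to stamp a candidate trigger onto clean inputs and measure how strongly predictions collapse toward a single class. The sketch below is an assumption-laden illustration, not the program's actual method: `toy_model` is a contrived trojaned classifier, and `trigger_shift_rate` is a hypothetical helper.

```python
import numpy as np

def trigger_shift_rate(model, clean_images, patch, target_class):
    """Fraction of clean inputs pushed to target_class once the patch is stamped on."""
    stamped = clean_images.copy()
    stamped[:, :patch.shape[0], :patch.shape[1]] = patch
    preds = np.array([model(x) for x in stamped])
    return (preds == target_class).mean()

# Contrived trojaned model: a bright top-left corner forces class 1.
def toy_model(img):
    return 1 if img[:4, :4].mean() > 200 else 0

rng = np.random.default_rng(1)
clean = rng.integers(0, 120, size=(50, 16, 16), dtype=np.uint8)  # dim images -> class 0
patch = np.full((4, 4), 255, dtype=np.uint8)                      # candidate trigger

rate = trigger_shift_rate(toy_model, clean, patch, target_class=1)
print(rate)  # 1.0 -- every stamped input flips to the trigger's class
```

A shift rate near 1.0 for a small, input-independent patch is suspicious: a clean model's predictions should depend on the whole sign, not on one corner.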
Initially, selected developers will work on AI systems that classify small images, but the program may later expand to systems that classify audio and text or perform other tasks such as answering questions or playing games. As the program continues, the trojan-detection problem will grow harder as aspects of the challenge change, such as the amount of test data, the number of neural network architectures, and the variety of trojan triggers.
Performers will have access to the AI's source code and architecture, and possibly a small number of legitimate data examples. The program calls for ongoing software development, with teams delivering containerized software that detects which AIs have been subjected to a trojan attack causing misclassification.
The software's source code and documentation will be posted to an open-source site such as GitHub so the public can use it freely.