Making Homes Safer Using AI

By Enterprise Security Magazine | Friday, November 30, 2018

Home security has become a booming business, with systems ranging from cameras and alarms to guards, dogs, and even secret passageways. Because human error is often the weakest link, many security providers are attempting to harness AI to make homes safer.

While using Artificial Intelligence in the security system raises concerns about personal data collection, privacy, and racial bias and sensitivity, AI works faster and more effectively than methods relying on humans. Technologies like facial recognition, geofencing, and AI-enabled cameras can help identify intruders.
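Geofencing, one of the technologies mentioned above, amounts to checking whether a detected person or device lies within a virtual boundary around the home. A minimal sketch, assuming a circular fence and illustrative coordinates (the centre point, radius, and a flat-earth distance approximation are all assumptions, not any vendor's actual method):

```python
import math

# Assumed home centre and fence radius -- purely illustrative values.
HOME_LAT, HOME_LON = 34.0736, -118.4004
RADIUS_M = 50.0  # fence radius in metres

def inside_geofence(lat: float, lon: float) -> bool:
    """Rough flat-earth distance check, adequate at house scale."""
    # ~111,320 m per degree of latitude; longitude scaled by cos(latitude).
    dy = (lat - HOME_LAT) * 111_320.0
    dx = (lon - HOME_LON) * 111_320.0 * math.cos(math.radians(HOME_LAT))
    return math.hypot(dx, dy) <= RADIUS_M
```

A monitoring loop would call `inside_geofence` on each new position fix and raise an event only on a transition from outside to inside, rather than on every reading.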

These cameras can alert a command center when boundary walls are breached, or issue tailored warnings to unwanted loiterers. Monitoring systems use AI to differentiate between movement into and out of a property, and use facial recognition to distinguish regular visitors from strangers. AI thereby reduces the reliance on manpower and on a tangle of devices with long cables. Some companies also offer video surveillance as an additional feature, but the core security requirement is often met by the effective use of AI alone.
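The "regular visitor versus stranger" distinction described above is commonly implemented by comparing a face embedding against those of enrolled residents. A hedged sketch, where the embeddings, the distance threshold, and the labels are all assumptions standing in for a real face-recognition model's output:

```python
import math

# Hypothetical embeddings of enrolled residents; in practice these would
# be high-dimensional vectors produced by a face-recognition model.
KNOWN_FACES = {
    "resident_1": (0.12, 0.80, 0.35),
    "resident_2": (0.55, 0.20, 0.90),
}

MATCH_THRESHOLD = 0.25  # assumed distance cutoff, tuned per deployment

def classify_visitor(embedding) -> str:
    """Return the closest enrolled label, or 'stranger' if none is close."""
    best_label, best_dist = "stranger", float("inf")
    for label, known in KNOWN_FACES.items():
        dist = math.dist(embedding, known)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= MATCH_THRESHOLD else "stranger"
```

The threshold controls the trade-off the article alludes to later: set too loosely, strangers are waved through; set too tightly, residents are flagged, and the error rate can differ across demographic groups.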

Older security systems were often flawed. The actor Joe Manganiello learned this at his Beverly Hills home, which he shares with his wife, actress Sofia Vergara: two men attempting a break-in spent nearly 45 minutes spray-painting the couple's security cameras. The intruders fled only when the alarm sounded as they forced open the front door, and the situation could easily have turned out worse.

Similarly, many multi-million-dollar homes are ill-equipped from a security perspective. Waning faith in security guards, coupled with their cost, has driven research into enhanced security systems.

Today's homeowners are looking for security systems that do not merely respond to risks but also anticipate them. However, these systems often carry an inherent bias. A recent study shows that real-world biases often seep into AI: while commercial software identifies white males accurately, its identification of dark-skinned women is less reliable. AI-enhanced systems have likewise struggled to determine whether a non-white visitor was a regular worker, a guest, or an intruder. Manual verification of images before taking any action is therefore recommended to avoid errors.
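The manual-verification recommendation can be made concrete by never acting on a model's label directly: detections go into a review queue, and only a human-confirmed verdict triggers an alert. A minimal sketch, with all class and label names assumed for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    image_id: str
    ai_label: str      # the model's guess, e.g. "intruder"
    confidence: float

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, detection: Detection) -> None:
        # Never trigger an alarm straight from the AI label;
        # hold the detection for human review instead.
        self.pending.append(detection)

    def review(self, image_id: str, human_label: str) -> bool:
        # A person confirms or overrides the AI label; only a
        # confirmed "intruder" verdict results in an alert.
        for d in list(self.pending):
            if d.image_id == image_id:
                self.pending.remove(d)
                return human_label == "intruder"
        return False
```

Keeping the human verdict authoritative means a biased model misclassifying a regular worker as an intruder produces a review item, not a false alarm.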

A layered approach to security is recommended when using AI—this can be considered akin to diagnosing a medical condition before prescribing medication. With the increase in social media threats, even access to the personal information of clients needs to be controlled. As a result, some security programs monitor the information available online about the clients’ residence.
