Can Artificial Intelligence Keep Your Home Secure?

Home security is expected to be a $47.5 billion business by 2020. Top-of-the-line systems can include alarms, cameras, dogs, guards and even secret passageways. But even the most sophisticated systems can have a fundamental flaw: human error.

Paul Sullivan, New York Times

Now, security companies are hoping to harness the potential of artificial intelligence to better safeguard homes.

Experts say there are risks to using AI, including concerns about privacy, the collection of personal data, and racial bias, but security companies are promising better service at lower prices. Artificial intelligence, they say, can see more things faster than systems that rely on humans, who may not be paying attention.

“We put in the cameras to create a perimeter with no dead zones,” said Ken Young, chief executive of Edgeworth Security, a consulting firm in Pittsburgh that offers monitoring solutions.

To protect a property, these systems use technology like geofencing, facial recognition and AI-enabled cameras to help identify intruders. If someone crosses that boundary, the cameras alert a command center. If someone loiters too long at a call box at the entrance to an estate, the system sends an alert to the monitoring center, which responds with a tailored warning, like "You in the blue shirt, please leave."
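The basic logic described here, a virtual fence that triggers on a crossing and a timer that triggers on loitering, can be illustrated with a minimal sketch. This is a hypothetical illustration of the general technique, not Edgeworth's actual system; the polygon fence, ray-casting test and 60-second threshold are all assumptions.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the fence polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

LOITER_SECONDS = 60  # hypothetical threshold for a loitering alert

def check_track(track, fence):
    """track: list of (timestamp, x, y) detections for one person.
    Returns a list of (timestamp, alert) tuples."""
    alerts = []
    first_seen = None
    for t, x, y in track:
        if point_in_polygon(x, y, fence):
            alerts.append((t, "perimeter breach"))
        if first_seen is None:
            first_seen = t
        elif t - first_seen >= LOITER_SECONDS:
            alerts.append((t, "loitering"))
            first_seen = t  # reset so the same lingering doesn't re-alert every frame
    return alerts
```

A real system would feed the tracker with camera detections rather than raw coordinates, but the breach-versus-loiter distinction works the same way.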

Young said the system uses artificial intelligence to tell the difference between movement into and out of a property, but it also uses facial recognition technology to distinguish regular visitors — like gardeners or delivery people — from strangers.

“When I worked at the White House, the grounds were gridded out with cables,” said Young, who was part of the Marine One security detail and served as an emergency action planner to the executive branch during President George W. Bush’s administration. “Now, it’s all done through the lens of the camera.”

Companies like Galaxy Security also make enhanced video cameras like the ones Edgeworth uses, and other security companies offer enhanced video surveillance as an add-on to other camera systems.

The systems that Edgeworth installs can start around $20,000 for eight cameras on a small property and rise to more than $600,000 for large estates. Monitoring costs $8 to $12 an hour, and homeowners can choose when they want the monitoring turned on.

That level of security is a draw for wealthy homeowners and property owners.

Actor Joe Manganiello realized the weakness of his home security system a few years ago. He was at home in Beverly Hills, California, with his wife, the actress Sofia Vergara, when he heard someone walking around their property.

Vergara checked the security cameras and noticed they were blacked out. Two men on their property had been spray-painting the lenses for nearly 45 minutes, which the company monitoring the security feed had missed.

“These guys were trying to crowbar in the kitchen window; then they moved to the living room door,” said Manganiello, who is known for his roles in “True Blood” and “Magic Mike.” “I was standing at the top of the stairs with a weapon.”

When the men broke through the front door, the security alarm sounded and they ran off, he said. But the attempted break-in made him realize it was time for a security upgrade.

Many multimillion-dollar homes are ill equipped from a security perspective, professionals say. According to a 2011 Department of Justice study, 94 to 98 percent of burglar alarms were false, making the systems unreliable.

Tom Gallagher, president of DSL Construction, which owns 26 residential buildings with more than 1,400 apartments in Los Angeles, said he wanted to change how the properties were protected.

“Over the years, it just became increasingly clear to me that the quality of the guards and the guard services were horrible,” he said. “They weren’t very effective.”

At first, he tried to create his own guard company, but that was too expensive, so he began researching enhanced security systems. He said installing the systems in all of the company's properties would save $400,000 to $500,000 a year. They would also be more reliable.

"We had cameras out there when we still had guards," Gallagher said of his trial phase. "We had an incident that the cameras picked up. Where was the guard? He was sleeping in his car for six hours."

Thomas Tull, the chief executive of Tulco, which owns Edgeworth, said what he wanted for himself and his clients was a system that anticipated risks, not just responded to them.

He gave as an example a worker in one client’s home who posted a picture of the house online; the Edgeworth security system flagged the photo within a minute, and it was taken down. In another instance, the plans for someone’s compound were detected on the dark web.

“Who knows what they were going to do with it?” Tull said. “That’s a problem that didn’t exist 20 or 25 years ago, this digital extension of yourself.”

How these systems learn the difference between good behavior and bad is a fraught ethical question.

“There is inherent bias in the computational systems,” said Illah R. Nourbakhsh, the K&L Gates professor of ethics and computational technologies at Carnegie Mellon University’s Create Lab.

A recent study at the MIT Media Lab showed how biases in the real world could seep into artificial intelligence. Commercial software is nearly flawless at identifying the gender of white men, researchers found, but far less accurate for darker-skinned women.

And Google had to apologize in 2015 after its image-recognition photo app mistakenly labeled photos of black people as “gorillas.”

Nourbakhsh said that AI-enhanced security systems could struggle to determine whether a nonwhite person was arriving as a guest, a worker or an intruder.

One way to guard against the system's bias is to make sure humans are still verifying the images before responding.

“When you take the human out of the loop, you lose the empathetic component,” Nourbakhsh said. “If you keep humans in the loop and use these systems, you get the best of all worlds.”

Security consultants recommend a layered approach that could include artificial intelligence. Michael A. Silva, principal of Silva Consultants in Seattle, said people needed to do a risk assessment first. Some very wealthy people are relatively unknown, so their risk is low, he said, but a less wealthy person with controversial opinions could be a more prominent target.

Silva said any security plan started with the basics — good locks, strong doors, an alarm system — and could be expanded to full perimeter screening with either monitoring enhanced with artificial intelligence or more traditional motion detectors and alarms. Celebrities and other well-known people may want to build a safe room in their homes, he said, or have their own command centers.

“Before you start prescribing medicine, you need to diagnose the condition,” Silva said. “A risk assessment is really crucial.”

Christopher Falkenberg, a former Secret Service agent and the president of Insite Risk Management, said that with threats being made so easily over social media, he needed to help clients control their personal information and who had access to it.

He said his firm used existing technology and had created some of its own programs to track what was being said about clients online.

“We used to be concerned with a small circle of people with information about you — the gardeners, the people who were on the property,” Falkenberg said. “We can’t vet all the people online the way we used to vet the gardener. We have to talk to clients about controlling the information that they personally put out there.”

At a minimum, what any security program hopes to do is make a home less attractive to criminals.

“We’ll never reduce the crime rate in East Hampton or Greenwich,” Falkenberg said. But, he added, “if we can make it that much more difficult to target our people, we’ll have achieved our goal.”

A few months ago, Manganiello and Vergara's home was targeted again. But this time, their new Edgeworth system, with geofencing technology and AI-enabled cameras, detected three men before they could get close to the house.

“As they were trying to figure out where to come in, the command center was guiding the police to our house,” Manganiello said. “They were able to apprehend them and their getaway driver before they could even touch the house.”

Copyright 2023 New York Times News Service. All rights reserved.