Artificial intelligence (AI) in video surveillance provides organizations with a powerful way to make smarter, faster decisions about potential threats on their property.
For example, AI technology can distinguish between a human being and an animal on your premises. It can also differentiate between someone innocently cutting through your parking lot to reach the other side of the road and someone approaching a car, storage container or building to commit a crime.
AI bias occurs when an AI system's mathematical algorithms produce results that are biased against groups of people on the basis of gender, race, geography or other factors. This is often unintentional and happens when the AI "learns" from historical data that reflects prior human bias (for example, assumptions from many years ago that women are a bigger credit risk than men).
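To make this concrete, here is a minimal sketch, using entirely hypothetical data, of how a model can inherit bias from its training set: a naive model trained on past loan decisions that were skewed against one group will simply reproduce that skew in its predictions.

```python
# Hypothetical example: a model "learning" bias from biased historical data.
from collections import defaultdict

# Made-up historical records of (group, loan_approved) decisions that
# encode a past human bias: women were approved far less often than men.
history = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", False), ("women", False), ("women", True), ("women", False),
]

# "Training": tally approvals and denials per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, denied]
for group, approved in history:
    counts[group][0 if approved else 1] += 1

def predict(group):
    # The naive model predicts the majority historical outcome for the group,
    # so the old bias carries straight through into new decisions.
    approved, denied = counts[group]
    return approved > denied

print(predict("men"))    # approves
print(predict("women"))  # denies -- bias inherited from history
```

Real-world systems use far more sophisticated models, but the underlying failure mode is the same: if the training data reflects biased human decisions, the model treats that bias as a pattern worth learning.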
Whether AI bias is a real concern, and how much it should matter, is hotly debated today. To dive deeper into market perceptions and opinions around ethical AI and AI bias, Pro-Vigil surveyed 100 users of digital video surveillance.
The survey was designed to gain feedback across two dimensions: people’s knowledge of AI and how it’s being used in their video surveillance systems, and their opinions around AI bias.
Shockingly, some organizations admitted they are more concerned with their AI-powered video surveillance system’s ability to deter crime than they are about any potential bias issues. Here are some of the key takeaways from our research:
64% of respondents indicated they either don’t believe AI can be biased or aren’t sure if it can be biased.
62% said they either don’t care or aren’t sure if they care if their AI is biased.
When asked if they would do anything if their AI video system was doing a good job deterring crime, but was using unethical algorithms, more than one-third (37%) of respondents said they would do nothing.
Most survey respondents knew whether their video surveillance systems were using AI: 64% indicated they weren't using AI, 21% said they were, and the rest were unsure.
26% indicated there is a person in their organization who is responsible for understanding how AI is used. The rest either didn’t know or said there was no such person.
Nearly 90% said they would not know how to check whether their AI video surveillance system was biased.