
Can A.I. Really Stop Mass Shooters?

Posted by William Kirk, Partner | Feb 23, 2021

A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention against gun violence: an ethical lockout.

The big idea here is to stop mass shootings and other unethical uses of firearms through the development of an AI that can recognize intent, judge whether a use is ethical, and ultimately render a firearm inert if a user tries to ready it for improper fire.

That sounds like a lofty goal; in fact, the researchers themselves refer to it as a “blue sky” idea. But they argue the technology to make it possible is already here.
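To make that concrete, here is a minimal sketch, in Python, of the three-stage pipeline the researchers describe: perceive what the user is doing, judge whether the intended use is ethical, and lock the weapon out if it isn't. Every name, rule, and category below is an illustrative assumption on our part, not anything specified in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    LOCK_OUT = auto()


@dataclass
class Context:
    location: str          # e.g. "firing_range", "hunting_area", "parking_lot"
    inferred_intent: str   # e.g. "target_practice", "self_defense", "assault"


def ethical_judgment(ctx: Context) -> Verdict:
    """Hypothetical rule set: allow only uses the system can justify."""
    permitted = {
        ("firing_range", "target_practice"),
        ("hunting_area", "hunting"),
    }
    # Self-defense is the one intent permitted regardless of location,
    # provided the threat assessment (not modeled here) supports it.
    if ctx.inferred_intent == "self_defense":
        return Verdict.ALLOW
    if (ctx.location, ctx.inferred_intent) in permitted:
        return Verdict.ALLOW
    return Verdict.LOCK_OUT


def disable_firing_mechanism() -> None:
    # Stand-in for the hardware interlock the paper leaves unspecified.
    print("Firing mechanism disabled.")


def on_ready_attempt(ctx: Context) -> None:
    """Entry point: called whenever the user tries to ready the weapon."""
    if ethical_judgment(ctx) is Verdict.LOCK_OUT:
        disable_firing_mechanism()
```

The hard part, of course, is not the lookup table but reliably producing the inferred intent in the first place; that is the piece the researchers claim recent AI advances make feasible.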

According to the team's research:

Predictably, some will object as follows: “The concept you introduce is attractive. But unfortunately it's nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, science- and engineering-wise?” We answer in the affirmative, confidently.

The paper goes on to explain how recent breakthroughs, built on long-term research, have led to the development of various AI-powered reasoning systems that could make a fairly simple ethical judgment system for firearms practical to implement.

This paper doesn't describe the creation of a smart gun itself, but rather the potential efficacy of an AI system that can make the same kinds of decisions for firearms users as, for example, the systems in cars that lock out drivers who can't pass a breathalyzer test.

In this way, the AI would be trained to recognize the human intent behind an action. The researchers describe the recent mass shooting at a Walmart in El Paso and offer a different view of what could have happened:

The shooter is driving to Walmart, an assault rifle, and a massive amount of ammunition, in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are).

At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.

Sure, this paints a wonderful picture. It's hard to imagine any objections to a system that worked perfectly. Nobody needs to load, rack, or fire a firearm in a Walmart parking lot unless they're in danger. If the AI could be developed so that it allowed users to fire only in ethical situations such as self-defense, at a firing range, or in designated legal hunting areas, thousands of lives could be saved every year.
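Continuing the illustrative sketch above, the re-imagined parking-lot scenario reduces to a pair of rule lookups (again, the locations and intents here are our own placeholder labels, not the paper's):

```python
# The re-imagined El Paso scenario: the AI infers no self-defense
# justification in a store parking lot, so the weapon is locked out
# before it can be readied.
parking_lot = Context(location="parking_lot", inferred_intent="assault")
assert ethical_judgment(parking_lot) is Verdict.LOCK_OUT

# The same weapon used for practice at a firing range stays live.
range_day = Context(location="firing_range", inferred_intent="target_practice")
assert ethical_judgment(range_day) is Verdict.ALLOW
```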

Of course, the researchers certainly anticipate myriad objections. After all, they're focused on navigating the US political landscape; in many other nations, gun control is commonplace.

The research anticipated such arguments as the very logical "criminals will just use firearms that don't have an AI watchdog embedded," which is, of course, exactly correct. Consider how many crimes are committed each day with illegally obtained firearms versus legally owned ones.

What's most interesting here isn't the development of a smart gun, but the creation of "ethically correct AI." Of course, what counts as "ethically correct AI" depends on its subjective programming. Moreover, there can be a difference between what is legally permissible and what is morally right.

But perhaps AI could still be used, someday, to determine the evil intent of a person possessing a firearm who is clearly taking action to threaten others. AI could lock doors, stop elevators, alert authorities, change traffic light patterns, send location-based text alerts, and take any number of other reactive measures, including unlocking law enforcement and security personnel's weapons for defense.
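As a thought experiment, the fan-out of such countermeasures might look something like the hypothetical sketch below; none of these integrations exist, and every hook is a stand-in for a real building-systems or emergency-services interface.

```python
from typing import Callable

# Hypothetical reactive measures triggered once hostile intent is
# confirmed. Each lambda is a placeholder for a real integration.
REACTIVE_MEASURES: list[Callable[[str], None]] = [
    lambda loc: print(f"Locking doors near {loc}"),
    lambda loc: print(f"Holding elevators near {loc}"),
    lambda loc: print(f"Alerting authorities: armed threat at {loc}"),
    lambda loc: print(f"Sending location-based alerts around {loc}"),
]


def respond_to_threat(location: str) -> None:
    """Trigger every configured countermeasure for the given location."""
    for measure in REACTIVE_MEASURES:
        measure(location)


respond_to_threat("Main St & 5th Ave")
```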

Realistically, it takes a leap of faith to assume an ethical AI can be made to understand the difference between situations such as a home invasion and a domestic violence incident, but this research claims the technology is already here.

AI has already made significant inroads into the automobile industry, and there have been numerous stories of autonomous cars involved in terrible accidents. The researchers acknowledge the parallel: with driverless cars, we know people have already died because they relied on an AI to protect them. But we also know the potential to save tens of thousands of lives is too great to ignore in the face of a, so far, relatively small number of accidental fatalities.

It's likely that, just like Tesla’s AI, a gun control AI could result in accidental and unnecessary deaths. But approximately 24,000 people die annually in the US from suicide by firearm, roughly 1,500 children are killed by gun violence, and almost 14,000 adults are murdered with guns, nearly 40,000 deaths in total. It stands to reason that an AI intervention could significantly decrease those numbers.

You can read the whole paper here.

About the Author

William Kirk, Partner

Bill Kirk has been named a Super Lawyer by Washington Law and Politics Magazine every year since 2003. He currently serves on the Board of Regents to the National College for DUI Defense and is the President of the Washington Foundation for Criminal Justice. Bill is one of only two attorneys in this state to pass the National College's Board Certification Exam.

