AI, Criminal Responsibility and the Black Box Problem


            In the last post in this series, Alexa made you happy, wealthy and safe, but by means you might not have foreseen. Your worst enemy was mauled in a traffic accident, one million dollars appeared in your bank account, and you lay strapped to a gurney with a morphine drip.
            It wasn’t what you intended, of course. But you hadn’t given Alexa any instructions limiting the means she could use as she cruised the Internet of Things to make your dreams come true.
            How did Alexa do all that?
            Welcome to the ubiquitous black box problem.
            The fact of the matter is that most of us don’t have any idea how our electronic devices work. It is enough that they perform the tasks we rely on them to perform. We sacrifice comprehension for reliability.
            A commonplace in drug cases in some jurisdictions is the need to calculate the distance from the scene of a drug deal to some other location, such as a school or a public housing project. The reason is that such offenses committed within a certain distance of these places carry a more serious penalty. How to calculate the distance?
            Typically, a member of the municipal clerk’s office comes in to court with a computer-generated map, plotting the location of the alleged crime against the nearest school or housing project. The software used to plot the locations also calculates the distance.
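            For illustration only, consider what such a program is doing under the hood. The sketch below is not the city’s software; it assumes made-up coordinates and uses the haversine formula, one common way of computing straight-line distance between two latitude/longitude points, along with a hypothetical 1,500-foot statutory zone.

import math

def haversine_feet(lat1, lon1, lat2, lon2):
    """Straight-line ("as the crow flies") distance, in feet, between
    two latitude/longitude points, using the haversine formula."""
    earth_radius_feet = 20_902_231                 # mean Earth radius, in feet
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * earth_radius_feet * math.asin(math.sqrt(a))

# Made-up coordinates for the alleged deal and the nearest school.
deal_site = (41.3083, -72.9279)
school = (41.3101, -72.9251)

distance = haversine_feet(*deal_site, *school)
print(f"{distance:.0f} feet")
print("within 1,500 feet?", distance <= 1500)      # hypothetical statutory zone

            There is nothing exotic in that arithmetic. The trouble is that the witness who reports the number has never looked at it.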
            How does the clerk know whether the distance calculated is, in fact, accurate?
            Often, they don’t. They simply testify that the city relies on such and such a program and that the program has been used to calculate distances in other court cases. The witness then offers the distance relevant to the case being tried.
Is this reliable evidence?
           The witness can’t say because the witness has no idea how the software arrived at its conclusion. The evidence is reliable simply because we rely on it. It’s circular reasoning of the sort the courts are all too comfortable accepting.
           Welcome to the black box.
           Now, consider Alexa.
           Do you have any idea how the device accomplishes the tasks you set her?
           “Alexa, find me music by James Taylor,” you say.
           A light flickers, and soon the announcement is made. Alexa has found the music. It is played. All you really know is that, shortly thereafter, what certainly sounds like James Taylor is singing. Whence did the recording come?
           We take it on trust that Alexa followed lawful channels.
           To the degree devices such as Alexa rely on machine learning to acquire information about us, and then to accomplish the goals we determine, we may be causing actions to take place about which we know nothing. Are we criminally responsible for those acts?
           Consider one very common, and basic, definition of machine learning, provided by Arthur Samuel in 1959. Machine learning takes place when a computer learns how to do something without being explicitly programmed to do it. Samuel was the creator of a computerized checkers game. He programmed a computer to recognize the basic rules of the game and to choose moves with the aim of winning. He did not plot out each move. The machine was free to select moves based on the tactical considerations each position presented. Because the game was transparent, and took place one move at a time, the human player understood what the computer did. The rules of the game were not violated.
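            Checkers itself would take pages, but a minimal sketch of the same division of labor makes the point. The toy below is not Samuel’s program; it assumes a made-up “take-away” game in which players alternately remove one, two or three stones and whoever takes the last stone wins. The programmer states only the rules and what counts as a won position; the search routine, not a preprogrammed list of moves, decides what to play. (Samuel’s learning went further still: his program adjusted its scoring of positions as it played, which even this toy omits.)

def legal_moves(pile):
    """The rules: you may take one, two or three stones, if they are there."""
    return [n for n in (1, 2, 3) if n <= pile]

def value(pile):
    """+1 if the player to move can force a win from this pile, -1 if not."""
    if pile == 0:
        return -1        # the previous player took the last stone and won
    return max(-value(pile - n) for n in legal_moves(pile))

def best_move(pile):
    """Pick the move whose resulting position is worst for the opponent."""
    return max(legal_moves(pile), key=lambda n: -value(pile - n))

pile = 10
while pile > 0:
    move = best_move(pile)
    print(f"pile of {pile}: machine takes {move}")
    pile -= move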
             Or consider a more nuanced definition of machine learning, this one offered by Tom Mitchell: a computer learns from experience with respect to some task and some performance measure when its performance at the task, as measured by that measure, improves with experience. The machine engages in unsupervised learning when it improves in this way without the specific steps being programmed.
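             As a toy illustration of Mitchell’s formulation, with invented numbers and a far simpler task than anything Alexa faces: suppose the task is choosing among three slot machines, the performance measure is average payoff, and the experience is repeated play. The epsilon-greedy sketch below is never told which machine pays best; its own running estimates steer it there over time.

import random

random.seed(0)

true_payout = [0.2, 0.5, 0.8]       # invented payoff odds, hidden from the learner
estimates = [0.0, 0.0, 0.0]         # the learner's running estimate per machine
plays = [0, 0, 0]
total = 0.0
running_average = []

for t in range(1, 2001):
    if random.random() < 0.1:                             # occasionally explore
        arm = random.randrange(3)
    else:                                                 # otherwise play the best estimate
        arm = max(range(3), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    plays[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / plays[arm]   # update running average
    total += reward
    running_average.append(total / t)

print("average payoff after   100 plays:", round(running_average[99], 2))
print("average payoff after 2,000 plays:", round(running_average[-1], 2))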
             As machine learning and artificial intelligence increase in complexity, we will come up against situations in which a computer has accomplished something without our being entirely sure how it did so. We will be in a position similar to that of the municipal clerk testifying about the distance from a narcotics transaction to a public school: we will report on the outcome without comprehension of the means.
           So how did Alexa place one million dollars in your account? If she did so by means of moving the money from another person’s account, or by exploiting a loophole in a financial institution’s computer, or by engaging in a form of trading regulators don’t sanction, she may have engaged in theft. You didn’t tell her to do that, but, as her algorithm was set loose on the world of data around her, she accomplished her mission.
             Obviously, the hypotheticals about your happiness, the misfortunes of your enemy, and your being strapped to a gurney with a morphine drip are extreme, but they illustrate a point: As AI and machine learning become more sophisticated, they will be able to do more with general instructions. Most of these processes will be opaque, and will take place out of our view.
            Put simply, the problem is this: When a computer engages in unsupervised learning to accomplish a task, and the result of that task causes harm to another, how will the criminal law respond?  That is the general question this series will address.
