On Artificial (Un)Intelligence

Yechiel Kalmenson - Nov 22 '19 - Dev Community

This post was originally published in Torah && Tech, the weekly newsletter I publish together with my good friend Ben Greenberg. To get the weekly issue delivered straight to your inbox click here.


This week’s T&&T will take a different format than usual. Where most weeks we introduce a point from the week’s Torah portion and try to draw a lesson from it for our tech lives, this week I thought I would share some thoughts I had.

It will be more of a “stream of consciousness” thing and will probably read more like a Talmudic debate than a sermon. You might end up with more questions than answers (and that’s a good thing). The intent is to start a conversation. If you would like to share your thoughts on the issues raised, please feel free to reach out to Ben or me; we’d love to talk to you!

-Yechiel

[Image: a computer screen with scary green letters running across it]

Recently, an uproar erupted when a certain company in the business of issuing credit cards was discovered to have routinely extended less credit (sometimes 20x less) to women than to men under the same financial circumstances.

When called out on its discriminatory practices, the company defended itself by saying that all of the decisions were made by an algorithm and could therefore not be biased.
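As an aside, and purely to make that “the algorithm did it” defense concrete: here is a minimal, hypothetical sketch (all feature names and numbers are invented) of how a model can end up biased even though no one programmed bias into it and gender appears nowhere in the data. If past decisions were skewed, a model trained on them will reproduce the skew through whatever innocuous-looking features happen to correlate with the affected group.

```python
# Hypothetical illustration: a "model" that simply averages the credit limits
# granted in the past to applicants with a similar spending pattern.
# All names and numbers are invented for the sake of the example.

historical_decisions = [
    # (income, spending_pattern, credit_limit_granted)
    (90_000, "pattern_a", 30_000),
    (90_000, "pattern_b",  3_000),  # same income, historically lower limit
    (60_000, "pattern_a", 20_000),
    (60_000, "pattern_b",  2_000),
]

def predict_limit(spending_pattern):
    """Average the limits previously granted for this spending pattern."""
    limits = [limit for _, pattern, limit in historical_decisions
              if pattern == spending_pattern]
    return sum(limits) / len(limits)

# Two applicants with identical finances get very different limits, even
# though gender never appears in the code or the data; if "pattern_b"
# happens to correlate with being a woman, the historical bias is
# reproduced automatically.
print(predict_limit("pattern_a"))  # 25000.0
print(predict_limit("pattern_b"))  # 2500.0
```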

In this case, the effect of the biased algorithm was financial. Yet, algorithms have been called upon to make much more serious decisions, such as who should be let out on bail and for how much, and how healthcare should be administered; these are decisions with potential life-and-death implications.

A while ago, I saw a halachic discussion on whether we can hold an Artificial Intelligence (AI) liable for its actions.

Currently, that conversation is purely theoretical, as it would require AI to have an understanding of right and wrong far beyond anything we have at this time. But recent events did get me thinking about a slightly related question: if someone programs an AI to make certain decisions, and the AI causes harm, to what extent is the one who deployed the AI liable for the AI’s actions?

Put in simpler terms, is saying “it wasn’t me, that decision was made by the algorithm” a valid defense?

Finding the Torah view on this question is understandably challenging; the Torah, after all, doesn’t speak of computers. We will have to get creative and see if we can find an analogous situation.

One approach we might take is to consider the AI as a Shliach, or an agent, of the person who deployed it.

In Halachah, a person can appoint a Shliach to do an action on their behalf, and the activity will be attributed to the appointer (the Meshaleach). For example, if you appoint someone to sell something on your behalf, the sale is attributed to you as if you executed the transaction.

What if the person appointed the Shliach to do something wrong (e.g., to steal something)?

In such a case, the Talmud rules that אין שליח לדבר עבירה (the concept of a Shliach doesn’t apply where sin is involved).

In other words, the Shliach is expected to refuse the problematic job, and if they go ahead with it anyway, they are liable for committing the sin.

Trying to apply this rule to AI, though, leads to some problems. The reasoning behind the ruling that Shlichut doesn’t apply when sin is involved is that the Shliach is expected to use their moral judgment and refuse the Shlichut. That would require the Shliach to have a sense and knowledge of right and wrong, as well as the autonomy to make their own decisions, two things AI is nowhere near achieving, as mentioned earlier.

It would seem, then, that using AI is more like using a tool. Just as a person who kills someone with an arrow can’t defend themselves by saying “it wasn’t me, it was the arrow,” a person who deploys an AI might not be able to hide behind it either.

You might argue that AI is different from an arrow in that once you deploy the AI, you “lose control” over it. The algorithms behind AI are a “black box”; even the programmers who built and trained the AI are unable to know why it makes the decisions it makes.

So unlike an arrow, where there is a direct causal relationship between the act of shooting the arrow and the victim getting hurt, the causal link with AI is not so clear-cut.

But then again, it seems like the Talmud discusses a case that might be analogous here.

Regarding a case where a person lit a fire on their property and the fire got out of control and damaged a neighboring property, the Talmud says the following: “We have learned that Rabbi Yochanan said: [he is liable for] his fire just as [he is liable for] his arrow.”

A closer look at the reasoning behind Rabbi Yochanan’s ruling, however, reveals a crucial difference. The reason you are liable for your fire spreading is that fire spreading is a predictable consequence of lighting a fire. If your fire spread due to unusually strong wind, for example, then you would not be liable because the spread of the fire could not have been predicted.

One can argue that the fact that the AI made its own decision, one that could not be predicted even by those who programmed it and wrote the algorithms behind it, means the AI has some sort of agency here. Maybe not enough agency to hold the AI liable, but perhaps just enough to exculpate those who deployed it (as long as they were unaware that the AI was making faulty decisions).

Is there something between the full agency of a Shliach and the complete lack of agency of a tool/fire?

Let’s look at another passage:

“One who sends fire in the hands of a child or someone who is mentally impaired is not liable by the laws of man but is liable by the laws of Heaven.”

The idea that someone can be “not liable by the laws of man but liable by the laws of Heaven” is used often in the Talmud to refer to actions that are technically legal, but still unethical. So while the court can’t prosecute someone who sends fire in the hands of a child, doing so is still morally and ethically wrong.

So perhaps that is how we can classify AI? Like a child who has enough agency to make decisions, but not enough to distinguish right from wrong? Are companies hiding behind black-box algorithms technically legal, but morally questionable?

As I said in the beginning, I don’t know the answer to these questions, but I do hope we can start a discussion, because the days when such questions were the realm of theoretical philosophers are coming to an end faster than we think!
