AI and Rabbi Plony


My first brush with artificial intelligence was during the 1970s. AI was in its infancy and mostly based upon deducing things from rules, such as “If a person has a headache, then give him Tylenol.” Software called MYCIN could diagnose bacterial infections. I suggested adapting this technology to the treatment of human poisoning and collaborated with the Maryland Poison Center. The software I developed was called The Interactive Poison Expert for Classification and Control (IPECAC). It was a fun project, but it really didn’t do very much except allow me to sit at the Poison Center and listen in on some interesting cases. (Q: “What should I do? My dog ate a whole jar of Tums!” A: “Take the dog out for Mexican food.”)
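For readers curious what "deducing things from rules" looked like in practice, here is a minimal sketch of the forward-chaining idea behind systems like MYCIN. The rule names and facts are invented for illustration; nothing here reflects the actual MYCIN or IPECAC rule base.

```python
# A toy rule-based "expert system": each rule says that if all of its
# conditions are known facts, a new conclusion may be added.
# (All rule names below are invented examples, not real medical rules.)
rules = [
    ({"ingested_antacid", "patient_is_dog"}, "low_risk"),
    ({"low_risk"}, "no_treatment_needed"),
    ({"ingested_acetaminophen", "large_dose"}, "refer_to_poison_center"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # conditions <= facts tests whether every condition is satisfied
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = forward_chain({"ingested_antacid", "patient_is_dog"}, rules)
```

Starting from the two facts about the Tums-eating dog, the engine derives "low_risk" and then, from that, "no_treatment_needed" — chaining rules together just as the early diagnostic systems did.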

Another area of interest was speech recognition. All kinds of people, all kinds of voices! We quickly came to the conclusion that this was and would always be impossible.

I don’t have to tell you how wrong we were. The world has changed. Rapidly! Speech recognition has become so common that even that most hated human development of all history, the phone tree, can lead a caller through an interminable conversation while not taking offense at the perhaps “impolite” language coming in return.

None of this existed in my youth, and I wonder with some trepidation – no, fear! – what the world will be like for my grandchildren.

AI technology continues to advance at an astounding pace. Already, modern AI systems have beaten chess grandmasters. They can write term papers for dishonest students. And now, they have begun to produce “thought” patterns unanticipated by their creators.

Years ago, one of the inventors of the modern computer, Alan Turing, asked whether computers could someday “think.” What would that mean? To answer this question, he conceived of the “Turing Test.” There are two workstations, one connected to a computer and the other to a person. A user is invited to interact with them in any way he wishes. He can ask questions, play games, or ask for jokes. If the user is unable to determine which is the computer and which is the person, then Turing would say that the machine “thinks.”

There is a perhaps apocryphal story about a computer laboratory conducting a Turing test. Various researchers were unable to determine which was the computer and which was the person. Finally, they invited a non-technical businessman to try. He sat for a long time and didn’t do anything. Eventually, they asked, “Aren’t you going to do anything?”

He replied, “You said I could do anything I want, so I choose to do nothing!”

A long time elapsed. And elapsed, and elapsed. Eventually, one of the workstations typed, “Is anyone there? When does the test start?”

“That,” the businessman declared, “is the person!”

Well, there is nothing sacrosanct about the Turing Test. Perhaps we should have other criteria for determining whether a computer thinks, but, to date, the Turing Test has been passed only in limited experiments.

Will modern AI eventually produce machines that can be said to think? What about “consciousness”? Would a thinking machine be a sentient being? What would happen then? Would there be moral questions? Would it be murder to turn off such a machine? What would civil libertarians say? These questions are already being asked. Organizations such as PETA do not distinguish between people and animals. Is it just a matter of time before AI systems are similarly valued?

What would this mean for us as humans? Already, there are computer programs that play checkers perfectly and cannot lose a game. Will this happen with the much more complicated chess? What about the practice of medicine? A physician friend once told me that a good clinician should remember every case he has encountered. But people forget. Computers do not. What about the practice of law? Could a computer have at its command all the legal cases of the United States courts? What about thousands of years of halachic writings? A rav I know said that he had a complex discussion with an AI system about the halachic principle of “mitoch” and it brought forth learned sources!

Could AI systems become smarter than any living human? What would we humans then do? Just who and what are we?

This brings me to my experience with Rabbi Plony. When I was a graduate student, I volunteered to visit people in a nursing home. I made some fascinating friends, including one man who had fought in the Spanish-American War and once shook hands with Teddy Roosevelt! Well, I don’t know Spanish, but this was “mucho coolo!”

Eventually, I heard some of the staff wondering if I might be ready for the “sixth floor.” (Cue the ominous music!) After a while, they asked me if I would be willing to try it and, not knowing what the “sixth floor” was, I foolishly said “sure.”

Nowadays, such a place would be called a memory unit. It provided intensive care for people who suffered from dementias, such as Alzheimer’s disease, or major disabilities, such as strokes. They introduced me to Rabbi Plony. Looking back, I assume he had suffered a stroke. He sat in a wheelchair and was unable to move. He could not converse. The only thing he could do was to say that his name was “Plony,” which he said over and over again. I was asked to feed him.

I tried to be cheerful: “Oh, look at this nice Jello!” Inside, however, I was in turmoil and becoming depressed. As he kept saying “Plony,” I wondered where the rabbi was. He was a rabbi! He knew (had known?) more Torah than I ever would. Where was Rabbi Plony? It did not take a modern AI system to surpass his ability to interact with the world. Yet he was a human being with dignity and value far beyond anything a mass of electronic circuitry could ever be.

Where are we going, and where was Rabbi Plony? Even in his disabled state, we know that a single second of his life had infinite value. But why? It could not be due to his abilities in the physical world. It could not be due to his intellect. In his commentary on Mishlei, the Vilna Gaon observes that a person has three qualities: Chochma refers to the information he has learned; Bina refers to that which he derives through his understanding. Rabbi Plony did not display either of these. But there is a third part, and the Gaon writes that this one is hidden. It must lie in some place we cannot observe, somewhere not physical.

Rabbi Plony had a neshama. I realized that I had naively identified the neshama with consciousness and personality. This was patently false since Rabbi Plony did not seem to retain any personality, and I had no idea whether or not he was a sentient conscious being in the usual sense. Yet he was present, and he was an infinitely valuable person, a whole world that I could not access. I wondered, could he? What is it like to be a person experiencing (suffering from) dementia? Indeed, what is the neshama?

I certainly cannot answer this, but if we develop AI systems that are even more intelligent than humans, that have personalities and that “think,” maybe only then will we discover what they are missing. Maybe only then will we learn what it truly means to be human.

Please, Ribono Shel Olam, let my grandchildren discover what is missing and come to a better understanding than I have.

 
