On Thursdays at noon YHouse holds a lunch meeting at the Institute for Advanced Study in Princeton. The format is a 15-minute informal talk by a speaker, followed by a longer open-ended discussion among the participants, triggered by, but not necessarily confined to, the topic of the talk. In order to share these conversations, I am posting a synopsis of the weekly meetings.

Synopsis of YHouse Luncheon, 3/8/18: Michael Solomon and Olaf Witkowski

Title: Ethics and A.I.

Abstract: “We are sharing this article in hopes of stimulating discussion at our Thursday, March 8th meeting. The topic will be Ethics and A.I. Is it possible to program ethical values into the machines we use and have come to rely on? Can we even agree on fundamentals like “Do unto others...” or “Thou shalt not kill”? Is Diversity an obstacle to finding shared values? Can urban surveillance cameras with facial recognition be used to prevent crime and terrorism without being used by authoritarian governments to weed out dissent? Can we have Transparency in decision making when no one can determine how neural networks reach conclusions? Ethical choices may not be between good and evil, but more often involve conflicting goods. Can we make better policies and choices with A.I. than we can without it? Please share this with others who may be interested in attending.”
 https://www.deseretnews.com/article/900010378/personal-robots-are-coming-into-your-home-will-they-share-your-family-values.html

Present: Piet Hut, Ed Turner, Olaf Witkowski, Daniel Polani, Susan Schneider, Michael Solomon.

     Olaf began the meeting by saying we had exchanged this and other articles not because we consider them definitive in any way, but in hopes of stimulating discussion; indeed, he noted how little substance the articles contain. Perhaps we can help to see how best to approach the question of incorporating ethics into A.I. Olaf felt the answer does not lie in any top-down approach. Daniel’s paper suggested using the empowerment measure, and a short paper Olaf wrote used computer modeling.
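     (An aside for readers, not something spelled out at the meeting, and any imprecision in this gloss is mine: as I understand it, the empowerment of an agent, in Daniel's information-theoretic formulation, is the channel capacity from the agent's possible action sequences to its later sensor state, written roughly as

          \mathfrak{E} = \max_{p(a_t^n)} I(A_t^n ; S_{t+n}),

     that is, a measure of how much the agent's choice of actions can influence what it subsequently perceives.)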
     Michael came from a perspective of bioethics, not of computer science or information theory. He began with Nick Bostrom’s story of the sparrows and the owl in the introduction to the book Superintelligence. The sparrows decided to find an owl to care for their children and elders. One elder sparrow suggested first finding out how to train the owl, to avoid having the owl become a threat. The other sparrows felt that could wait until after they found an owl. Bostrom’s book is dedicated to the elder sparrow. The other fable Bostrom refers to is that of King Midas, whose wish that everything he touched turn to gold resulted in his children and food becoming gold. The analogy for us is that we must have very specific ideas at the start of just what we want from these machines and how to control them.
Piet noted the first idea is like the Sorcerer’s Apprentice, but both are centuries-old stories. He made two points: 1) the concerns are not specific to A.I., and 2) the threat is not from A.I. itself but from evil people using A.I. So far, Piet has not seen any reason to devote resources to these concerns.
Dan offered that the danger is not evil people, but mediocre people.
Piet said Hannah Arendt called this the banality of evil.
Dan continued that mediocre people take a fad and turn it into an ideology. He fears the incompetent more than the evil.
Piet: But A.I. misusing itself, as opposed to people misusing A.I., is not high on the list of priorities.
Dan has written that we will become synergized with A.I. In order to control it, we must make ourselves indispensable to the A.I.’s “metabolic” pathways, by ensuring the machines depend on humans to maintain and provide for them.
Piet: Make ourselves like mitochondria.
Dan referred to a story, “Lymphater’s Formula,” by Stanisław Lem (in Polish). A fellow creates a superintelligence that knows everything at the speed of light. The inventor wants to destroy the machine, and the machine says: I know you want to destroy me, but you must understand that I will be back. Progress is like a river that we cannot stop, but we can divert it to different pathways.
Piet returned to two points. 1) Is there a danger from A.I.? 2) If there is a danger, can we try to stop it or canalize it?
Dan: Knowing how to prepare for the danger is secondary to knowing what the possible unforeseen dangers will be. Can we turn off the internet?
Olaf agrees we cannot so easily stop the technology. But one point not emphasized enough is that life (or a computer simulation of life) must adapt to its environment, and seldom comes to fit the entire environment. Once a life form becomes autonomous, it does not easily take over the whole resource space of its medium; more often it occupies only smaller niches. Max Tegmark fears that once you connect an A.I. to the internet it will take over. Olaf, in contrast, thinks the A.I. will rather take over only a small part of the internet, namely its own niche. The insight from biology is that taking over is typically much harder than you expect.
Piet: Also, it is not instantaneous but gradual. 
Ed: The famous Fermi Paradox (if there are billions of stars with planets and eons of time, why have we not seen evidence for extraterrestrial life?) applies here. Slime molds don’t take over the earth; they grow exponentially for a while, but then there are limits. The real question is not whether people will misuse machines, intentionally or unintentionally, nor whether machines will do evil on their own; in the real world it seems rather pointless to worry. There are too many entities in the world. Even if the U.S.A., or Google, or others agree, we cannot guarantee China or Russia or others will comply. Even an individual in his basement could build an A.I. This is no different from other technologies. Cars are used all over with different impacts. So in a practical sense, limiting A.I. use is probably not possible. Philosophically, the question may help us learn about human ethics and morality itself. In astrobiology, we teach that as soon as we find extraterrestrial life, the intellectual field will change completely.
Olaf: For those questions where we have limited data, we are known to be bad at predicting the future. Arthur C. Clarke wrote that humans tend to underestimate the future in the long term and overestimate it in the short term.
Piet: That has also been attributed to Bill Gates.
Susan: She found it funny that Richard Dawkins (a strong opponent of religious dogma) has called A.I. “Intelligent Design.” As for the Control Problem, she likes the thought experiment in Max Tegmark’s book, even though she usually does not get along with him. We don’t know what the nature of superintelligent A.I. will be. Therefore, she advocates the Precautionary Principle: if there is even a very small chance of severe adverse global repercussions from a new technology, then we should hold off on developing that technology until we have a better idea of how to control it. This could have applied to the Manhattan Project.
Dan: Top-down control will not work. The classic example is China deciding in 1421 not to explore the world. But everyone else did, and China, which had been far ahead in arms technology, was left behind. Japan did the same for 300 years, but eventually someone came knocking.
Piet: In WW II the Manhattan project was fueled by fears that Hitler would get the bomb first.
Michael: I heard a talk at the University last week by Brad Smith, president and chief legal officer at Microsoft, on Ethics and A.I. His talk is summarized in his book, “The Future Computed: Artificial Intelligence and its Impact on Society,” which can be downloaded for free. He began by saying let’s look back 20 years, then at today, then at 20 years from now. Twenty years ago, we wrote on paper and found someone to type it, and if you were interested in technology you learned to program your VCR so that 12:00 would stop blinking. Now, the first thing we see is our cell phone. In 20 years, you’ll have a personal A.I. assistant who will not only tell you your schedule but will have made the appointments and transportation arrangements for you and coordinated with other personal assistants. But we cannot really predict the future.
Piet: The technical term is a Lyapunov exponent, a measure of exponential divergence. Right now we cannot go beyond about 10 years in our predictions.
Olaf: It is a measure in chaos theory of exponential divergence, which makes predictions from initial conditions weaker as you progress over time.
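(For reference, and not part of the discussion itself: in the standard formulation, two trajectories of a chaotic system starting a small distance \delta_0 apart separate roughly as

     |\delta(t)| \approx |\delta_0| \, e^{\lambda t},

where \lambda is the largest Lyapunov exponent. When \lambda is positive, predictions stay within a tolerance \Delta only up to a horizon t \approx (1/\lambda)\ln(\Delta/|\delta_0|), which grows only logarithmically with the precision of the initial data; that is why even much better measurements buy only modestly longer forecasts.)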
Michael: Continuing Brad Smith’s talk, what will be replaced or not replaced by A.I.? What is technology good at? 1) Vision, and helping the blind. Microsoft Vision is an app that uses the camera on your phone or on your eyeglasses earpiece and can read text, recognize faces, describe your audience’s expressions, and speak to you. 2) Speech and language. Translators have been incorporated into Skype; there is no need for an interpreter. 3) Knowledge. We already use search engines to find almost anything we want to know immediately. In the future, A.I. will not replace jobs that require empathy. Radiologists will be replaced, but nurses will not. Physical therapists, social workers, and clergy will not be replaced.
Olaf: I think machines can be very good at simulating empathy.
Michael: Look at cars as an example of how technology radically changed society. Smith said the automobile was a major cause of the Great Depression. Benz invented the automobile in the 1890s, yet in 1905 in New York City there were still only trolley cars and horses; by 1915 there were only cars. Some 25% of agriculture in the USA had been devoted to horses and hay. Farmers lost their farms, and banks foreclosed. This was a major contributor to the Depression.
Piet: Is that true? I’ve never heard of this.
Dan: Why was there a 50-year delay?
Michael: The automobile was certainly not the only cause of the Depression, but the changes certainly caused economic displacement, and yes, his point was that there is a delay and we might try to anticipate effects. We need to retrain truck drivers and others, and make plans for education now. For a child presently in kindergarten, 65% of the jobs that will be available for that child don’t exist now. Smith continued: What must we do ethically? He listed four values that A.I. must respect: Fairness, Reliability and Safety, Security and Privacy, and Inclusiveness, resting on two underlying values, Transparency and Accountability. Fairness means all applicants must be treated equally; biases already inherent in the data used to train the A.I. must be identified and corrected. Reliability and Safety mean that we must monitor machines to ensure they make correct determinations under varying circumstances and are not being manipulated by hackers or other influences. Security and Privacy require that the big data A.I. relies on for training be protected from inappropriate uses; people will not share accurate data if they don’t trust the system. Inclusiveness means that A.I. must be made available to everyone and cannot be allowed to increase the existing divide between those who have access and those who do not. Olaf asked to clarify the distinction between Fairness and Inclusiveness. Fairness requires that applicants for loans be considered based on their circumstances, not their race or zip code, and that patients with the same symptoms and findings be given the same diagnoses. Inclusiveness requires that there be equal access to the benefits of the technology across social, geographic, and ethnic divides.
Susan: His prediction that everyone will have access requires broad social changes. We will need a universal basic income.
Michael: He agrees we will need a new social safety net. The two underlying values, transparency and accountability, are necessary but present problems of their own. Determining how a neural network reaches a conclusion is presently not possible, yet we cannot delegate decisions to a process that is likely biased in ways we cannot even identify. We cannot divest ourselves of responsibility for determinations that are life-changing for many; we must find ways to monitor how those determinations are reached. Furthermore, the companies and states that implement the A.I.s must be responsible for the results and must maintain ongoing monitoring of the outcomes. These are real challenges that must be addressed. I asked Mr. Smith whether Diversity could be seen as an obstacle to reaching consensus on many of these concerns. He answered yes, but said that improving and expanding the data sets, and having diversity among programmers, should help. I’m not convinced that is a satisfactory response, particularly as the A.I.s will be reprogramming themselves. In ethics, we seldom consider Good and Evil, but more often conflicting Goods.
Smith went on to consider many of the things we have discussed. The machine threat is no different from other threats: when we want people to adhere to certain rules of conduct, we pass laws and revise existing legal systems, and that will be necessary here. He ended by saying we will need a new social contract to accommodate A.I.
Piet: If 100 years ago some god had come and said, I’ll give you a way to travel faster than anyone ever has, but you must pay me 100 souls a year, we would have rejected that immediately. But now well over that number die in automobiles.
Michael: We could have said OK, cars will not go 150 miles per hour, but only 60 mph.
Dan: It is like the frog in hot water: if you drop a frog in boiling water, it will jump out, but if you put the frog in cold water and then gradually heat it, the frog will boil to death.
Piet: That frog story is an urban legend and is not true, but the idea is a good one.
Dan: Information provided all at once is obvious, but information provided gradually will not be noticed until it is too late. Look at Erdogan of Turkey: he gradually eliminated all threats to his power. The indications were there but were not noticed, and now he is a despot. Secret services everywhere must by now have databases that would be considered illegal.
Michael: The same technology of cameras with facial recognition that is used in London to prevent crime and terrorism is used in China to weed out subversives or those who disagree with the authoritarian government.
Ed: Those are criminals in China.
Olaf: To shut down scammers from China, I remember having read that one need merely post a reference to June 1989. The reminder of Tiananmen Square is enough to get their connection blocked by the filters put in place by the authorities.
The employment of new technology is not just an A.I. problem; it raises an important point about stopping progress. It is like Pascal’s wager for the existence of God. Trying to stop progress is stupid: just because you can conceive of the possibility of a world of infinite pain, with only a tiny likelihood, you would not shut down the technology. But many are trying to stop it anyway. It is people’s fears that drive them.
Michael: Maybe not fear but people’s values. Look at embryonic stem cell research. That was stopped not out of fear, but because those in power believed killing embryos was wrong.
Olaf: The fear that future A.I.s will punish humans forever is still raised at serious A.I. conferences, and seems to have much more traction than it deserves. To me, the danger of malevolent people misusing technology is far more imminent than that of A.I. becoming autonomous, which will not happen until much further down the line.
Susan: There are some good reasons to fear cyber intelligence. One example is the paper clip factory (the thought experiment in which a powerful and autonomous A.I., programmed to make as many paper clips as possible, diverts all the resources on the planet to making paper clips at the expense of all life). Machines can be like savants and can be dangerous. IBM’s supercomputers are built neuromorphically for a limited purpose, but could they then develop general capability? That would be a threat.

We ended our discussion here.

Respectfully, 
Michael J. Solomon, MD
 
