
I just think they don’t want AI to happen


Hoisted from Facebook: I saw my friend Jim Davies share the following article:

http://www.theguardian.com/commentisfree/2016/mar/13/artificial-intelligence-robots-ethics-human-control
The momentous advance in artificial intelligence demands a new set of ethics … In a dramatic man versus machine encounter, AlphaGo has secured its third, decisive victory against a renowned Go player. With scientists amazed at how fast AI is developing, it’s vital that humans stay in control.

I posted: “The AI researchers I know talk about ethics and implications all the time – that’s why I get scared about every new call for new ethics after every predictable incremental advance.” I mean, Jim and I have talked about this, at length; so did my old boss, James Kuffner, and I … heck, one of my best friends, Gordon Shippey, went round and round on this with me over two decades ago in grad school. Issues like killbots, all the things you could do with the 99% of a killbot that’s not lethal, the displacement of human jobs, the potential for new industry, the ethics of sentient robots, the ethics of transhuman uplift, and whether any of these things are possible … we talk about it a lot.

So if we’ve been building towards this for a while, and talking about ethics the whole time, where’s the need for a “new” ethics, except in the minds of people not paying attention? But my friend David Colby raised the following point: “I’m no scientist, but it seems to me that anyone who doesn’t figure out how to make an ethical A.I before they make an A.I is just asking for trouble.”

Okay, okay, so I admit it: my old professor Ron Arkin’s book on the ethics of autonomous machines in warfare is lower in my stack than the book I’m reading on reinforcement learning … but it’s literally in my stack, and I think about this all the time … and the people I work with think about this all the time … and talk about it all the time … so where is this coming from? I feel like there’s something else beneath the surface. Since David and I are space buffs, my response to him was that I read all these stories about the new dangers of AI as if they said:

With the unexpected and alarming success of the recent commercial space launch, it’s time for a new science of safety for space systems. What we need is a sober look at the risks. After all, on a mission to Mars, a space capsule might lose pressure. Before we move large proportions of the human race to space, we need to, as a society, look at the potential catastrophes that might ensue, and decide whether this is what we want our species to be doing. That’s why, at The Future of Life on Earth Institute, we’ve assembled the best minds who don’t work directly in the field to assess the real dangers and dubious benefits of space travel, because clearly the researchers who work in the area are so caught up with enthusiasm that they’re not seriously considering the serious risks. Seriously. Sober. Can we ban it now? I just watched Gravity and I am really scared after clenching my sphincter for the last ninety minutes.

To make that story more clear if you aren’t a space buff: there are more commercial space endeavors out there than you can shake a stick at, so advances in commercial space travel should not be a surprise – and the risks outlined above, like decompression, are well known and well discussed. Some of us involved in space also talk about these issues all the time. My friend David has actually written a book about space disasters, DEBRIS DREAMS, which you can get on Amazon.

So to make the analogy more clear: there are research teams working on almost every possible AI problem you can think of, so advances in artificial intelligence applications should not be a surprise – and the risks outlined by most of these articles are well known and discussed. In my personal experience – my literal personal experience – issues like safety in robotic systems, whether to trust machine decisions over human judgment, and the potential for disruption of human jobs or even life are all discussed more frequently, and with more maturity, than I see in all these “sober calls” for “clear-minded” research from people who wouldn’t know a laser safety curtain from an orbital laser platform.

I just get this sneaking suspicion they don’t want AI to happen.

-the Centaur

  1. I don’t think you should read too much into the headline, which is sometimes changed by editors without the author even knowing it. I don’t think anything in the main text of the article calls for “new” ethics, in particular.

    I know the author, and I think he has some legitimate concerns, though this article doesn’t really address them very well, I admit.

    He specializes in the ethics of self-driving cars. So, for example, if a car has to choose between the death of a little kid who runs into the street and the death of the driver, who should die? He’s not as interested in the answer to this question as much as *who should be the one deciding the answer?* When you poll people, they tend to think the drivers or lawmakers should do the deciding, and few people think the engineers building the car should be making the high-level ethical decisions. But that’s exactly what’s happening. Not only are the engineers deciding the ethics of self-driving cars, but they are not telling us (nor will their lawyers allow them to tell us) what the policy actually is. So we have an interesting problem: customers want the ethical decisions to be a more public, open discussion, perhaps led by ethics experts, while in reality the programmers are doing the deciding behind closed doors.

    Is it satisfying for the rest of us to say merely that we’re confident that the engineers are thinking and talking about it all the time, deep in Google’s labs where nobody can hear them?

  2. Dae Dae

    I’m sure you must have seen a particular xkcd (http://xkcd.com/1215/) on this general topic, but it took me more than thirty seconds to dig up, so I thought it might be worth appending for posterity….
