Information Hygiene

Our world is big. Big, and complicated, filled with many more things than any one person can know. We rely on each other to find out things beyond our individual capacities and to share them so we can succeed as a species: there’s water over the next hill, hard red berries are poisonous, and the man in the trading village called Honest Sam is not to be trusted.

To survive, we must constantly take in information, just as we must eat to live. But just like eating, consuming information indiscriminately can make us sick. Even when we eat good food, we must clean our teeth and go to the bathroom – and bad food should be avoided. In the same way, we have to digest information to make it useful, we need to discard information that’s no longer relevant, and we need to avoid misinformation so we don’t pick up false beliefs. We need habits of information hygiene.

Whenever you listen to someone, you absorb some of their thought process and make it your own. You can’t help it: that’s the purpose of language, and that’s what understanding someone means. The downside is that your brain is a mess of overlapping modules all working together, and not all of them can distinguish between what’s logically true and false. This means learning about the beliefs of someone you violently disagree with can make you start to believe them, even if you consciously think they’re wrong. One acquaintance of mine started studying a religion with the intent of exposing it. He thought it was a cult, and his opinion about that never changed. But at one point, he found himself starting to believe what he read, even though, then and now, he found their beliefs logically ridiculous.

This doesn’t mean we need to shut out information from people we disagree with – but it does mean we can’t uncritically accept information from people we agree with. You are the easiest person for yourself to fool: we have a cognitive flaw called confirmation bias which makes us more willing to accept information that confirms our prior beliefs than information that contradicts them. Another flaw called cognitive dissonance makes us want to actively resolve conflicts between our beliefs and new information, leading to a rush of relief when they are reconciled; combined with confirmation bias, this means people’s beliefs can actually be strengthened by contradictory information.

So, as an exercise in information hygiene for those involved in one of those charged political conversations that dominate our modern landscape, try this. Take one piece of information that you’ve gotten from a trusted source, and ask yourself: how might this be wrong? Take one piece of information from an untrusted source, and ask yourself: how might this be right? Then take it one step further: research those chinks in your armor, or those sparks of light in your opponent’s darkness, and see if you can find evidence pro or con. Try to keep an open mind: no one’s asking you to actually change your mind, just to see if you can tell whether the situation is actually as black and white as you thought.

-the Centaur

Pictured: the book pile, containing some books I’m reading to answer a skeptical friend’s questions, and other books for my own interest.

Now I Know the Problem


Hoisted from Facebook … what’s the biggest problem with the world today?

First I studied logic, and found out many people don’t know how to construct an argument, and I thought that was the biggest problem.

Then I studied emotion, and found out many people judge arguments to be correct if they make them feel good, and I thought that was the biggest problem.

Then I studied consciousness, and found out many people don’t argue at all, they post-hoc justify preconscious decisions, and then I thought that was the biggest problem.

Then I studied politics, and I realized the biggest problem was my political opponents, because they don’t agree with me!

-the Centaur

Pictured: Me banging on a perfectly good piece of steel until it becomes useless.

I just think they don’t want AI to happen

Hoisted from Facebook: I saw my friend Jim Davies share the following article:
The momentous advance in artificial intelligence demands a new set of ethics … In a dramatic man versus machine encounter, AlphaGo has secured its third, decisive victory against a renowned Go player. With scientists amazed at how fast AI is developing, it’s vital that humans stay in control.

I posted: “The AI researchers I know talk about ethics and implications all the time – that’s why I get scared about every new call for new ethics after every predictable incremental advance.” I mean, Jim and I have talked about this, at length; so have I and my old boss, James Kuffner … heck, one of my best friends, Gordon Shippey, went round and round on this over two decades ago in grad school. Issues like killbots, all the things you could do with the 99% of a killbot that’s not lethal, the displacement of human jobs, the potential for new industry, the ethics of sentient robots, the ethics of transhuman uplift, and whether any of these things are possible … we talk about it a lot.

So if we’ve been building towards this for a while, and talking about ethics the whole time, where’s the need for a “new” ethics, except in the minds of people not paying attention? But my friend David Colby raised the following point: “I’m no scientist, but it seems to me that anyone who doesn’t figure out how to make an ethical A.I before they make an A.I is just asking for trouble.”

Okay, okay, so I admit it: my old professor Ron Arkin’s book on the ethics of autonomous machines in warfare is lower in my stack than the book I’m reading on reinforcement learning … but it’s literally in my stack, and I think about this all the time … and the people I work with think about this all the time … and talk about it all the time … so where is this coming from? I feel like there’s something else beneath the surface. Since David and I are space buffs, my response to him was that I read all these stories about the new dangers of AI as if they said:

With the unexpected and alarming success of the recent commercial space launch, it’s time for a new science of safety for space systems. What we need is a sober look at the risks. After all, on a mission to Mars, a space capsule might lose pressure. Before we move large proportions of the human race to space, we need to, as a society, look at the potential catastrophes that might ensue, and decide whether this is what we want our species to be doing. That’s why, at The Future of Life on Earth Institute, we’ve assembled the best minds who don’t work directly in the field to assess the real dangers and dubious benefits of space travel, because clearly the researchers who work in the area are so caught up with enthusiasm that they’re not seriously considering the serious risks. Seriously. Sober. Can we ban it now? I just watched Gravity and I am really scared after clenching my sphincter for the last ninety minutes.

To make that story more clear if you aren’t a space buff: there are more commercial space endeavors out there than you can shake a stick at, so advances in commercial space travel should not be a surprise – and the risks outlined above, like decompression, are well known and well discussed. Some of us involved in space also talk about these issues all the time. My friend David has actually written a book about space disasters, DEBRIS DREAMS, which you can get on Amazon.

So to make the analogy more clear: there are research teams working on almost every possible AI problem you can think of, so advances in artificial intelligence applications should not be a surprise – and the risks outlined by most of these articles are well known and well discussed. In my personal experience – my literal personal experience – issues like safety in robotic systems, whether to trust machine decisions over human judgment, and the potential for disruption of human jobs or even life are all discussed more frequently, and with more maturity, than I see in all these “sober calls” for “clear-minded” research from people who wouldn’t know a laser safety curtain from an orbital laser platform.

I just get this sneaking suspicion they don’t want AI to happen.

-the Centaur

Why yes, I’m running a deep learning system on a MacBook Air. Why?


Yep, that’s Python consuming almost 300% of my CPU – which, since I saw it hit over 300%, means this machine has at least four processing cores – running the TensorFlow tutorial. For those that don’t know, “deep learning” is a relatively recent style of machine learning which uses improvements in both processing power and learning algorithms to train networks that can have dozens or hundreds of layers – sometimes as many layers as neural networks in the 1980’s and 1990’s had nodes.

For those that don’t know even that, neural networks are graphs of simple nodes that mimic brain structures, and you can train them with data that contains both the question and the answer. With enough internal layers, neural networks can learn almost anything, but they require a lot of training data and a lot of computing power. Well, now we’ve got lots and lots of data, and with more computing power, you’d expect we’d be able to train larger networks – but the first real trick was the discovery of mathematical techniques that keep the learning signal strong deep, deep within the networks.
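To see why that signal-keeping matters, here’s a back-of-the-envelope illustration (my own made-up numbers, not anything from the tutorial): with classic sigmoid activations, each layer multiplies the error signal by the sigmoid’s derivative, which is at most 0.25, so the signal shrinks geometrically with depth; a ReLU-style activation passes the signal through at full strength on its active side.

```python
# Illustrative only: how a gradient signal decays through many layers.
SIGMOID_MAX_GRAD = 0.25   # maximum of sigmoid'(z), reached at z = 0
RELU_ACTIVE_GRAD = 1.0    # derivative of max(0, z) for z > 0

depth = 30                # a plausibly "deep" network

sigmoid_signal = SIGMOID_MAX_GRAD ** depth
relu_signal = RELU_ACTIVE_GRAD ** depth

print(f"after {depth} sigmoid layers: {sigmoid_signal:.1e}")
print(f"after {depth} relu layers:    {relu_signal:.1f}")
```

After thirty sigmoid layers the signal is down below 10^-18 – effectively gone – while the ReLU path keeps it at full strength, which is one reason (among several) that depth became practical.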

The second real trick was wrapping all this amazing code in a clean software architecture that enables anyone to run the software anywhere. TensorFlow is one of the most recent of these frameworks – it’s Google’s attempt to package up the deep learning technology it uses internally so that everyone in the world can use it – and it’s open source, so you can download and install it on most computers and try out the tutorial at home. The CPU-baking example you see running here, however, is not the simpler tutorial, but a test program that runs a full deep neural network. Let’s see how it did:

[Screenshot: the test program’s final output, 2016-02-08]

Well. 99.2% correct, it seems. Not bad for a couple hundred lines of code, half of which is loading the test data – and yeah, that program depends on 200+ files worth of Python that the TensorFlow installation loaded onto my MacBook Air, not to mention all the libraries that the TensorFlow Python installation depends on in turn …

But I still loaded it onto a MacBook Air, and it ran perfectly.
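If the guts of such a network feel mysterious, the core question-and-answer training loop fits in a page of plain numpy. Here’s a toy example – an XOR network with made-up names, sizes, and learning rate, nothing to do with TensorFlow’s actual API:

```python
import numpy as np

# Toy "question and answer" training: learn XOR with one hidden layer.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # questions
y = np.array([[0], [1], [1], [0]], dtype=float)              # answers

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)  # backward pass (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The error shrinks as the network learns – the same load-data, define-network, train, report loop that the real test program runs, just at MNIST scale with deeper layers.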

Amazing what you can do with computers these days.

-the Centaur

I don’t read patents


A friend recently overheard someone talking trash about how big companies were kowtowing to them because of a patent they had – and the friend asked me about it.

Without knowing anything about the patent, it certainly does sound plausible someone would cut deals over an awarded patent – once a patent is awarded it’s hard to get rid of.

But I couldn’t be of much help to them, because I couldn’t read the patent. As a working engineer (and, briefly, former IP lead for an AI company) I’ve had to adopt a strict policy of not reading patents.

The reason is simple – if you as an engineer look at a patent and decide that it doesn’t apply to you, and a court later decides that you’re wrong, the act of looking at the patent can be taken as evidence of willful patent infringement, which can result in treble damages.

In case you’re wondering, this isn’t just me – most IP guys will tell you, if you are an engineer do NOT look at patents prior to doing your work – do what you need to do, apply for patent protection for what you’re doing that you think is new, useful and non-obvious, and let the lawyers sort out the rest – if it ever comes up, which usually it won’t.

Not everyone agrees, and the concern applies less to indie developers and open source projects than it does to people working at big companies with deep pockets likely to get sued.

Unfortunately I work at a big company with deep pockets likely to get sued, so I don’t look at patents. Don’t send them to me, don’t tell me about them, and if, God forbid, you think I or someone I know is violating a patent you hold, I’ll find the number of our legal counsel, and they’ll assign someone to evaluate the claim who specializes in that kind of thing.

Hate the damn things.

-the Centaur

Pictured: a big red stop button for a robot, I think from one at Bosch.