Posts tagged as “Properties of Intellect”
Our world is big. Big, and complicated, filled with many more things than any one person can know. We rely on each other to find out things beyond our individual capacities and to share them so we can succeed as a species: there's water over the next hill, hard red berries are poisonous, and the man in the trading village called Honest Sam is not to be trusted.
To survive, we must constantly take in information, just as we must eat to live. But just as with eating, consuming information indiscriminately can make us sick. Even when we eat good food, we must clean our teeth and go to the bathroom - and bad food should be avoided. In the same way, we have to digest information to make it useful, we need to discard information that's no longer relevant, and we need to avoid misinformation so we don't pick up false beliefs. We need habits of information hygiene.
Whenever you listen to someone, you absorb some of their thought process and make it your own. You can't help it: that's the purpose of language, and that's what understanding someone means. The downside is that your brain is a mess of different overlapping modules all working together, and not all of them can distinguish between what's logically true and false. This means learning about the beliefs of someone you violently disagree with can make you start to believe them, even if you consciously think they're wrong. One acquaintance of mine started studying a religion with the intent of exposing it. He thought it was a cult, and his opinion about that never changed. But at one point, he found himself starting to believe what he read, even though, then and now, he found their beliefs logically ridiculous.
This doesn't mean we need to shut out information from people we disagree with - but it does mean we can't uncritically accept information from people we agree with. You are the easiest person for yourself to fool: we have a cognitive flaw called confirmation bias which makes us more willing to accept information that confirms our prior beliefs than information that contradicts them. Another flaw called cognitive dissonance makes us want to actively resolve conflicts between our beliefs and new information, leading to a rush of relief when they are reconciled; combined with confirmation bias, this means people's beliefs can actually be strengthened by contradictory information.
So, as an exercise in information hygiene for those involved in one of those charged political conversations that dominate our modern landscape, try this. Take one piece of information that you've gotten from a trusted source, and ask yourself: how might this be wrong? Take one piece of information from an untrusted source, and ask yourself: how might this be right? Then take it one step further: research those chinks in your armor, or those sparks of light in your opponent's darkness, and see if you can find evidence pro or con. Try to keep an open mind: no-one's asking you to change your mind, just to see if you can tell whether the situation is as black and white as you thought.
Pictured: the book pile, containing some books I'm reading to answer a skeptical friend's questions, and other books for my own interest.
Hoisted from Facebook … what’s the biggest problem with the world today?
First I studied logic, and found out many people don’t know how to construct an argument, and I thought that was the biggest problem.
Then I studied emotion, and found out many people judge arguments to be correct if they make them feel good, and I thought that was the biggest problem.
Then I studied consciousness, and found out many people don’t argue at all, they post-hoc justify preconscious decisions, and then I thought that was the biggest problem.
Then I studied politics, and I realized the biggest problem was my political opponents, because they don’t agree with me!
Pictured: Me banging on a perfectly good piece of steel until it becomes useless.
http://www.theguardian.com/commentisfree/2016/mar/13/artificial-intelligence-robots-ethics-human-control

"The momentous advance in artificial intelligence demands a new set of ethics ... In a dramatic man versus machine encounter, AlphaGo has secured its third, decisive victory against a renowned Go player. With scientists amazed at how fast AI is developing, it's vital that humans stay in control."

I posted: "The AI researchers I know talk about ethics and implications all the time - that's why I get scared about every new call for new ethics after every predictable incremental advance."

I mean, Jim and I have talked about this, at length; so did my old boss, James Kuffner, and I ... heck, one of my best friends, Gordon Shippey, went round and round on this over two decades ago in grad school. Issues like killbots, all the things you could do with the 99% of a killbot that's not lethal, the displacement of human jobs, the potential for new industry, the ethics of sentient robots, the ethics of transhuman uplift, and whether any of these things are possible ... we talk about it a lot.

So if we've been building towards this for a while, and talking about ethics the whole time, where's the need for a "new" ethics, except in the minds of people not paying attention? But my friend David Colby raised the following point: "I'm no scientist, but it seems to me that anyone who doesn't figure out how to make an ethical A.I before they make an A.I is just asking for trouble."

Okay, okay, so I admit it: my old professor Ron Arkin's book on the ethics of autonomous machines in warfare is lower in my stack than the book I'm reading on reinforcement learning ... but it's literally in my stack, and I think about this all the time ... and the people I work with think about this all the time ... and talk about it all the time ... so where is this coming from? I feel like there's something else beneath the surface.
Since David and I are space buffs, my response to him was that I read all these stories about the new dangers of AI as if they said:
With the unexpected and alarming success of the recent commercial space launch, it's time for a new science of safety for space systems. What we need is a sober look at the risks. After all, on a mission to Mars, a space capsule might lose pressure. Before we move large proportions of the human race to space, we need to, as a society, look at the potential catastrophes that might ensue, and decide whether this is what we want our species to be doing. That's why, at The Future of Life on Earth Institute, we've assembled the best minds who don't work directly in the field to assess the real dangers and dubious benefits of space travel, because clearly the researchers who work in the area are so caught up with enthusiasm that they're not seriously considering the serious risks. Seriously. Sober. Can we ban it now? I just watched Gravity and I am really scared after clenching my sphincter for the last ninety minutes.

To make that story more clear if you aren't a space buff: there are more commercial space endeavors out there than you can shake a stick at, so advances in commercial space travel should not be a surprise - and the risks outlined above, like decompression, are well known and well discussed. Some of us involved in space also talk about these issues all the time. My friend David has actually written a book about space disasters, DEBRIS DREAMS, which you can get on Amazon.

So to make the analogy more clear: there are research teams working on almost every AI problem you can think of, so advances in artificial intelligence applications should not be a surprise - and the risks outlined in most of these articles are well known and discussed.
In my personal experience - my literal personal experience - issues like safety in robotic systems, whether to trust machine decisions over human judgment, and the potential for disruption of human jobs or even life are all discussed more frequently, and with more maturity, than I see in all these "sober calls" for "clear-minded" research from people who wouldn't know a laser safety curtain from an orbital laser platform. I just get this sneaking suspicion they don't want AI to happen.

-the Centaur