Press "Enter" to skip to content

Posts tagged as “Intelligence”

Talent, Incompetence and Other Excuses


lenora at rest in the library with the excelsior

The company I work at is a pretty great place, and it's attracted some pretty great people - so if your name isn't yet on the list of "the Greats" it can sometimes be a little intimidating. There's a running joke that half the people at the firm have Impostor Syndrome, a pernicious condition in which people become convinced they are frauds, despite objective evidence of their competence.

I definitely get that from time to time - not just at the Search Engine That Starts with a G, but previously in my career. In fact, just about as far back as people have been paying me money to do what I do, I've had a tape loop of negative thoughts running through my head, saying, "incompetent … you're incompetent" over and over again.

Until today, when, walking down the hall, I thought of Impostor Syndrome - and of what my many very smart friends would say if I told them I had it, and of the response they would immediately give: not "you're wrong," which of course they might say, but "well, what do you think you need to do to do a good job?"

Then, in a brain flash, I realized incompetence is just another excuse people use to justify their own inaction.

Now, I admit there are differences in competence in individuals: some people are better at doing things than others, either because of experience, aptitude, or innate talent (more on that bugbear later). But unless the job is actually overwhelming - unless simply performing the task at all taxes normal human competence, and only the best of the best can succeed - being "incompetent" is simply an excuse not to examine the job, to identify the things that need doing, and to make a plan to do them.

Most people, in my experience, just want to do the things that they want to do - and they want to do their jobs the way they want to do them. If your job is well tuned towards your aptitudes, this is great: you can design a nice, comfortable life.

But often the job you want to do requires more of you than doing things the way you want to do them. I'm a night owl, I enjoy working late, and I often tool in just before my first midmorning meeting - but tomorrow, for a launch review of a product, I'll be showing up at work a couple hours early to make sure that everything is working before the meeting begins. No late night coffee for you.

Doing what's necessary to show up early seems trivial, and obvious, to people who aren't night owls - but it isn't trivial, or obvious, to most people how often they fail to do what's necessary in other areas of their own lives. The true successes I know, in contrast, do whatever it takes: switching careers, changing their dress, learning new skills - even picking out the right shirts, if they have to meet with people, or spending hours shaving thirty seconds off their compile times, if they have to code software.

Forget individual differences. If you think you're "incompetent" at something, ask yourself: what would a "competent" person do? What does it really take to do that job? If it involves a mental or physical skill you don't have, like rapid mental arithmetic or a ninety-eight mile-per-hour fastball, then cut yourself some slack; but otherwise, figure out what would lead to success in the job, and make sure you do that.

You don't have to do those things, of course: you don't have to put on a business suit and do presentations. But that doesn't mean you're incompetent at giving presentations: it means you weren't willing to go to a business wear store to find the right suit or dress, and it means you weren't willing to go to Toastmasters until you learned to crack your fear of public speaking. With enough effort, you can do those things - if you want to. There's no shame in not wanting to. Just be honest about why.

That goes back to that other bugbear, talent.

When people find out I'm a writer, they often say "oh, it must take so much talent to do that." When I protest that it's really a learned skill, they usually say something a little more honest, "no, no, you're wrong: I don't have the talent to do that." What they really mean, though they may not know it, is that they don't want to put in the ten thousand hours worth of practice to become an expert.

Talent does affect performance. And from a very early age, I had a talent with words: I was reading soon after I started to walk. But, I assure you, if you read the stuff I wrote at an early age, you'd think I didn't have the talent to be a writer. What I did have was a desire to write, which translated into a heck of a lot of practice, which developed, slowly and painfully, into skill.

Talent does affect performance. Those of us who work at something for decades are always envious of those people who seem to take to something in a flash. I've seen it happen in writing, in computer programming, and in music: an experienced toiler is passed by a newbie with a shitload of talent. But even the talented can't go straight from raw talent to expert performance: it still takes hundreds or thousands of hours of practice to turn that talent into a marketable skill.

When people say they don't have talent, they really mean they don't have the desire to do the work. And that's OK. When people say they aren't competent to do a job, they really mean they don't want to think through what it takes to get the job done, or having done so, don't want to do those things. And that's OK too.

Not everyone has to sit in a coffeehouse for thousands of hours working on stories only to find that their best doesn't yet cut it. Not everyone needs to strum on that guitar for thousands of hours working on riffs only to find that their performance falls flat on the stage. Not everyone needs to put on that suit and polish that smile for thousands of hours working on sales only to find that they've lost yet another contract. No-one is making you do those things if you don't want to.

But if you are willing to put those hours in, you have a shot at the best selling story, the tight performance, the killer sale.

And a shot at it is all you get.

-the Centaur

Pictured: Lenora, my cat, in front of a stack of writing notebooks and writing materials, and a model of the Excelsior that I painted by hand. It's actually a pretty shitty paint job. Not because I don't have talent - but because I didn't want to put hundreds of hours in learning how to paint straight lines on a model. I had writing to do.

My Labors Are Not Ended


lenora at rest in the library

But I am going to take a rest for a bit.

Above you see a shot of my cat Lenora resting in front of the "To Read Science Fiction" section of my Library, the enormous book collection I've been accumulating over the last quarter century. I have books older than that, of course, but they're stored in my mother's house in my hometown. It's only over the last 25 years or so that I've been accumulating my own personal library.

But why am I, if not resting, at least thinking about it? I finished organizing the books in my Library.

lenora at rest in the library 2

I have an enormous amount of papers, bills, bric-a-brac and other memorabilia still to organize, file, trash or donate, but the Library itself is organized, at last. It's even possible to use it.

How organized? Well...

Religion, politics, economics, the environment, women's studies, Ayn Rand, read books, Lovecraft, centaur books, read urban fantasy, read science fiction, Atlanta, read comics, to-read comics, to-read science fiction magazines, comic reference books, drawing reference books, steampunk, urban fantasy, miscellaneous writing projects, Dakota Frost, books to donate, science fiction to-reads: Asimov, Clarke, Banks, Cherryh, miscellaneous, other fiction to-reads, non-fiction to-reads, general art books, genre art books, BDSM and fetish magazines and art books, fetish and sexuality theory and culture, military, war, law, space travel, astronomy, popular science, physics of time travel, Einstein, quantum mechanics, Feynman, more physics, mathematics, philosophy, martial arts, health, nutrition, home care, ancient computer manuals, more recent computer manuals, popular computer books, the practice of computer programming, programming language theory, ancient computer languages, Web languages, Perl, Java, C and C++, Lisp, APL, the Art of Computer Programming, popular cognitive science, Schankian cognitive science, animal cognition, animal biology, consciousness, dreaming, sleep, emotion, personality, cognitive science theory, brain theory, brain philosophy, evolution, human evolution, cognitive evolution, brain cognition, memory, "Readings in …" various AI and cogsci disciplines, oversized AI and science books, conference proceedings, technical reports, game AI, game development, robotics, imagery, vision, information retrieval, natural language processing, linguistics, popular AI, theory of AI, programming AI, AI textbooks, AI notes from recent projects, notes from college from undergraduate through my thesis, more Dakota Frost, GURPS, other roleplaying games, Magic the Gathering, Dungeons and Dragons, more Dakota Frost, recent projects, literary theory of Asimov and Clarke, literary theory of science fiction, science fiction shows and TV, writing science fiction, mythology, travel, writing science, writing reference, writers on writing, writing markets, poetry, improv, voice acting, film, writing film, history of literature, representative examples, oversized reference, history, anthropology, dictionaries, thesauri, topical dictionaries, language dictionaries, language learning, Japanese, culture of Japan, recent project papers, comic archives, older project papers, tubs containing things to file … and the single volume version of the Oxford English Dictionary, complete with magnifying glass.


I deliberately left out the details of many categories and outright omitted a few others not stored in the library proper, like my cookbooks, my display shelves of Arkham House editions, Harry Potter and other hardbacks, my "favorite" nonfiction books, some spot reading materials, a stash of transhumanist science fiction, all the technical books I keep on the shelf next to me at work … and, of course, my wife's and my enormous collection of audiobooks.

What's really interesting to me about all that is that there are far more categories out there in the world that aren't in my Library than there are in it. Try it sometime - go into a bookstore or library, or peruse the list of categories in the Library of Congress or Dewey Decimal System Classifications. There are far more things to think about than even I, a borderline hoarder with a generous income and enormous knowledge of bookstores, have been able to accumulate in a quarter century.

Makes you think, doesn't it?

-the Centaur

Context-Directed Spreading Activation

netsphere.png

Let me be completely up front about my motivation for writing this post: recently, I came across a paper which was similar to the work in my PhD thesis, but applied to a different area. The paper didn't cite my work – in fact, its survey of related work in the area seemed to indicate that no prior work along the lines of mine existed – and when I alerted the authors to the omission, they informed me they'd cited all relevant work, and claimed "my obscure dissertation probably wasn't relevant." Clearly, I haven't done a good enough job articulating or promoting my work, so I thought I should take a moment to explain what I did for my doctoral dissertation.

My research improved computer memory by modeling it after human memory. People remember different things in different contexts based on how different pieces of information are connected to one another. Even a word as simple as 'ford' can call different things to mind depending on whether you've bought a popular brand of car, watched the credits of an Indiana Jones movie, or tried to cross the shallow part of a river. Based on that human phenomenon, I built a memory retrieval engine that used context to remember relevant things more quickly.

My approach was based on a technique I called context-directed spreading activation, which I argued was an advance over so-called "traditional" spreading activation. Spreading activation is a technique for finding information in a kind of computer memory called a semantic network, which models relationships in the human mind. A semantic network represents knowledge as a graph, with concepts as nodes and relationships between concepts as links, and traditional spreading activation finds information in that network by starting with a set of "query" nodes and propagating "activation" out on the links, like current in an electric circuit. The current that hits each node in the network determines how highly ranked the node is for a query. (If you understand circuits and spreading activation, and this description caused you to catch on fire, my apologies. I'll be more precise in future blogposts. Roll with it.)

The problem is, as semantic networks grow large, there's a heck of a lot of activation to propagate. My approach, context-directed spreading activation (CDSA), cuts this cost dramatically by making activation propagate over fewer types of links. In CDSA, each link has a type, each type has a node, and activation propagates only over links whose type nodes are active (to a very rough first approximation, although in my evaluations I tested about every variant of this under the sun). Propagating over active links isn't just cheaper than spreading activation over every link; it's smarter: the same "query" nodes can activate different parts of the network, depending on which "context" nodes are active. So, if you design your network right, Harrison Ford is never going to occur to you if you've been thinking about cars.
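To make that mechanism concrete, here's a minimal sketch of the idea in Python. The network, the node and link-type names, and the numbers below are toy illustrations I made up for this post, not the actual algorithm or code from my dissertation:

from collections import defaultdict

class SemanticNetwork:
    """A toy semantic network: concepts as nodes, typed and weighted links between them."""
    def __init__(self):
        # links[source] is a list of (link_type, target, weight) triples
        self.links = defaultdict(list)

    def add_link(self, source, link_type, target, weight=1.0):
        self.links[source].append((link_type, target, weight))

def spread(network, query, steps=2, decay=0.5, context=None):
    """Propagate activation outward from the query nodes.

    With context=None every link propagates (traditional spreading activation);
    with a set of active link types, only links of those types propagate
    (a very rough approximation of the context-directed variant).
    """
    activation = dict(query)
    for _ in range(steps):
        incoming = defaultdict(float)
        for node, energy in activation.items():
            for link_type, target, weight in network.links[node]:
                if context is not None and link_type not in context:
                    continue  # this link's type node is inactive, so skip it
                incoming[target] += energy * weight * decay
        for node, energy in incoming.items():
            activation[node] = activation.get(node, 0.0) + energy
    # Rank nodes by how much activation reached them
    return sorted(activation.items(), key=lambda item: -item[1])

# Toy example: "ford" calls different things to mind in different contexts.
net = SemanticNetwork()
net.add_link("ford", "brand-of", "car")
net.add_link("ford", "surname-of", "harrison ford")
net.add_link("ford", "crossing-of", "river")

print(spread(net, {"ford": 1.0}))                          # all three senses light up
print(spread(net, {"ford": 1.0}, context={"surname-of"}))  # only the actor does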
I was a typical graduate student, and I thought my approach was so good, it was good for everything—so I built an entire cognitive architecture around the idea. (Cognitive architectures are general reasoning systems, normally built by teams of researchers, and building even a small one is part of the reason my PhD thesis took ten years, but I digress.)

My cognitive architecture was called context-sensitive asynchronous memory (CSAM), and it automatically collected context while the system was thinking, fed it into the context-directed spreading activation system, and incorporated dynamically remembered information into its ongoing thought processes using patch programs called integration mechanisms. CSAM wasn't just an idea: I built it out into a computer program called Nicole, and even published a workshop paper on it in 1997 called "Can Your Architecture Do This? A Proposal for Impasse-Driven Asynchronous Memory Retrieval and Integration."

But to get a PhD in artificial intelligence, you need more than a clever idea you've written up in a paper or implemented in a computer program. You need to use the program you've written to answer a scientific question. You need to show that your system works in the domains you claim it works in, that it can solve the problems that you claim it can solve, and that it's better than other approaches, if other approaches exist.

So I tested Nicole on computer planning systems and showed that integration mechanisms worked. Then a colleague and I tested Nicole on a natural language understanding program and showed that memory retrieval worked. But the most important part was showing that CDSA, the heart of the theory, didn't just work, but was better than the alternatives. I did a detailed analysis of the theory of CDSA and showed it was better than traditional spreading activation in several ways—but that rightly wasn't enough for my committee. They wanted an example. There were alternatives to my approach, and they wanted to see that my approach was better than the alternatives for real problems.

So I turned Nicole into an information retrieval system called IRIA—the Information Retrieval Intelligent Assistant. By this time, the dot-com boom was in full swing, and my thesis advisor invited me and another graduate student to join him in starting a company called Enkia. We tried many different concepts to start with, but the further we went, the more IRIA seemed to have legs. We showed she could recommend useful information to people while they browsed the Internet. We showed several people could use her at the same time and get useful feedback. And critically, we showed that by using context-directed spreading activation, IRIA could retrieve better information faster than traditional spreading activation approaches.

The first publication on IRIA came out in 2000, shortly before I finished my PhD, and at the company things were going gangbusters. We found customers for the idea, and my more experienced colleagues and I turned the IRIA program from a typical graduate student mess into a more disciplined and efficient system called the Enkion, a process we documented in a paper in early 2001. We even launched a search site called Search Orbit—and then the whole dot-com disaster happened, and the company essentially imploded.

Actually, that's not fair: the company continued for many years after I left—but I essentially imploded, and if you want to know more about that, read "Approaching 33, as Seen from 44." Regardless, the upshot is that I didn't follow up on my thesis work after I finished my PhD. That happens to a lot of PhD students, but for me in particular I felt it would have been betraying the trust of my colleagues to go publish a sequence of papers on the innards of a program they were trying to use to run their business. Eventually, they moved on to new software, but by that time, so had I.
Fast forward to 2012: while researching an unrelated problem for The Search Engine That Starts With A G, I came across the 2006 paper "Recommending in context: A spreading activation model that is independent of the type of recommender system and its contents" by Alexander Kovács and Haruki Ueno. At Enkia, we'd thought of doing recommender systems on top of the Enkion, and had even started to build a prototype for Emory University, but the idea never took off and we never generated any publications, so at first, I was pleased to see someone doing spreading activation work in recommender systems.

Then I was unnerved to see that this approach also involved spreading activation, over a typed network, with nodes representing the types of links, and activation in the type nodes changing the way activation propagated over the links. Then I was unsettled to see that my work, which is based on a similar idea and predates their publication by almost a decade, was not cited in the paper. Then I was actually disturbed when I read: "The details of spreading activation networks in the literature differ considerably. However, they're all equal with respect to how they handle context … context nodes do not modulate links at all…"

If you were to take that at face value, the work that I did over ten years of my life—work which produced four papers, a PhD thesis, and at one point helped employ thirty people—did not exist. Now, I was also surprised by some spooky similarities between their system and mine—their system is built on a context-directed spreading activation model, mine is a context-directed spreading activation model, theirs is called CASAN, mine is embedded in a system called CSAM—but as far as I can see there's NO evidence that their work was derivative of mine. As Chris Atkinson said to a friend of mine (paraphrased): "The great beam of intelligence is more like a shotgun: good ideas land on lots of people all over the world—not just on you."

In fact, I'd argue that their work is a real advance for the field. Their model is similar, not identical, and their mathematical formalism uses more contemporary matrix algebra, making the relationship to related approaches like PageRank more clear (see Google's PageRank and Beyond). Plus, they apparently got their approach to work on recommender systems, which we did not; IRIA did more straight-up recommendation of information in traditional information retrieval, which is a similar but not identical problem. So Kovács and Ueno's "Recommending in Context" paper is a great paper, and you should read it if you're into this kind of stuff.

But, to set the record straight, and maybe to be a little bit petty: there are a number of spreading activation systems that do use context to modulate links in the network … most notably mine.

-the Centaur

Pictured: a tiny chunk of the WordNet online dictionary, which I'm using as a proxy for a semantic network. Data processing by me in Python, graph representation by the GraphViz suite's dot program, and postprocessing by me in Adobe Photoshop.

Humans are Good Enough to Live


goodness.png
I'm a big fan of Ayn Rand and her philosophy of Objectivism. Even though there are many elements of her philosophy which are naive, or oversimplified, or just plain ignorant, the foundation of her thought is good: we live in exactly one shared world which has a definitive nature, and the good is defined by things which promote the life of human individuals.

It's hard to overestimate the importance of this move, this Randian answer to the age old question of how to get from "is" to "ought" - how to go from what we know about the world to be true to deciding what we should do. In Rand's world, ethical judgments are judgments made by humans about human actions - so the ethical good must be things that promote human life.

This may seem like a trivial philosophical point, but there are many theoretically possible definitions of ethics, from the logically absurd "all actions taken on Tuesday are good" to the logically indefensible "things are good because some authority said so." Rand's formulation of ethics echoes Jesus's claim that goodness is not found in the foods you eat, but in the actions you do.

But sometimes it seems like the world's a very depressing place. Jesus taught that everyone is capable of evil. Rand herself thought that nothing is given to humans automatically: that they must choose their values, and that the average human, who never thinks about values, is pretty much a mess of contradictory assumptions which leaves them doing good only through luck.

But, I realized, Rand's wrong about that - because her assumption that nothing is given to humans automatically is itself wrong. She was a philosopher, not a scientist, and she wasn't aware of the great strides that have been made in our understanding of how we think - in part because some of those strides were made in technical fields near the very end of her life.

Rand rails against philosophers like Kant, who proposes, among many other things, that humans perceive reality unavoidably distorted by filters built into the human conceptual and perceptual apparatus. Rand admitted that human perception and cognition have a nature, but she believed humans could nonetheless perceive reality objectively. Well, in a sense, they're both wrong.

Modern studies of bias in machine learning show that it's impossible - mathematically impossible - to learn any abstract concept without some kind of bias. In brief, if you want to predict something you've never seen before, you have to take some stance towards the data you've seen already - a bias - but there is no logical way to pick a correct bias. Any one you pick may be wrong.
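A toy illustration of what that means (my own example, not anything from Rand or from a particular paper): give two learners the same three data points but different built-in biases, and they will agree about everything they've seen and disagree about everything they haven't.

import numpy as np

# The same observed data, handed to two learners with different inductive biases.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 4.0, 9.0])

linear_fit = np.polyfit(x, y, 1)       # bias: "the world is linear"
quadratic_fit = np.polyfit(x, y, 2)    # bias: "the world is quadratic"

# Both fits account for the data in hand, but at the unseen point x = 4
# they diverge -- and nothing in the data alone says which bias is "correct."
print(np.polyval(linear_fit, 4.0))     # roughly 12.7
print(np.polyval(quadratic_fit, 4.0))  # 16.0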

So, like Kant suggested, our human conceptual processes impose unavoidable biases on the kind of concepts we learn, and unlike Rand wanted, those biases may prove distorting. However, we are capable of virtual levels of processing, which means that even if our base reasoning is flawed, we can build a more formal one, like mathematics, that avoids those problems.

But, I realized, there's an even stronger reason to believe that things aren't as bad as Kant or Rand feared, a reason founded in Rand's ideas of ethics. Even human communities that lack a formalized philosophy are nonetheless capable of building and maintaining systems that last for generations - which means the human default bias leads to concepts that are Randian goods.

In a way, this isn't surprising. From an evolutionary perspective, if any creature inherited a set of bad biases, it would learn bad concepts, and be unable to reproduce. From a cognitive science perspective, the human mind is constantly attempting to understand the world and to cache the results as automatic responses - what Rand would call building a philosophy.

So, if we are descendants of creatures that survived, we must have a basic bias for learning that promotes our life, and if we live by being rational creatures constantly attempting to understand the world who persist in communities that have lasted for generations, we must have a basic bias towards a philosophy which is just good enough to prevent our destruction.

That's not to say that the average human being, on their own, without self-examination, will develop a philosophy that Rand or Jesus would approve of. And it's not to say that individual human beings aren't capable of great evil - and that human communities aren't capable of greater evil towards their members.

But it does mean that humans are good enough to live on this Earth.

Our continued existence shows that even though it seems like we live in a cold and cruel universe, the cards are stacked just enough in humanity's favor for it to be possible for at least some people to thrive. It also shows that while humans are capable of great evil, the bias of humanity is stacked just enough in our favor for human existence to continue.

Rising above the average, of course, is up to you.

-the Centaur

A Really Good Question


layout.png

Recently I was driving to work and thinking about an essay by a statistician on "dropping the stick." The metaphor was about a game of pick-up hockey, where an inattentive player would be asked to "drop the stick" and skate for a while until they got their head in the game. In the statistical context, this became the practice of stopping people who were asking for help with a specific statistical task and asking them what problem they wanted to solve, because solving the actual problem is often very different from fixing their technical issue and may require completely different approaches. That gets annoying sometimes when you ask a question on a mailing list and someone asks what you're trying to solve rather than addressing the issue you've raised, but it's a good reflex to have: first ask, "What's the problem?"

Then I realized something even more important about projects that succeeded or failed in my life – successes at radical, off-the-wall projects like the emotional robot pet project or the cell phone robots with personalities project or the 3D object visualization project, and failures at seemingly simpler problems like a tweak to a planner at Carnegie Mellon or a test domain for my thesis project or the failed search improvement I worked on during my third year at the Search Engine that Starts with a G. One of the things I noticed about the successes is that before I tackled the problem proper, I did a hard-core, intensive research effort to understand the problem space, then I chose a method of approach, and then I planned out a solution. Paraphrasing Eisenhower, even though the plan often had to change once we started execution, the planning was indispensable. The day-to-day immersion in the problem that you need for planning provides the mental context you need to make the right decisions as the situation inevitably changes.

In failed projects, I found that one or more of these things – the hard-core research or the planning – wasn't present, but that wasn't all that was missing. In the failure cases, I often didn't know what a solution would look like. I recently saw this from the outside when I conducted a job interview, and found that the interviewee clearly didn't understand what would constitute an answer to my question. He had knowledge, and he was trying, but his suggested moves were only analogically correct - they sounded like elements of a solution, but didn't connect to the actual features of the problem. Thinking back, a case that leapt to mind from my own experience was a project all the way back in grade school, where we had an urban planning exercise to create an ideal city. My job was to create the map of the city, and I took the problem very literally, starting with a topographical map of the city's center, river and hills. Now, it's true that the geography of a city is important - for an ideal city, you'd want a source of water, easy transport, a relatively flat area for many buildings, and at least one high point for scenic vistas. But there was one big problem with my city plan: there were no buildings, neighborhoods, or districts on it - no buildings or people! It was just the land!

Ok, so I was in grade school, and this was one of my first projects, so perhaps I could be excused for not knowing what I was doing. But the educators who set up this project knew what they were doing, and they brought on board an actual city planner to talk to us about our project. When he saw my maps, he pointed out this wasn't a city plan and sat down with all of us to brainstorm what we'd actually want in a city - neighborhoods, power plants, a city center, museums, libraries, hospitals, food distribution and industrial regions. At the time, I was saddened that my hard work was abandoned, and now in hindsight I'm saddened that the city planner didn't take a minute or two to talk about how geography affects cities before beginning his brainstorming exercise. But what struck me most about this in hindsight is that I really didn't know what constituted an answer to the problem.

suddenclarity.png  

So, I asked myself, “What counts as a solution to this problem?” – and that, I realized, is a very good question.

-the Centaur

Pictured: an overhead shot of a diorama of the control room of the ENIAC computer as seen at the Computer History Museum, and of course our friend Clarence having his sudden moment of clarity.

Prometheus is the movie you show your kids to teach them how not to do science


promvsthingalt.png  

Too Diplomatic for My Own Good

I recently watched Ridley Scott's Prometheus. I wanted to love it, and ultimately didn't, but this isn't a post about how smart characters doing dumb things to advance a plot can destroy my appreciation of a movie. Prometheus is a spiritual prequel to Alien, my second favorite movie of all time, and Alien's characters often had similar afflictions, including numerous violations of the First Rule of Horror Movies: "Don't Go Down a Dark Passageway Where No One Can Hear You if You Call For Help". Prometheus is a big, smart movie filled with grand ideas, beautiful imagery, grotesque monsters and terrifying scares. If I'd seen it before seeing a sequence of movies like Alien maybe I would have cut it more slack.

I could also critique its scientific accuracy, but I'm not going to do that. Prometheus is a space opera: very early on in the movie we see a starship boldly plying its way through the deeps, rockets blazing as it shoots towards its distant destination. If you know a lot of science, that's a big waving flag that says "don't take the science in this movie too seriously." If you want hard science, go see Avatar. Yes, I know it's a mystical tale featuring giant blue people, but the furniture of the movie --- the spaceship, the base, the equipment they use --- is so well thought out it could have been taken from Hal Clement. Even concepts like the rock-lifting "flux tube," while highly exaggerated, are based on real scientific ideas. Prometheus is not Avatar. Prometheus is like a darker cousin to Star Trek: you know, the scary cousin from the other branch you only see at the family Halloween party, the one that occasionally forgets to take his medication. He may have flunked college physics, but he can sure spin a hell of a ghost story.

What I want to do is hold up Prometheus as a bad example of how to do science. I'm not saying Ridley Scott or the screenwriters don't know science, or even that they didn't think of or even film sequences which showed more science, sequences that unfortunately ended up on the cutting room floor --- and with that I'm going to shelve my caveats. What I'm saying is that the released version of Prometheus presents a set of characters who are really poor scientists, and to show just how bad they are I'd like to compare them with the scientists in the 2011 version of The Thing, who, in contrast, do everything just about right.

But Wait ... What's a "Scientist"?

Good question. You can define them by what they do, which I'm going to try to do with this article.

But one thing scientists do is share their preliminary results with their colleagues to smoke out errors before they submit work for publication. While I make a living twiddling bits and juggling words, I was trained as (and still fancy myself as) a scientist, so I shared an early version of this essay with colleagues also trained as scientists --- and one of them, a good friend, pointed out that there's a whole spectrum of real-life scientists, from the careful to the irresponsible to the insane.

He noted "there's the platonic ideal of the Scientist, there's real-life science with its dirty little secrets, and then there's Hollywood science which is often and regrettably neither one of the previous two." So, to be clear, what I'm talking when I say scientist is the ideal scientist, Scientist-with-a-Capital-S, who does science the right way.

But to understand how the two groups of scientists in the two movies operate ... I'm going to have to spoil their plots.

Shh ... Spoilers

SPOILERS follow. If you don't want to know the plots of Prometheus and The Thing, stop reading as there are SPOILERS.

Both Prometheus and The Thing are "prequels" to classic horror movies, but the similarities don't stop there: both are stories about scientific expeditions to a remote place to study alien artifacts that prove unexpectedly dangerous when virulent, mutagenic alien life is found among the ruins. The Thing even begins with a tractor plowing through snow towards a mysterious, haunting signal, a shot which makes the tractor and its track look like a space probe rocketing towards its target --- a shot directly paralleling the early scenes of Prometheus that I mentioned earlier.

Both expeditions launch in secrecy, understandably concerned someone might "scoop" the discovery, and so both feature scientists "thrown off the deep end" with a problem. Because they're both horror movies challenging humans with existential threats, and not quasi-documentaries about how science might really work, both groups of scientists must first practice science in a "normal" mode, dealing with the expectedly unexpected, and then must shift to "abnormal" mode, dealing with unknown unknowns. "Normal" and "abnormal" science are my own definitions for the purpose of this article, to denote the two different modes in which science seems to get done in oh so many science fiction and horror movies --- science in the lab, and science when running screaming from the monster. However, as I'll explain later, even though abnormal science seems like a feature of horror movies, it's actually something that real scientists actually have a lot of experience with in the real world.

But even before the scientists in Prometheus shift to "abnormal" mode --- heck, even before they get to "normal" mode --- they go off the rails: first in how they picked the project in the first place, and second, in how they picked their team.

Why Scientists Pick Projects

You may believe Earth's Moon is made of cheese, but you're unlikely to convince NASA to dump millions into an expedition to verify your claims. Pictures of a swiss cheese wheel compared with the Moon's pockmarked surface won't get you there. Detailed mathematical models showing the correlations between the distribution of craters and cheese holes are still not likely to get you a probe atop a rocket; at best you'll get some polite smiles, because that hypothesis contradicts what we already know about the lunar surface. If, on the other hand, you cough up a spectrograph reading showing fragments of casein protein spread across the lunar surface, side by side with replication by an independent lab --- well, get packing, you're going to the Moon. What I'm getting at is that scientists are selective in picking projects --- and the more expensive the project, the more selective they get.

In one sense, science is the search for the truth, but if we look at the history of science, it isn't about proving the correctness of just any old idea: ideas are a dime a dozen. Science isn't about validating random speculations sparked by noticing that different things look similar - for every alignment between the shorelines of Africa and South America that leads to a discovery like plate tectonics, there's a spurious match between the shape of the Pacific and the shape of the Moon that leads nowhere. (Believe it or not, this theory, which sounds ridiculous to us now, was a serious contender for the origin of the Moon for many years, first proposed in 1881 by Osmond Fisher.) Science is about following leads --- real evidence that leads to testable predictions, like not just a shape match between continents, but actual rock formations which are mirrored, down to their layering and fossils.

There's some subtlety to this. Nearly everybody who's not a scientist thinks that science is about finding evidence that confirms our ideas. Unfortunately, that's wrong: humans are spectacularly good at latching onto evidence that confirms our ideas and spectacularly bad at picking up on evidence that disconfirms them. So we teach budding scientists in school that the scientific method depends on finding disconfirming evidence that proves bad ideas wrong. But experienced scientists funding expeditions follow precisely the opposite principle, at least at first: we need to find initial evidence that supports a speculation before we follow it up by looking for disconfirming evidence.

That's not to say an individual scientist can't test out even a wild and crazy idea, but even an individual scientist only has one life. In practice, we want to spend our limited resources on likely bets. For example, Einstein spent the entire latter half of his life trying to unify gravitation and quantum mechanics, but he'd probably have been better off spending a decade each on three problems rather than spending thirty years in complete failure. When it gets to a scientific expedition with millions invested and lives on the line, the effect is more pronounced. We can't simply follow every idea: we need good leads.

Prometheus fails this test, at least in part. The scientists begin with a good lead: in a series of ancient human cultures, none of whom have had prior contact, they find almost identical pictures, all of which depict an odd tall creature pointing to a specific constellation in the sky not visible without a telescope, a constellation with a star harboring an Earthlike planet. As leads go, that's pretty good: better than mathematical mappings between Swiss cheese holes and lunar crater sizes, but not quite as good as a spectrograph reading. It's clearly worth conducting astronomical studies or sending a probe to learn more.

But where the scientists fail is they launch a trillion dollar expedition to investigate this distant planet, an expedition which, we learn later, was actually bankrolled not because of the good lead but because of a speculation by Elizabeth, one of the paleontologists, that the tall figure in the ancient illustration is an "Engineer" who is responsible for engineering humanity, thousands of years ago. This speculation is firmly back in the lunar cheese realm because, as one character points out, it contradicts an enormous amount of biological evidence. What makes it worse is that Elizabeth has no mathematical model or analogy or even myth to point to on why she believes it: she says she simply chooses to believe it.

If I were funding the Prometheus expedition, I'd have to ask: why? Simply saying she later proves to be right is no answer: right answers reached the wrong way still aren't good science. Simply saying she has faith is not an answer; that explains why she continues to hold the belief, but not how she formed it in the first place. Or, more accurately, how she justified her belief: as one of my colleagues reading this article pointed out, it really doesn't matter why she came to believe it, only how she came to support it. After all, the chemist Kekulé supposedly figured out benzene's ring shape after dreaming about a snake biting its tail --- but he had a lot of accumulated evidence to support that idea once he had it. So, what evidence led Elizabeth to believe that her intuition was correct?

Was there some feature of the target planet that makes it look like it is the origin of life on Earth? No, from the descriptions, it doesn't seem Earthlike enough. Was there some feature of the rock painting that makes the tall figures seem like they created humans? No, the figure looks more like a herald. So what sparked this idea in her? We just don't know. If there was some myth or inscription or pictogram or message or signal or sign or spectrogram or artifact that hinted in that direction, we could understand the genesis of her big idea, but she doesn't tell us, even though she's directly asked, and has more than enough time to say why using at least one of those words. Instead, because the filmmakers are playing with big questions without really understanding how those kinds of questions are asked or answered, she just says it's what she chooses to believe.

But that's not a good reason to fund a trillion dollar scientific expedition. Too many people choose to believe too many things for us to send spacecraft to every distant star that someone happens to wish upon --- we simply don't have enough scientists, much less trillions. If you want to spend a trillion dollars on your own idea, of course, please knock yourself out.

Now, if we didn't know the whole story of the movie, we could cut them slack based on their other scientific lead, and I'll do so because I'm not trying to bash the movie, but to bash the scientists that it depicts. And while for the rest of this article I'm going to be comparing Prometheus with The Thing, that isn't fair in this case. The team from Prometheus follows up a scientific lead for a combination of reasons, one pretty good, one pretty bad. The team from The Thing finds a fricking alien spacecraft, or, if you want to roll it back further, they find an unexplained radio signal in the middle of a desert which has been dead for millions of years and virtually uninhabited by humans in its whole history. This is one major non-parallel between the two movies: unlike the scientists of Prometheus, who had to work hard for their meager scraps of leads, the scientists in The Thing had their discovery handed to them on a silver platter.

How Scientists Pick Teams

Science is an organized body of knowledge based on the collection and analysis of data, but it isn't just the product of any old data collection and analysis: it's based on a method, a method which is based on analyzing empirical data objectively in a way which can be readily duplicated by others. Science is subtle and hard to get right. Even smart, educated, well-meaning people can fool themselves, so it's important for the people doing it to be well trained so that common mistakes in evidence collection and reasoning can be avoided.

Both movies begin with real research to establish the scientific credibility of the investigators. Early in Prometheus, the scientists Elizabeth and Charlie are shown at an archaeological dig, and later the android David practices some very real linguistics --- studying Schleicher's Fable, a highly speculative but non-fictional attempt to reconstruct early human languages --- to prepare for a possible meeting with the Engineers that Elizabeth and Charlie believe they've found. Early in The Thing, Edvard's team is shown carefully following up on a spurious radio signal found near their site, and the paleontologist Kate uses an endoscope to inspect the interior of a specimen extracted from pack ice (just to be clear, one not related to Edvard's discovery).

But in Prometheus, things almost immediately begin to go wrong. The team which made the initial discovery is marginalized, and the expedition to study their results is run by a corporate executive, Meredith, who selects a crew based on personal loyalty or willingness to accept hazard pay. Later, we find there are good reasons why Meredith picked who she did --- within the movie's logic, well worth the trillion dollars her company spent bankrolling the expedition --- but those criteria aren't scientific, and they produce an uninformed, disorganized crew whose expedition certainly explores a new world, but doesn't really do science.

The lead scientist of The Thing, Edvard, in contrast, is a scientist in charge of a substantial team on a mission of its own when he makes the discovery that starts the movie. He studies it carefully before calling in help, and when he does call in help, he calls in a close friend --- Sander, a dedicated scientist in his own right, so world-renowned that Kate recognizes him on sight. He in turn selects Kate based on another personal recommendation, because he's trying to select a team of high caliber. Sander clashes with Kate when she questions his judgment, but these are just disagreements and don't lead to foul consequences.

In short, The Thing picks scientists to do science, and this difference from Prometheus shows up almost immediately in how they choose to attack their problems.

Why Scientists Don't Bungee Jump Into Random Volcanoes

Normal science is the study of things that aren't unexpectedly trying to kill you. There may be a hazardous environment, like radiation or vacuum or political unrest, and your subject itself might be able to kill you, like a virus or a bear or a volcano, but in normal science, you know all this going in, and can take adequate precaution. Scaredycats who aren't willing to study radioactive bears on the surface of Mount Explodo while dodging the rebel soldiers of Remotistan should just stay home and do something safe, like simulate bear populations on their laptops using Mathematica. The rest of us know the risks.

Because risk is known, it's important to do science the right way. To collect data not just for the purposes of collecting it, but to do so in context. If I've seen a dozen bees today, what conclusions can you draw? Nothing. You don't know if I'm in a jungle or a desert or even if I'm a beekeeper. Even if I told you I was a beekeeper and I'd just visited a hive, you don't even know if a dozen bees is a low number, a high number, or totally unexpected. Is it a new hive just getting started, or an old hive dying out? Is it summer or winter? Did I record at noon or midnight? Was I counting inside or outside the hive? Even if you knew all that, you can interpret the number better if you know the recent and typical statistics for beehives in that region, plus maybe the weather, plus ...

What I'm getting at is that it does you no good as a scientist to bungee jump into random volcanoes to snap pictures of bubbling lava, no matter how photogenic that looks on the cover of National Geographic or Scientific American. Science works when we record observations in context, so we can organize the data appropriately and develop models of its patterns, explanations of its origins and theories about its meaning. Once again, there's a big difference in the kind of normal-science data collection depicted in Prometheus and The Thing. With one or two notable exceptions, the explorers in Prometheus don't do organized data collection at all - they blunder around almost completely without context.

How (Not) to Do Normal Science

In Prometheus, after spending two whole years approaching the alien world LV223, the crew lands and begins exploring without more than a cursory survey. We know this because the ship arrives on Christmas, breaks orbit, flies around seemingly at random until one of our heroes leaps from his chair because he's sighted a straight line formation, and then the ship lands, disgorging a crew of explorers eager to open their Christmas presents. We can deduce from this that less than a day has passed from arrival to landing, which is not enough time to do enough orbits to complete a full planetary survey. We can furthermore deduce that the ship had no preplanned route because then the destination would not have been enough of a surprise for our hero to leap out of his chair (despite the seat-belt sign) and redirect the landing. Once the Prometheus lands, the crew performs only a modest atmospheric survey before striking out for the nearest ruin. In true heroic space opera style this ruin just happens to have a full stock of all the interesting things that they might want to encounter, and as a moviegoer, I wasn't bothered by that. But it's not science.

Planets are big. Really big. The surface area of the Earth is half a billion square kilometers. The surface area of a smaller world, one possibly more like LV223, is just under a hundred fifty million square kilometers. You're not likely to find anything interesting just by wandering around for a few hours at roughly the speed of sound. The crew is shown to encounter a nasty storm because they don't plan ahead, but even an archaeological site is too big to stumble about hoping to find something, much less the mammoth Valley of the Kings style complex the Prometheus lands in. Here the movie both fails and succeeds at showing the protagonists doing science: they blunder out on the surface despite having perfectly good mapping technology (well, since this is one of my actual areas of expertise, really awesome mapping technology), which they later use to map the inside of a structure, enabling one of the movie's key discoveries. (The other key discovery is made as a result of David spending two years studying ancient languages so he can decipher and act on alien hieroglyphs, and he has his own motives for deliberately keeping the other characters in the dark, so props to the filmmakers there: he's doing bad science for his team, but shown to be doing good science on his own, for clearly explained motives).

SO ANYWAY, a scientific expedition would have been mapping from the beginning to provide context for observations and to direct explorations. A scientific expedition would have released an army of small satellites to map the surface; left them up to predict weather; launched a probe to assess ground conditions; and, once they landed, launched that awesome flock of mapping drones to guide them to the target. The structure of the movie could have remained the same - and still shown science.

The Thing provides an example of precisely this behavior. The explorers in The Thing don't stumble across it. They're in Antarctica on a long geological survey expedition to extract ice cores. They've mapped the region so thoroughly that spurious radio transmissions spark their curiosity. Once the ship and alien are found, they survey the area carefully, both horizontally and vertically, build maps, assess the structure of the ice, and set up a careful archeological dig. When the paleontologist Kate arrives, they can tell her where the spacecraft and alien are, roughly how long the spacecraft has been there, and even what the fracturability of the ice is like around the specimen based on geological surveys - and they've already collected all the necessary equipment. Kate is so impressed she exclaims that the crew of the base doesn't really need her. And maybe they don't. But they're careful scientists on the verge of a momentous discovery, and they don't want to screw it up.

Real Scientists Don't Take off Their Helmets

Speaking of screwing up momentous discoveries, here's a pro tip: don't take off your helmet on an alien world, even if you think the atmosphere is safe, if you later plan to collect biological samples and compare them with human DNA, as the crew does in Prometheus. Humans are constantly flaking off bits of skin and breathing out droplets of moisture filled with cells and fragments of cells, and taking off a helmet could irrevocably contaminate the environment. The filmmakers can't even point to the idea that you could tell human from alien DNA because ultimately chemicals are chemicals: the way you tell human from alien DNA is to collect and sequence it, and in an alien environment filled with unknown chemicals, human-deposited samples could quickly break down into something that looked alien. You might get lucky ... but you probably won't. Upon reading this article, one of my colleagues complained to me that this was an unfair criticism because it's a simply a filmmaker's convention to let the audience see the faces of the actors, but regardless of whether you buy that for the purpose of making an engaging space opera with great performances by fine actors, it nevertheless portrays these scientists in a very bad light. No crew of careful scientists is going to take off their helmets, even if they think they've mysteriously found a breathable atmosphere.

The movie Avatar gets this right when, even in a dense jungle, one character notices another open a sample container with his mouth (to keep his hands free) and points out that he's contaminated the sample. The Thing also addresses the same issue: one key point of contention between paleontologist Kate and her superior Sander is that Sander wants to take a sample to confirm that their find is alien and that Kate does not because she doesn't want the sample to be contaminated. Both are right: Kate's more cautious approach preserves the sample, while Sander's more experienced approach would have protected the priority of his discovery from other labs if it really was alien, or let them all down early if the sample just was some oddly frozen Earth animal. My sympathy is with Kate, but my money is actually on Sander here: with a discovery as important as finding alien life on Earth, it's critically important to exclude as soon as possible the chance that what we've found is actually a contorted yak. More than enough of the sample remained undisturbed, and likely uncontaminated, to guard against Kate's fears.

Unfortunately, neither the crew of Prometheus nor the crew of The Thing gets the chance to be proved lucky or right.

How (Not) to Do Abnormal Science

Abnormal science is my term for what scientists do when "everything's gone to pot" and lives are on the line. This happens more often than you might think: the Fukushima Daiichi nuclear disaster and the Deepwater Horizon oil spill are two recent examples. Strictly speaking, what happens in abnormal science isn't science, that is, the controlled collection of data designed to enhance the state of human knowledge. Instead, it's crisis mitigation, a mixture of first responses, disaster management and improvisational engineering designed to blunt the unfolding harm. Even engineering isn't science; it's a procedure for tackling a problem by methodically collecting what's known to set constraints on a library of best practices that are used to develop solutions. The tools of science may get used in the improvisational engineering that happens after a disaster, but it's rarely a controlled study: instead, what gets used are the collected data, the models, the experimental methods and more importantly the precautions that scientists use to keep themselves from getting hurt.

One scientific precaution often applied in abnormal science which Prometheus and The Thing both get right is quarantine. When dealing with a destructive transmissible condition, like an infectious organism or a poisonous material, the first thing to do is to quarantine it: isolate the destructive force until it's neutralized, until the vector of spread is stopped, or until the potential targets are hardened or inoculated. After understandable moments of incredulity, both the crew of the Prometheus and The Thing implement quarantines to stop the spread of the biological agent and then decisively up the ante once its full spread is known.

The next scientific precaution applied in abnormal science is putting the health of team members first. So, for goodness's sake, if you've opened your helmet on an alien world, start feeling under the weather, and then see a tentacle poke out of your eye, don't shrug it off, put your helmet back on, and venture out onto a hostile alien world as part of a rescue mission! On scientific expeditions, ill crewmembers do not go on data collection missions, nor do they go on rescue missions. Doing so just puts you and everyone around you in danger - and the character in question in Prometheus pays with his life for it. In The Thing, in contrast, when a character gets mildly sick after an initial altercation, the team immediately prepares to medevac him to safety (this is before the need for a quarantine is known).

Another precaution observed in abnormal science is full information sharing. In both the Fukushima Daiichi and Deepwater Horizon disasters, lack of information sharing slowed the response - though in the Fukushima case it was a result of the general chaos of a country-rocking earthquake, while in the Deepwater Horizon case it was a deliberate and in some cases criminal effort at information hiding in an attempt to create positive spin. The Prometheus crew has even the Deepwater Horizon event beat. On a relatively small ship, there are no fewer than seven distinct groups, all of whom hide critical information from one another - sometimes without even a good motivation to do so. (For the record, these groups are (1) the mission sponsor Weyland, who hides himself and the real mission from the crew; (2) the mission leader Meredith, who's working both for and against Weyland; (3) the android David, who's both working with and hiding information from Weyland, Meredith, the crew and everyone else; (4) the regular scientific crew, trying to do their jobs; (5) the Captain, who directs the crew via comlink and hides information for no clear reason; (6) the scientist Charlie, who hides information about his illness from the crew and from his colleague and lover Elizabeth; and finally (7) Elizabeth, who like the crew is just trying to do her job, but ends up having to hide information about her alien "pregnancy" from them to retain freedom of action.) There are good story reasons why everyone ends up so opposed, but as an example of how to do science or manage a disaster ... well, let's say predictable shenanigans ensue.

In The Thing, in contrast, there are three groups: Kate, who has a conservative approach, Sander, who has a studious approach, and everyone else. Once the shit hits the fan, both Kate and Sander share their views with everyone in multiple all-hands meetings (though Sander does at one point try to have a closed-door meeting with Kate to sort things out). Sander pushes for a calm, methodical approach, which Kate initially resists but then participates in, helping her make key discoveries which end up detecting the alien presence relatively early. Then Kate pushes for a quarantine approach, which Sander resists but then participates in, volunteering key ideas which the alien force thinks are good enough to be worth sabotaging. Only at the end, when Kate suggests a test that the uninfected Sander knows full well will produce a false positive for him, do they really come to serious loggerheads - but they're never given a chance to resolve this, as the science ends and the action movie starts at that point.

The Importance of Peer Review

I enjoyed Prometheus. I saw it twice. I'll buy it on DVD or Blu-Ray or something. I loved its focus on big questions, which it raised and explored and didn't always answer. It was pretty and gory and pretty gory. It pulled off the fair trick of adding absolute classic scenes to the horror genre, like Elizabeth's self-administered Caesarean section, and absolute classic scenes to the scifi genre, like David in the star map sequence - and perhaps even the crashing alien spacecraft inexorably rolling towards our heroes counts as both classic horror and classic science fiction at the same time.

But as Ridley Scott was quoted as saying, Prometheus was a movie, not a science lesson. The Thing is too. The scientific backdrop of both films is a full-spectrum mixture, running from dead-on correct (the vastness of space) to questionable (where do the biological constructs created by the black goo in Prometheus get their added mass? how can the Thing possibly be so smart that it can simulate whole humans so well that no one can tell them apart?) to genre tropes (faster-than-light travel, alien life being compatible with human life) to downright absurd (humanoid aliens creating human life on Earth, hyperintelligent alien monsters expert at imitation screaming and physically assaulting people rather than simply making them coffee laced with Thing cells).

I'm not going to pretend either movie got it right. Neither Prometheus nor The Thing is a good source of scientific facts - both include a great deal of cinematic fantasy.

But one of them can teach you how to do science.

-the Centaur

Pictured: a mashup of The Thing and Prometheus's movie posters, salsa'd under fair use guidelines.

Thanks to: Jim Davies, Keiko O'Leary, and Gordon Shippey for commenting on early drafts of this article. Many of the good ideas are theirs, but the remaining errors are my own.

Too Many Projects … or an External Memory?

centaur 0

too-many-projects-detail.png

Anyone who knows me in detail knows I'm a pile person. You can see all the windows open above, but that's not the half of it: I had 14 tabs open in Firefox, 3 windows with 17, 13, and 3 tabs open in Chrome, and ten windows open in Finder, Mac OS X's file browser. I hammer my operating systems, loading them with as many windows, programs, files and fonts as they can take.

IMG_20120204_115758.jpg

But it's not just operating systems. I've got a huge folder of todos in my jacket pocket, a pile of books in my bookbag, on the table, in my car. My library, office, spare office and even kitchen table are filled with piles, as is my desk at work.

On the one hand, this could simply be because I'm a hoarder and need to learn to clean up more, and maybe I do. But most of the piles are thematically organized: in the shot above you can see (slightly overlapping) piles for a young adult and urban fantasy series, an art pile, a pile of bills, CDs being organized, and so on.

Some of this is, again, a product of mess, but the rest of it is a deliberate strategy. A collection of books on a topic serves as an external memory that augments the goo we have in our heads. This is part of the theory of situated cognition, which posits that our memories are elaborated through interaction with the external world.

William Clancey, one of the founders of situated cognition, puts it this way: his knowledge of what to take on a fishing trip isn't in his internal memory; it's in his fully stocked tacklebox, which represents the stored wisdom of many, many fishing trips. If he were to lose that tacklebox, he'd lose a portion of his memory, and become less effective.

My toiletry bag for flying serves the same role. Its contents have been refined over dozens, maybe even hundreds of trips. It doesn't just have a toothbrush and toothpaste, contact lens solution and hairspray, it has soap, shampoo, cough drops, nail clippers, bandaids and more. If I forget it, and try to recreate the toiletries that I need for a trip on the fly, I almost always have to go back to the store.

Situated cognition has been challenged, and I couldn't find the perfect reference that summarized what Clancey said in the Cognitive Science Brownbag talk I attended at Georgia Tech so many years ago. But I know how I work, and I know how it's influenced by that framework.

When I'm tackling a project, I build a pile. It might be a pile of tabs in a browser, folders of links in my bookmarks, files in a directory, or books from my mammoth library. These serve as references, the raw material I use to generate my writing, but they also serve as something more: a pointer that can return me to an old mental state.

If I have to close my browser, reboot my machine, put a project aside, switch to another book, I can keep the pile. I have mammoth collections of files and bookmarks, and a mammoth library with something like 30 bookcases (that's cases, not shelves). And when I'm ready to reopen the project, I can start work on it again.

I've done that recently, restarting both my work on the "Watch on a Tangled Chain" interactive fiction and an exploration of programming languages - one project I hadn't worked on for a year, and one maybe for several years. But when I found the files, I was able to resume my work almost effortlessly. With physical piles of books, the process is even more joyful, as it involves reading snippets from half a dozen or so books until I'm back into the mindset.

too-many-projects-screenshot.png

So thank you, my poor processor, my crowded browser, my packed library. You make me more than I am on my own.

-the Centaur

The Future of Warfare

centaur 0

ogre-4.jpg

Every day, a new viral share sparks through the Internet, showing robots and drones and flying robot drones playing tennis while singing the theme to James Bond. At the same time we've seen shares of area-denying heat rays and anti-speech guns that disrupt talking ... and it all starts to sound a little scary. Vijay Kumar's TED talk on swarms of flying robots reminded me of something I've been saying privately to friends for years: the military applications of flying robots are coming ... for the first time, we'll have a technology that can replace infantry at taking and holding ground.
The four elements of military power are infantry, who take and hold ground; cavalry, which breaks up infantry; artillery, which softens up positions from a distance; and supply, which moves the first three elements into position. In our current world those are still human infantry, human-piloted tanks, human-piloted bombers, and human-piloted aircraft carriers.
We already have automated drones for human-free (though human-controlled) artillery strikes. Soon we will have webs of armed flying robots acting as human-free infantry to hold ground. Autonomous armored vehicles acting as human-free cavalry are farther out, because the ground is a harder problem than the air, but they can't be too far in the future. We can assume aircraft carriers and home bases will remain manned for a while.

So soon, into cities that have been softened up by drone strikes, we'll send large tanks like OGREs, trundling in to serve as refueling stations for armies of armored flying helicopters that will spread out to control the ground. No longer will we need to throw lives away to hold a city ... we'll be able to do it from a distance, with robots. One of the reasons I love The Phantom Menace is that it shows this kind of military force in action.
Once a city is taken, drones can be used for more than surveillance ... a drone with the ability to track a person can become a flying assassin, or at least force someone to ditch any networked technology. Perhaps drones will even be able to loot items or, if they're large and capable enough, kidnap people.
It would be enormously difficult to fight such a robotic force. A robotic enemy can use a heat ray to deny people access to an area or a noise gun to flush them out. Camera detection technology can be used to flush out anyone trying to deploy countermeasures. Radar flashlights can be used to find hiding humans by their heartbeats, speech jammers can be used to prevent them from coordinating, and the face detection you probably already have on your phone will work against anyone venturing out in the open. I've seen a face detector in the lab, combined with a targeting system and a nerf gun, almost nail someone ... and now a similar system is in the wild. The system could destroy anyone who had a face.
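In fact, basic face detection is already a commodity. Here's a minimal sketch using OpenCV's bundled Haar cascade classifier, just to show how little code it takes; the image file name and detector parameters are my own illustrative choices, not taken from any real system:

    # A minimal face-detection sketch using OpenCV's bundled Haar cascade.
    # Illustrative only: the input file name and parameters are assumptions.
    import cv2

    # Load the frontal-face cascade that ships with the opencv-python package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("crowd.jpg")                  # hypothetical input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # the detector works on grayscale

    # detectMultiScale returns one (x, y, width, height) box per detected face.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        print("face at (%d, %d), size %dx%d" % (x, y, w, h))

That's the whole detector; pointing it at a video stream, or at a pan-tilt mount, is just more plumbing.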
And don't get me started on terminators and powered armor.
Now, I am a futurist, a transhumanist, and a Ph.D. in artificial intelligence, very interested in promoting a better future ... but I'm all too familiar with the false prophecies of the field. Critics of futurism are fond of pointing out that many glistening promises of the future have never come to pass. But we don't need a full success for these technologies to be deployed. Many of the pieces already exist, and even if they're only partially deployed, partially effective, and mostly controlled by humans ... they could be awesome weapons of warfare ... or repression.
The future of warfare is coming. And it's scary. I'd be tempted to say we can't stop it, and on one level I don't think we can ... but we've had some success in turning back from poison gas, we're making progress on land mines, and maybe even on nuclear weapons. So it is possible to step back from the brink ... but I don't want to throw the baby out with the bathwater the way we seem to have done with nuclear power (to the climate's great detriment). As my friend Jim Davies said to me, 99% of the technologies we'd need to build killbots have nothing to do with killbots, and could do great good.
In the Future of Warfare series on this blog, I'm going to monitor developing weapons trends, both military systems and civilian technologies, realistic and unrealistic, in production and under speculation. I'm going to try to apply my science fiction writer's hat to imagine possible weapons systems, my scientist's hat to explore the technologies to build them, and my skeptic's hat to help discard the ones that don't hold water. Hint: it's highly likely people will invent new ways to hurt each other ... but highly unlikely that Skynet will decide our fate in a millisecond.
A bright future awaits us in the offworld colonies ... but if we want to get there, we need to be careful about the building blocks we use.
-the Centaur
Pictured: an OGRE miniature. This blogpost is an expansion of an earlier Google+ post.

The Rules Disease at Write to The End

centaur 0


IMG_20120304_111441.jpg

I've a new essay on writing at the Write to the End blog, called "The Rules Disease." A preview:

Anyone who seriously tackles the craft of writing is likely to have encountered a writing rule, like “Show, Don’t Tell,” or “Never Begin a Sentence with a Conjunction.” “Don’t Split Infinitives” and “Never Head Hop” are also popular. The granddaddy of all of them, “Omit Needless Words,” is deliciously self-explanatory … but the ever baffling “Murder Your Darlings” is a rule so confusing it deserves its own essay.

This is part of my ongoing column The Centaur's Pen.

-the Centaur

Scientific Citations in Popular Literature

centaur 0
Lightly edited from a recent email:
Here's the revised version. Rather than just including linked references [in that middle section as you suggested], I actually expanded that section so that it was clear who I was citing and what I was claiming they said. Citations work for science types, but I want to learn (create? promote?) a new way of including references for popular literature: rather than saying something like, "Scientists think it's OK to start sentences with a conjunction [Wolfram 2002]," I instead want to say things like, "In the foreword of his mammoth tome A New Kind of Science, computer scientist Stephen Wolfram defends starting sentences with conjunctions, arguing forcefully that it makes long, complex arguments easier to read." Yes, it's longer, but it's more honest, and the [cite] style was aimed at scientific papers with enormously compressed length requirements. Tell me what you think.
What do you think about the use of citations in non-scientific literature? I think we can do better; I'm just not sure what that better way looks like yet. Textbooks have generally solved this problem with "info boxes," but that's not always appropriate.

-the Centaur

Gödel, Escher, Bach: An Eternally Inspiring Tome

centaur 0

IMG_20120301_212447.jpg

This is the book that got me started on artificial intelligence ... and now has inspired me again to attack my craft with greater vigor. I was writing an essay for The Centaur's Pen column for the Write to The End site and realized it depended on a concept - true, but unprovable theorems - which isn't in wide circulation. So I've started an essay on that topic for this site, and decided to go reread Gödel, Escher, Bach, the book which introduced me to the concept.

At the writing group, the topic of the essay and of Gödel, Escher, Bach came up, and we all started discussing how intricate, how rewarding, and how friendly Hofstadter's immense tome is. It's a work of genius that continues to stagger me to this day. And then my writing friends told me that the new edition has a foreword with the entire backstory of how the book came to be.

I picked it up last night, and reading the new intro I was gratified to learn that I understood his basic thesis - that conscious intelligence arises from bare matter by grounding its symbols in correspondence to reality, then inexorably turning that grounding inward into a spiral of self-reference with no end. Hofstadter and I might disagree about what's sufficient to produce conscious intelligence, but we'd just be quibbling about details, because I think he nailed a necessary component.

But after the opening of the foreword, when I began to read the story of how this 750-page Pulitzer Prize-winning book started its life as a 20-page letter that Hofstadter decided needed to be turned into a pamphlet, I was stunned.

He wrote it in 5 years.

Well, it actually took six years to complete, because he typeset it himself - through a happy-but-not-at-the-time accident, twice - producing an amazing work that was polished far beyond his original intention. But he wrote it while in graduate school, while teaching classes, while traveling cross-country. He put it down for a bit to finish his PhD thesis itself, but basically the book's a white-hot blaze of inspiration polished to pure excellence.

I'm inspired, all over again.

-the Centaur

efface[john-mccarthy;universe]

centaur 0
John McCarthy, creator of Lisp and one of the founders of the field of artificial intelligence, has died. He changed the world more than Steve Jobs ... but in a far subtler way, by laying the foundation for programs like Apple's Siri through his artificial intelligence work, or more broadly by laying the foundation for much of modern computing through innovations like the IF-THEN-ELSE formalism. It's important not to overstate the impact of great men like John and Steve; artificial intelligence pioneers like Marvin Minsky would have pushed us forward without John, and companies like Xerox and Microsoft would have pushed us forward without Steve. But we're certainly better off, and farther along, with their contributions.

I have only three stories to tell about John McCarthy. The third story is that I last saw him at a conference at IBM, in a mobile scooter and not looking very well. Traveling backwards in time, the second story is that I spoke with one of his former graduate students, who saw a John McCarthy poster in my office and told me John's illness had progressed to the point where he basically couldn't program any more, and that he was feeling very sad about it.

But what I want to remember is my first encounter with John ... it's been a decade and a half, so my memory's fuzzy, but I recall it was at AAAI-97 in Providence, Rhode Island. I'd arrived at the conference in a terrible snafu and had woken up a friend at 4 in the morning because I had no place to stay. I wandered the city looking for H.P. Lovecraft landmarks and had trouble finding them, though I did see a house some think inspired Dreams in the Witch House. But near the end, at a dinner for AI folks - I want to say at Waterplace Park, but I could be misremembering - I bumped into John McCarthy. He was holding court at the end of the table, and as the evening progressed I ended up following him and a few friends to a bar, where we hung out for an evening. And there, the grand old man of artificial intelligence, still at the height of his powers, regaled the wet-behind-the-ears graduate student from Atlanta with tales of his grand speculative ideas, beyond those of any science fiction writer, to accelerate galaxies to the speed of light to save shining stars from the heat death of the universe.

We'll miss you, John.

-Anthony

Image stolen shamelessly from Zach Beane's blog. The title of this post is taken from the Lisp 1.5 Programmer's Manual, and is the original, pre-implementation Lisp M-expression notation for code to remove an item from a list.
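If you're curious what that looks like in practice, here's a rough sketch of the same "remove an item from a list" idea in modern Python - my own non-destructive approximation, not McCarthy's original definition, which was written in M-expressions over Lisp cons cells:

    # A rough, non-destructive Python approximation of efface[x;l]:
    # return a copy of the list l with the first occurrence of x removed.
    # McCarthy's original operated on cons cells; this is just an illustrative sketch.
    def efface(x, l):
        if not l:                          # empty list: nothing to remove
            return []
        if l[0] == x:                      # found it: drop the head, keep the tail
            return l[1:]
        return [l[0]] + efface(x, l[1:])   # otherwise keep the head, recurse on the tail

    print(efface("john-mccarthy", ["lisp", "john-mccarthy", "universe"]))
    # prints: ['lisp', 'universe']

The recursion on the head and the tail of the list is the Python shadow of Lisp's car and cdr.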

Who Am I?

centaur 2
Pictured: me in front of the Bell Bridge Books promotional material for BLOOD ROCK.

Who are you? Good question. I'm Anthony Francis, and I write stuff and make computers jump through hoops for a living.

What have you done? I'm most notable for the EPIC Award-winning urban fantasy novel FROST MOON and its sequel, BLOOD ROCK, which are about magical tattoo artist Dakota Frost and are therefore hopefully both more interesting than my ~700 page PhD thesis on context-sensitive computer memory. Also on the computer side, I've done some exploration of robot emotions.

What are you doing next? Forthcoming in the Dakota Frost series is the third book, LIQUID FIRE, and this November for National Novel Writing Month I plan to work on HEX CODE, the first in a spin-off series featuring Dakota's adopted daughter Cinnamon Frost.

Are you working on anything other than Dakota Frost? I've also recently completed a rough draft of the first book in a new series, JEREMIAH WILLSTONE AND THE CLOCKWORK TIME MACHINE. A short story set in this universe, "Steampunk Fairy Chick", will be included in the forthcoming anthology UnCONventional.

What are you working on currently? I'm also currently working on a fourth new series with the working title STRANDED, a young adult science fiction novel set a thousand years in the future, featuring a spoiled young centauress who must rescue a shipload of children who have crashlanded upon a world she wanted to claim as her own. This story is set in the "Library of Dresan" universe from which this blog takes its name, and which was the setting of my very first unpublished novel "homo centauris", which I am now happily milking for its 57 billion year backstory.

Anything else? I have a flash fiction story called "The Secret of the T-Rex's Arms" to appear in the Smashed Cat Magazine. I've also published one short story, "Sibling Rivalry", in the Leading Edge Magazine. I have a webcomic, f@nu fiku, on hiatus. And I'm actively involved with helping people succeed at 24 Hour Comics through tutorials that I and my friend Nathan Vargas have put together at Blitz Comics.

Is that enough questions for now? Yes, it is. Please enjoy.

-the Centaur

What Is Consciousness?

centaur 0
Pictured: Information is Beautiful's infographic of theories of consciousness - and its guess at what I think, which is functionalism.

The ever wonderful chaps at Information is Beautiful have put up a beautiful animated infographic of many of the major theories of consciousness. Click on the graphic to the right to see them all ... I'm essentially a functionalist, but I try to keep an open mind.

OK, I can state it more forcefully than that: I believe, and believe I can point to evidence for the claim, that consciousness performs many important functions, and I want to know what they all are, how they work together, and how they relate to the other functions of the brain. If we do build up a solid picture of that, however, it won't surprise me too much if we find interesting phenomena left over that require us to rethink everything we've done up to that point.

-the Centaur

UPDATE: Ooo, there's even more to the graphic than I thought ... you can click on the brains and get it to produce a composite graphic of what "your" theory of consciousness is.

Tricking Yourself Into Doing The Right Thing

centaur 0
Ribeye Steak, Tabbouleh, and Cognitive Neuroscience

Sometimes it's hard to do the right thing. For example, I enjoy eating dinner out. There's nothing wrong with that, but it's always easier to eat out than to fix dinner: I can have high-quality, healthy food made for me while I read or write or draw, whereas cooking at home involves shopping, cooking, and cleaning that I'm fortunate enough to be able to pay other people to do (and that only through the absurd good luck that the rather esoteric work I was most interested in doing in grad school turned out to be relatively lucrative in real life).

But that's not fair to my wife, or cats, nor does it help me catch up on my pile of DVDs or my library cleaning or any of a thousand other projects that can't be done out at dinner. Sometimes I deliberately go out to dinner because I need to read or write or draw rather than do laundry, but I shouldn't do that all the time - even though I can. But, if I keep making local decisions each time I go out to eat, I'll keep doing the same thing - going out to eat - until the laundry or bills or book piles reach epic proportions.

This may not be a problem for people who are "deciders", but I'm definitely a "get-stuck-in-a-rutter". So how can I overcome this, if I'm living with the inertia of my own decision-making system? One way is to find some other reason to come home - for example, cooking dinner with my wife. Normally that's not convenient: she eats early, while I'd normally still be at work, and even if I did try to get home, dinnertime traffic puts me an hour and a half away - though we've set aside a time to do it from time to time. But right now she's out of town on business in New York, so I don't have her to help me.

So the way I've been experimenting with recently is treating myself. Over the weekend I made a large bowl of tabbouleh, one of my favorite foods, and pound cake, one of my favorite desserts. The next evening I grabbed a small plate of sushi from Whole Foods and made another dent into the tabbouleh. I had a commitment the next night, but the following night I stopped to get gas and found that a Whole Foods had opened near my house, and on the spur of the moment I decided to go in, get a ribeye steak, and cook myself another dinner, eating even more of the tabbouleh.

The tabbouleh itself is healthy, and maybe the sushi is too; the steak, not so much. Normally I wouldn't have gotten another steak, as I'd had a few recently, both home-cooked and out at restaurants; but I wanted to overcome my decision-making inertia. It would have been so easy to note the presence of the Whole Foods for later and go eat out; instead, I said explicitly to myself: you can have a steak if you eat in. And so I walked into Whole Foods, walked out a couple of minutes later with a very nice steak, went home, quickly cooked a very nice dinner, and got some work done.

Normally I prefer to eat about one steak a month (or less), sticking to mostly fish as my protein source, but I'll let my red meat quota creep up a bit if it helps me establish the habit of cooking more meals at home. Once that habit's more established, I can work on making it healthier again. Already I know ways to do it: switch to buffalo, for example, which I prefer over beef steak anyway (and I'm not just saying that as a health food nut; after you've eaten buffalo long enough to appreciate the flavor you don't want to go back).

So far, tricking myself into doing the right thing has been a success. Now let's see if we can go a step further and just do the right thing on our own.

-the Centaur

Pictured: a ribeye steak, fresh fruit and mint garnish, tabbouleh in a bed of red leaf lettuce, and Gazzaniga et al.'s textbook on Cognitive Neuroscience.

Your AI Just Wants To Have Fun

centaur 2
Upcoming AAAI Workshop: AI and Fun:

Interactive entertainment (aka computer games) has become a dominant force in the entertainment sector of the global economy. The question that needs to be explored in depth: what is the role of artificial intelligence in the entertainment sector? If we accept the premise that artificial intelligence has a role in facilitating the entertainment and engagement of humans, then we are left with new questions...


Papers due March 29...

Recursion, XKCD Style

centaur 0


OK, this is a good runner-up for the best definition of recursion. Douglas Hofstadter would be proud.

I think I'm going to start collecting these.

-the Centaur

Comic from xkcd, used according to their "terms of service":
You are welcome to reprint occasional comics pretty much anywhere (presentations, papers, blogs with ads, etc). If you're not outright merchandizing, you're probably fine. Just be sure to attribute the comic to xkcd.com.
So attributed.

Hm. Do the xkcd terms of service apply to the xkcd terms of service themselves? Is that a bit like a post about recursion referring to itself? How meta.

Best definition of recursion EVAH.

centaur 0
Pictured: a Google search for [recursion], with Google suggesting "Did you mean: recursion".

For those that don't get it, recursion in computer science refers to a process or definition that refers back to itself. For example, you could imagine "searching for your keys" in terms of searching everywhere in your house for your keys, which involves finding each room and searching everywhere in each room for your keys, which involves going into each room and looking for all the drawers and hiding places and looking everywhere in them for your keys ... and so on, until there's no smaller place to search.
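If code speaks louder than keys, here's the same idea as a tiny Python sketch; the nested house structure is made up purely for illustration:

    # Searching for your keys, recursively: a "place" is either a bare item
    # or a container of smaller places. The house below is made up for illustration.
    house = {"kitchen": ["toaster", {"junk drawer": ["spoons", "keys"]}],
             "bedroom": [{"nightstand": ["lamp"]}, "laundry pile"]}

    def find_keys(place, path=""):
        if isinstance(place, dict):        # a container of named places: search each one
            return any(find_keys(p, path + "/" + name) for name, p in place.items())
        if isinstance(place, list):        # a container of unnamed places: search each one
            return any(find_keys(p, path) for p in place)
        if place == "keys":                # a bare item: the base case of the recursion
            print("found keys in", path)
            return True
        return False

    find_keys(house)                       # prints: found keys in /kitchen/junk drawer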

So searching for [recursion] on Google involves Google suggesting that you look for [recursion]. Neat! And I'm pretty sure this is an Easter Egg and not just a bug ... it's persisted for a long time, and it's geeky enough for the company whose search button invites you to say "I'm Feeling Lucky"!

-the Centaur

More Computer Hugs

centaur 0
My colleague Ashwin Ram (pictured to the left, not above :-) has blogged about the "Emotional Memory and Adaptive Personalities" paper that he, Manish Mehta and I wrote. Go check it out on his research blog on interactive digital entertainment. It highlights the work his Cognitive Computing lab is doing to lay the underpinnings for a new generation of computer games based on intelligent computer interaction - both simulated intelligence and increased understanding of the player and his relationship with the environment.
They're putting out a surprising amount of work in this area; you should go check it out...

-the Centaur
P.S. The title of the post comes from my external blogpost on the paper, "Maybe your computer just needs a hug."

The Ogre Mark … 0.1?

centaur 1
Ogre T-Shirt

As a teenager I used to play OGRE and GEV, the quintessential microgames produced by Steve Jackson featuring cybernetic tanks called OGREs facing off with a variety of lesser tanks. For those that don't remember those "microgames", they were sold in small plastic bags or boxes, which contained a rulebook, map, and a set of perforated cardboard pieces used to play the game. After playing a lot, we extended OGRE by creating our own units and pieces from cut up paper; the lead miniature you see in the pictures came much later, and was not part of the original game.

Ogre Game

In OGRE's purest form, however, one OGRE, a mammoth cybernetic vehicle, faced off with a dozen or so other tanks firing tactical nuclear weapons ... and thanks to incredible firepower and meters of lightweight BCP armor, it would be just about an even fight. Below you see a GEV (Ground Effect Vehicle) about to have a very bad day.

Ogre vs GEV

OGREs were based (in part) on the intelligent tanks from Keith Laumer's Bolo series, but there was also an OGRE timeline that detailed the development of the armament and weapons that made tank battles make sense in the 21st century. So there was a special thrill in playing OGRE: I got to relive my favorite Keith Laumer story, in which one decommissioned, radioactive OGRE is accidentally reawakened and digs its way out of its concrete tomb to continue the fight. (The touching redemption scene, in which the tank is convinced by its former commander not to lay waste to the countryside, was, sadly, left out of the game mechanics of Steve Jackson's initial design.)

Ogre Miniature

But how realistic are tales of cybernetic tanks? AI is famous for overpromising and underdelivering: it's well nigh on 2010, and we don't have HAL 9000, much less the Terminator. But OGRE, being a boardgame, did not need to satisfy the desires of filmmakers to present a near-future people could relate to; so it did not compress the timeline to the point of unbelievability. According to the Steve Jackson OGRE chronology the OGRE Mark I was supposed to come out in 2060. And from what I can see, that date is a little pessimistic. Take a look at this video from General Dynamics:

[youtube=http://www.youtube.com/watch?v=jCAiQyuWfOk]

It even has the distinctive OGRE high turret in the form of an automated XM307 machine gun. Scary! Admittedly, the XUV is a remote-controlled vehicle and not a completely automated battle tank capable of deciding our fate in a millisecond. But that can't be far off... General Dynamics is working on autonomous vehicle navigation, and they're not alone. Take a look at Stanley driving itself to the win of the DARPA Grand Challenge:

[youtube=http://www.youtube.com/watch?v=LZ3bbHTsOL4]

Now, that's more like it! Soon, I will be able to relive the boardgames of my youth in real life ... running from an automated tank ... hell-bent on destroying the entire countryside ...

Hm.

Somehow, that doesn't sound so appealing. I have an idea! Instead of building killer death-bots, why don't we try building some of these (full disclosure: I've worked in robotic pet research):

[youtube=http://www.youtube.com/watch?v=NKAeihiy5Ck]

Oh, wait. The AIBO program was canceled ... as was the XM307. Stupid economics. It's supposed to be John Connor saving us from the robot apocalypse, not Paul Krugman and Greg Mankiw.

-the Centaur

Pictured: Various shots of OGRE T-shirt, book, rules, pieces, and miniatures, along with the re-released version of the OGRE and GEV games. Videos courtesy Youtube.