Wishful Thinking Won’t Land a Man on the Moon

203_co2-graph-021116.jpeg

Wishful thinking won’t land a man on the moon, but it might get us all killed – fortunately, though, we have people who know how to nail a good landing.

All we have to do now is preserve the fruits of their labors.

Now that a climate denier is barreling towards the presidency, other climate deniers are coming out of the woodwork, but fortunately, NASA has a great site telling the story of climate change. For those who haven’t been keeping score at home, the too-simple story is that humans have pumped vast amounts of carbon dioxide into the atmosphere in the past few decades, amounts that in the geological record resulted in disastrous temperature changes – and it’s really convenient for a lot of people to deny that.

Now, don’t worry: NASA’s results are in the public record, so even though Trump’s team has threatened to blind NASA’s earth sciences program and looks poised to conduct a witch hunt of climate change workers in the Department of Energy, and even though climate deniers are rubbing their hands in glee at the thought of a politicized EPA attacking climate science, scientists are working to save this climate data. If you want to get involved, check out climatemirror.org.

Now, I said it’s a too-simple story, and there are a lot of good references on climate change, like Henson’s The Rough Guide to Climate Change. But, technically, that could be considered a polemic, and if you want to really dig deep, you need to go for a textbook instead, one presenting a broad overview of the science without pushing an agenda. For example, Understanding Weather and Climate has a great chapter (Chapter 16 in the 4th edition) that breaks down some of the science behind global climate change (human and not) and why anthropogenic climate change is both very tricky to study – and still very worrisome.

And because I am a scientist, and I am not afraid to consider warranted arguments on both sides of any scientific question, I also want to call out Human Impacts on Weather and Climate, 2nd edition, by Cotton and Pielke, which in Chapter 8 and the Epilogue takes a more skeptical view of our predictive power. In their view, well argued in my opinion, current climate models are sensitivity studies, not forecasts: they merely establish the vulnerability of our systems to forcing factors like excess carbon, and don’t take into account modes of natural variability which might seriously alter the outcomes. And, yes, they are worried about climate groupthink.

Yes, they’re climate skeptics. But no-one is burning them at the stake. No-one is shunning them at conferences. People like me who believe in climate change read their papers with interest – especially Pielke’s work, which in some ways makes CO2 less of an issue and in some ways makes other human impacts seem worse. Still, Cotton and Pielke think the right approach is “sustained, stable national funding at a high level,” and they decry the politicization of science in either direction.

Still, do worry. Earth’s climate looks intransitive – it can get shoved from one regime to another, like the rapid-cooling Heinrich events and rapid-warming Dansgaard–Oeschger events in the geological record, possibly triggered by large-scale ice sheet breakdowns and ocean circulation changes. Yes, global warming can cause global cooling by shutting down the existing pattern of global ocean circulation – and we’re pumping enough carbon dioxide into the atmosphere to match past triggers for such events.

Do you see why people who study climate change in enough depth to see where the science is really not settled end up walking away more unsettled about the future of our planet, not less? And why we stand up and say NO when someone else comes forward saying the “science is not settled” while acting like the science has already been settled in their favor?

larsenc_msr_2016235_parallax.jpg

“Have fun warming the planet!” Just hope it doesn’t inundate Florida. I’d love to tell you that the projected 1-meter sea level rise discussed in the Florida resource isn’t as bad as the Geology.com map’s default 6-meter projection, but unfortunately, sea level seems to be rising in Florida faster than the IPCC projections, and if the science isn’t really settled, we could have a sea level rise of … jeez. After reviewing some of the research, I don’t even want to tell you. The “good” news is, hey, the seas might fall too.

“Have fun rolling the dice!”

-the Centaur

“Sibling Rivalry” returning to print

sibling-rivalry-cover-small.png

Wow. After nearly 21 years, my first published short story, “Sibling Rivalry”, is returning to print. Originally written as an experiment to try out an idea I wanted to use for a longer novel, ALGORITHMIC MURDER, “Sibling Rivalry” turned out to be a live wire, and it became my first sale, to The Leading Edge magazine, back in 1995.

“Sibling Rivalry” was born of frustrations I had as a graduate student in artificial intelligence (AI) watching shows like Star Trek, in which Captain Kirk talks a computer to death. No-one talks anyone to death outside of a Hannibal Lecter movie or a bad comic book, much less in real life, and there’s no reason to believe feeding a paradox to an AI will make it explode.

But there are ways to beat an AI, depending on how it’s constructed – and the more you know about how it works, the more potential routes there are for attack. That doesn’t mean you’ll win, of course, but … if you want to know, you’ll have to wait for the story to come out.

“Sibling Rivalry” will be the second book in Thinking Ink Press’s Snapbook line, with another awesome cover by my wife Sandi Billingsley, interior design by Betsy Miller and comments by my friends Jim Davies and Kenny Moorman, the latter of whom uses “Sibling Rivalry” to teach AI in his college courses. Wow! I’m honored.

Our preview release will be at the Beyond the Fence launch party next week, with a full release to follow.

Watch this space, fellow adventurers!

-the Centaur

Visualizing Cellular Automata

cellular-automata-v1.png

So, why’s an urban fantasy author digging into the guts of Mathematica, trying to reverse-engineer how Stephen Wolfram drew the diagrams of cellular automata in his book A New Kind of Science? Well, one of my favorite characters to write about is the precocious teenage weretiger Cinnamon Frost, who at first glance seemed just a dirty little street cat until she blossomed into a mathematical genius when watered with the right amount of motherly love. My training as a writer was in hard science fiction, so even if I’m writing about implausible fictions like teenage weretigers, I want the things that are real – like the mathematics she develops – to be right. So I’m working on a new kind of math behind the discoveries of my little fictional genius, but I’m not the youngest winner of the Hilbert Prize, so I need tools to help simulate her thought process.

And my thought process relies on visualizations, so I thought, hey, why don’t I build on whatever Stephen Wolfram did in his groundbreaking tome A New Kind of Science, which is filled to its horse-choking brim with handsome diagrams of cellular automata, their rules, and the pictures generated by their evolution? After all, it only took him something like ten years to write the book … how hard could it be?

Deconstructing the Code from A New Kind of Science, Chapter 2

Fortunately Stephen Wolfram provides at least some of the code that he used for creating the diagrams in A New Kind of Science. He’s got the code available for download on the book’s website, wolframscience.com, but a large subset is in the extensive endnotes for his book (which, densely printed and almost 350 pages long, could probably constitute a book in their own right). I’m going to reproduce that code here, as I assume it’s short enough to fall under fair use, and for the half-dozen functions we’ve got here any attempt to reverse-engineer it would end up just recreating essentially the same functions with slightly different names.

Cellular automata are systems that take patterns and evolve them according to simple rules. The most basic cellular automata operate on lists of bits – strings of cells which can be “on” or “off” or alternately “live” or “dead,” “true” and “false,” or just “1” and “0” – and it’s easiest to show off how they behave if you start with a long string of cells which are “off” with the very center cell being “on,” so you can easily see how a single live cell evolves. And Wolfram’s first function gives us just that, a list filled with dead cells represented by 0 with a live cell represented by 1 in its very center:

In[1]:= CenterList[n_Integer] := ReplacePart[Table[0, {n}], 1, Ceiling[n/2]]


In[2]:= CenterList[10]
Out[2]= {0, 0, 0, 0, 1, 0, 0, 0, 0, 0}


One could imagine a cellular automaton which updated each cell based only on its own contents, but that would be really boring, as each cell would be effectively independent. So Wolfram looks at what he calls “elementary automata,” which update each cell based on its neighbors. Counting the cell itself, that’s a row of three cells, and there are eight possible combinations of live and dead cells in a row of three – but only two possible values that can be set for each new cell, live or dead. Wolfram had a brain flash: list the eight possible combinations in the same order every time, and all you need to record are the eight output values of “live” or “dead” – or 1’s and 0’s – and since a list of 1’s and 0’s is just a binary number, that enabled Wolfram to represent each elementary automaton rule as a number:

In[3]:= ElementaryRule[num_Integer] := IntegerDigits[num, 2, 8]

In[4]:= ElementaryRule[30]
Out[4]= {0, 0, 0, 1, 1, 1, 1, 0}


Once you have that number, building code to apply the rule is easy. The input data is already a string of 1’s and 0’s, so Wolfram’s rule for updating a list of cells basically involves shifting (“rotating”) the list left and right, adding up the values of each cell’s three neighbors according to base 2 notation, and then looking up the result in the rule. Wolfram created Mathematica in part to help him research cellular automata, so the code to do this is deceptively simple…

In[5]:= CAStep[rule_List, a_List] :=
rule[[8 - (RotateLeft[a] + 2 (a + 2 RotateRight[a]))]]


… a “RotateLeft” and a “RotateRight” with some addition and multiplication to get the base 2 index into the rule. The code to apply this again and again to a list, building up the history of a cellular automaton over time, is also simple:

In[6]:= CAEvolveList[rule_, init_List, t_Integer] :=
NestList[CAStep[rule, #] &, init, t]


Now we’re ready to create the graphics for the evolution of Wolfram’s “rule 30,” the very simple rule which shows highly complex and irregular behavior – a discovery which Wolfram calls “the single most surprising scientific discovery [he has] ever made.” Wow. Let’s spin it up and see what we get!

In[7]:= CAGraphics[history_List] :=
Graphics[Raster[1 - Reverse[history]], AspectRatio -> Automatic]


In[8]:= Show[CAGraphics[CAEvolveList[ElementaryRule[30], CenterList[103], 50]]]
Out[8]=

rule-30-evolution.png

Uh-oh. The “Raster” code that Wolfram provides creates the large images of cellular automata, not the sexy graphics that show the detailed evolution of the rules. And reading between the lines of Wolfram’s endnotes, he started his work in FrameMaker before Mathematica was ready to be his full publishing platform, with a complex build process producing the output – so there’s no guarantee that clean, simple Mathematica code even exists for some of those early diagrams.

Guess we’ll have to create our own.

Visualizing Cellular Automata in the Small

The cellular automata diagrams that Wolfram uses have boxes with thin lines, rather than just a raster image with 1’s and 0’s represented by borderless boxes. They’re particularly appealing because the lines are white between black boxes and black between white boxes, which makes the structures very easy to see. After some digging, I found that, naturally, a Mathematica function to create those box diagrams does exist, and it’s called ArrayPlot, with the Mesh option set to True:

In[9]:= ArrayPlot[Table[Mod[i + j, 2], {i, 0, 3}, {j, 0, 3}], Mesh -> True]
Out[9]=

checkerboard.png

While we could just use ArrayPlot, it’s important when developing software to encapsulate our knowledge as much as possible, so we’ll create a function CAMeshGraphics (following the way Wolfram named his functions) that encapsulates the knowledge of turning the Mesh option to True. If later we decide there’s a better representation, we can just update CAMeshGraphics, rather than hunting down every use of ArrayPlot. Here’s the function, and an example of what it gives us:

In[10]:= CAMeshGraphics[matrix_List] :=
ArrayPlot[matrix, Mesh -> True, ImageSize -> Large]

In[11]:= CAMeshGraphics[{CenterList[10], CenterList[10]}]
Out[11]=

lines-of-boxes.png

Now, Wolfram has these great diagrams to help visualize cellular automata rules, which show the neighbors up top and the output value at bottom, with a space between them. GraphicsGrid does what we want here, except that by its nature it resizes all the graphics to fill each available box. I’m sure there’s a clever way to fix this, but I don’t know Mathematica well enough to find it, so I’m going to go back on what I just said earlier, break out the options on ArrayPlot, and tell the boxes to be the size I want:

In[20]:= CATransitionGraphics[rule_List] :=
GraphicsGrid[
Transpose[{Map[
   ArrayPlot[{#}, Mesh -> True, ImageSize -> {20 Length[#], 20}] &, rule]}]]


That works reasonably well; here’s an example rule, where three live neighbors in a row kill the center cell:

In[21]:= CATransitionGraphics[{{1, 1, 1}, {0}}]
Out[21]=

Screenshot 2016-01-03 14.19.21.png  

Now we need the pattern of digits that Wolfram uses to represent his neighbor patterns. Looking at the diagrams, and after some digging in the code, it seems these digits are simply listed in reverse counting order – that is, for 3 cells, we count down from 2^3 - 1 to 0, represented as binary digits.

In[22]:= CANeighborPattern[num_Integer] :=
Table[IntegerDigits[i, 2, num], {i, 2^num - 1, 0, -1}]

In[23]:= CANeighborPattern[3]
Out[23]= {{1, 1, 1}, {1, 1, 0}, {1, 0, 1}, {1, 0, 0}, {0, 1, 1}, {0, 1, 0}, {0, 0, 1}, {0, 0, 0}}


Stay with me – that only gets us the first row of each transition diagram; to get the next row, we need to apply the rule to that pattern and take the center cell:

In[24]:= CARuleCenterElement[rule_List, pattern_List] :=
CAStep[rule, pattern][[Ceiling[Length[pattern]/2]]]
(* the center cell of an odd-length pattern sits at Ceiling[n/2] – the same position CenterList fills *)


In[25]:= CARuleCenterElement[ElementaryRule[30], {0, 1, 0}]
Out[25]= 1


With all this, we can now generate the pattern of 1’s and 0’s that represents the transitions for a single rule:

In[26]:= CARulePattern[rule_List] :=
Map[{#, {CARuleCenterElement[rule, #]}} &, CANeighborPattern[3]]

In[27]:= CARulePattern[ElementaryRule[30]]
Out[27]= {{{1, 1, 1}, {0}}, {{1, 1, 0}, {0}}, {{1, 0, 1}, {0}}, {{1, 0, 0}, {1}},
   {{0, 1, 1}, {1}}, {{0, 1, 0}, {1}}, {{0, 0, 1}, {1}}, {{0, 0, 0}, {0}}}


Now we can turn it into graphics, putting it into another GraphicsGrid, this time with a Frame.

In[28]:= CARuleGraphics[rule_List] :=
GraphicsGrid[{Map[CATransitionGraphics[#] &, CARulePattern[rule]]},
Frame -> All]

In[29]:= CARuleGraphics[ElementaryRule[30]]
Out[29]=

Screenshot 2016-01-03 14.13.52.png

At last! We’ve got the beautiful transition diagrams that Wolfram has in his book. Now we want to apply the rule to a row with a single live cell:

In[30]:= CAMeshGraphics[{CenterList[43]}]
Out[30]=

Screenshot 2016-01-03 14.13.59.png

What does that look like? Well, we once again take our CAEvolveList function from before, but rather than formatting its output with Raster, we format it with our CAMeshGraphics:

In[31]:= CAMeshGraphics[CAEvolveList[ElementaryRule[30], CenterList[43], 20]]
Out[31]=

Screenshot 2016-01-03 14.14.26.png

And now we’ve got all the parts of the graphics which appear in the initial diagram of this page. To take it a bit further, let’s write a single function to put all the graphics together, and try it out on rule 110, the rule which Wolfram discovered could simulate any possible program, making it effectively a universal computer:

In[22]:= CAApplicationGraphics[rule_Integer, size_Integer] := Column[
{CAMeshGraphics[{CenterList[size]}],
   CARuleGraphics[ElementaryRule[rule]],
   CAMeshGraphics[
CAEvolveList[ElementaryRule[rule], CenterList[size],
   Floor[size/2] - 1]]},
Center]

In[23]:= CAApplicationGraphics[110, 43]
Out[23]=

Screenshot 2016-01-03 14.14.47.png

It doesn’t come out quite the way it did in Photoshop, but we’re getting close. Further learning of the rules of Mathematica graphics will probably help me, but that’s neither here nor there. We’ve got a set of tools for displaying diagrams, which we can craft into what we need.

Which happens to be a non-standard number system unfolding itself into hyperbolic space, God help me.

Wish me luck.

-the Centaur

P.S. While I’m going to do a standard blogpost on this, I’m also going to try creating a Mathematica Computable Document Format (.cdf) for your perusal. Wish me luck again – it’s my first one of these things.

P.P.S. I think it’s worthwhile to point out that while the tools I just built help visualize the application of a rule in the small …

In[24]:= CAApplicationGraphics[105, 53]
Out[24]=

Screenshot 2016-01-03 14.14.58.png

… the tools Wolfram built help visualize rules in the very, very large:

In[25]:= Show[CAGraphics[CAEvolveList[ElementaryRule[105], CenterList[10003], 5000]]]

Out[25]=

rule-105-a-lot.png

That’s 10,000 times bigger – 100 times bigger in each direction – and Mathematica executes and displays it flawlessly.

LIQUID FIRE and TWELVE HOURS LATER

Liquid Fire - 600x900x300.jpg

I think I’ll be posting this everywhere for a while … LIQUID FIRE, my third novel, is now available for preorder on Amazon. I talk a bit more about this on the Dakota Frost blog, but after a lot of work with beta readers, rounds of editing, and my editor, I’m very proud of this book, which takes Dakota out of her comfort zone in Atlanta and brings her to the San Francisco Bay, where she encounters romance, danger, magic, science, art, mathematics, vampires, werewolves, and the fae. It comes out May 22, but you can preorder it now on Amazon! Go get it! You’ll have a blast.

And, almost at the same time, I found out this is coming out on May 22 as well…

Twelve Hours Later.png

TWELVE HOURS LATER is also available for preorder on Amazon Kindle and CreateSpace. Put together by the Treehouse Writers, TWELVE HOURS LATER is a collection of 24 steampunk stories, one for every hour in the day – many of them in linked pairs, half a day apart … hence “Twelve Hours Later”. My two stories in the anthology, “The Hour of the Wolf” and “The Time of Ghosts”, feature Jeremiah Willstone, the protagonist of “Steampunk Fairy Chick” in the UnCONventional anthology … and also the protagonist of the forthcoming novel THE CLOCKWORK TIME MACHINE from Bell Bridge Books. (It’s also set in the same universe as “The Doorway to Extra Time” from the anthology of the almost identical name).

And, believe it or not, I may have something else coming out soon … stay tuned. 🙂

-the Centaur

Hustle and Bustle at the Library

shattered-small.png

I’ve felt quite harried over the past few weeks … and talking with another author, I realized why.

In April, I finally finished my part of Dakota Frost #3, LIQUID FIRE – sending comments to the publisher Bell Bridge Books on the galley proofs, reviewing cover ideas, contributing to the back cover copy, and writing blogposts. As part of Camp Nanowrimo, I also finished a rough, rough draft of Dakota Frost #4, SPECTRAL IRON. And at the same time, I finished a short story, “Vogler’s Garden”, which I’ve been sending out to quite a few places.

In May, we expect LIQUID FIRE to be out, I have two stories in the anthology TWELVE HOURS LATER, and I have three guest blog posts coming out – one of them, “Science is Story: Science, Magic, and the Thin Line Between” on the National Novel Writing Month blog, has gotten some traction. I’ll also be speaking at the Clockwork Alchemy conference. Oh, and I’m about to start responding to Bell Bridge’s feedback on my fourth novel, THE CLOCKWORK TIME MACHINE.

Holy cow. No wonder I feel so harried! But it’s all for a good cause.

-the Centaur

Pictured: a friend at work shattered his monitor and inadvertently made art.

The Science of Airships, Redux

science-of-airships.png

Once again, I will be giving a talk on The Science of Airships at Clockwork Alchemy this year, this time at 11AM on Monday. I had to suffer through all the airship research for THE CLOCKWORK TIME MACHINE, so you should too! Seriously, I hope the panel is fun and informative – it was well received at previous presentations. From the online description:

Steampunk isn’t just brown, boots and buttons: our adventurers need glorious flying machines! This panel will unpack the science of lift, the innovations of Count Zeppelin, how airships went down in flames, and how we might still have cruise liners of the air if things had gone a bit differently. Anthony Francis is a science fiction author best known for his Dakota Frost urban fantasy series, beginning with the award winning FROST MOON. His forays into Steampunk include two stories and the forthcoming novel THE CLOCKWORK TIME MACHINE.

Yes, yes, I know THE CLOCKWORK TIME MACHINE is long in coming, but at least it’s closer now. I’ll also be appearing on two panels: “Facts with Your Fiction,” moderated by Sharon Cathcart at 5pm on Saturday, and “Multi-cultural Influences in Steampunk,” moderated by Madeline Holly at 5pm on Sunday. With that, BayCon, and Fanime, it looks to be a busy weekend.

-the Centaur

Context-Directed Spreading Activation

netsphere.png

Let me be completely up front about my motivation for writing this post: recently, I came across a paper which was similar to the work in my PhD thesis, but applied to a different area. The paper didn’t cite my work – in fact, its survey of related work in the area seemed to indicate that no prior work along the lines of mine existed – and when I alerted the authors to the omission, they informed me they’d cited all relevant work, and that my obscure dissertation probably wasn’t relevant. Clearly, I haven’t done a good enough job articulating or promoting my work, so I thought I should take a moment to explain what I did for my doctoral dissertation.

My research improved computer memory by modeling it after human memory. People remember different things in different contexts based on how different pieces of information are connected to one another. Even a word as simple as ‘ford’ can call different things to mind depending on whether you’ve bought a popular brand of car, watched the credits of an Indiana Jones movie, or tried to cross the shallow part of a river. Based on that human phenomenon, I built a memory retrieval engine that used context to remember relevant things more quickly.

My approach was based on a technique I called context-directed spreading activation, which I argued was an advance over so-called “traditional” spreading activation. Spreading activation is a technique for finding information in a kind of computer memory called semantic networks, which model relationships in the human mind. A semantic network represents knowledge as a graph, with concepts as nodes and relationships between concepts as links, and traditional spreading activation finds information in that network by starting with a set of “query” nodes and propagating “activation” out on the links, like current in an electric circuit. The current that hits each node in the network determines how highly ranked the node is for a query. (If you understand circuits and spreading activation, and this description caused you to catch on fire, my apologies. I’ll be more precise in future blogposts. Roll with it.)
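
To make that concrete, here’s a minimal sketch of traditional spreading activation in Mathematica – a hypothetical four-concept network around “ford”, with made-up nodes and weights, not anyone’s production code:

nodes = {"ford", "car", "river", "actor"};
adjacency = {{0, 1, 1, 1},
             {1, 0, 0, 0},
             {1, 0, 0, 0},
             {1, 0, 0, 0}};                    (* untyped links between concepts *)
SpreadStep[a_List] := 0.5 a + 0.5 adjacency.a  (* keep half, spread half along every link *)
Nest[SpreadStep, {1., 0., 0., 0.}, 3]          (* activate "ford", propagate; the result ranks every node *)

Note that every neighbor of “ford” gets activation whether or not it’s relevant – which is exactly the problem that shows up at scale.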

The problem is, as semantic networks grow large, there’s a heck of a lot of activation to propagate. My approach, context-directed spreading activation (CDSA), cuts this cost dramatically by making activation propagate over fewer types of links. In CDSA, each link has a type, each type has a node, and activation propagates only over links whose type nodes are active (to a very rough first approximation, although in my evaluations I tested about every variant of this under the sun). Propagating over active links isn’t just cheaper than spreading activation over every link; it’s smarter: the same “query” nodes can activate different parts of the network, depending on which “context” nodes are active. So, if you design your network right, Harrison Ford is never going to occur to you if you’ve been thinking about cars.
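
Continuing the sketch above – again with hypothetical names and numbers, not the dissertation code – the context-directed variant gives every link a type and lets activation cross only the links whose type nodes are active:

linkType = {{0, 1, 2, 3},
            {1, 0, 0, 0},
            {2, 0, 0, 0},
            {3, 0, 0, 0}};    (* 0 = no link; types 1, 2, 3 = brand-of, crossing-of, name-of *)
activeTypes = {0, 1, 0};      (* context: only type 2, the river sense of "ford", is active *)
gate = Map[If[# > 0 && activeTypes[[#]] == 1, 1, 0] &, linkType, {2}];
CDSAStep[a_List] := 0.5 a + 0.5 (gate adjacency).a  (* activation crosses active-typed links only *)
Nest[CDSAStep, {1., 0., 0., 0.}, 3]                 (* now "ford" activates only "river" *)

Swap in a different activeTypes vector and the same query activates “car” or “actor” instead – same network, different context, different memories.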

I was a typical graduate student, and I thought my approach was so good, it was good for everything—so I built an entire cognitive architecture around the idea. (Cognitive architectures are general reasoning systems, normally built by teams of researchers, and building even a small one is part of the reason my PhD thesis took ten years, but I digress.) My cognitive architecture was called context sensitive asynchronous memory (CSAM), and it automatically collected context while the system was thinking, fed it into the context-directed spreading activation system, and incorporated dynamically remembered information into its ongoing thought processes using patch programs called integration mechanisms.

CSAM wasn’t just an idea: I built it out into a computer program called Nicole, and even published a workshop paper on it in 1997 called “Can Your Architecture Do This? A Proposal for Impasse-Driven Asynchronous Memory Retrieval and Integration.” But to get a PhD in artificial intelligence, you need more than a clever idea you’ve written up in a paper or implemented in a computer program. You need to use the program you’ve written to answer a scientific question. You need to show that your system works in the domains you claim it works in, that it can solve the problems that you claim it can solve, and that it’s better than other approaches, if other approaches exist.

So I tested Nicole on computer planning systems and showed that integration mechanisms worked. Then I and a colleague tested Nicole on a natural language understanding program and showed that memory retrieval worked. But the most important part was showing that CDSA, the heart of the theory, didn’t just work, but was better than the alternatives. I did a detailed analysis of the theory of CDSA and showed it was better than traditional spreading activation in several ways—but that rightly wasn’t enough for my committee. They wanted an example. There were alternatives to my approach, and they wanted to see that my approach was better than the alternatives for real problems.

So I turned Nicole into an information retrieval system called IRIA—the Information Retrieval Intelligent Assistant. By this time, the dot-com boom was in full swing, and my thesis advisor invited me and another graduate student to join him starting a company called Enkia. We tried many different concepts to start with, but the further we went, the more IRIA seemed to have legs. We showed she could recommend useful information to people while browsing the Internet. We showed several people could use her at the same time and get useful feedback. And critically, we showed that by using context-directed spreading activation, IRIA could retrieve better information faster than traditional spreading activation approaches.

The first publication on IRIA came out in 2000, shortly before I finished my PhD, and at the company things were going gangbusters. We found customers for the idea; my more experienced colleagues and I turned the IRIA program from a typical graduate student mess into a more disciplined and efficient system called the Enkion, a process we documented in a paper in early 2001. We even launched a search site called Search Orbit—and then the whole dot-com disaster happened, and the company essentially imploded. Actually, that’s not fair: the company continued for many years after I left—but I essentially imploded, and if you want to know more about that, read “Approaching 33, as Seen from 44.”

Regardless, the upshot is that I didn’t follow up on my thesis work after I finished my PhD. That happens to a lot of PhD students, but for me in particular I felt that it would have been betraying the trust of my colleagues to go publish a sequence of papers on the innards of a program they were trying to use to run their business. Eventually, they moved on to new software, but by that time, so had I.

Fast forward to 2012, and while researching an unrelated problem for The Search Engine That Starts With A G, I came across the 2006 paper “Recommending in context: A spreading activation model that is independent of the type of recommender system and its contents” by Alexander Kovács and Haruki Ueno. At Enkia, we’d thought of doing recommender systems on top of the Enkion, and had even started to build a prototype for Emory University, but the idea never took off and we never generated any publications, so at first, I was pleased to see someone doing spreading activation work in recommender systems.

Then I was unnerved to see that this approach also involved spreading activation, over a typed network, with nodes representing the types of links, and activation in the type nodes changing the way activation propagated over the links. Then I was unsettled to see that my work, which is based on a similar idea and predates their publication by almost a decade, was not cited in the paper. Then I was actually disturbed when I read: “The details of spreading activation networks in the literature differ considerably. However, they’re all equal with respect to how they handle context … context nodes do not modulate links at all…” If you were to take that at face value, the work that I did over ten years of my life—work which produced four papers, a PhD thesis, and at one point helped employ thirty people—did not exist.

Now, I was also surprised by some spooky similarities between their system and mine—their system is built on a context-directed spreading activation model, mine is a context-directed spreading activation model, theirs is called CASAN, mine is embedded in a system called CSAM—but as far as I can see there’s NO evidence that their work was derivative of mine. As Chris Atkinson said to a friend of mine (paraphrased): “The great beam of intelligence is more like a shotgun: good ideas land on lots of people all over the world—not just on you.”

In fact, I’d argue that their work is a real advance for the field. Their model is similar, not identical, and their mathematical formalism uses more contemporary matrix algebra, making the relationship to related approaches like PageRank clearer (see Google’s PageRank and Beyond). Plus, they apparently got their approach to work on recommender systems, which we did not; IRIA did more straight-up recommendation of information in traditional information retrieval, a similar but not identical problem.

So Kovács and Ueno’s “Recommending in Context” paper is a great paper and you should read it if you’re into this kind of stuff. But, to set the record straight, and maybe to be a little bit petty, there are a number of spreading activation systems that do use context to modulate links in the network … most notably mine.

-the Centaur

Pictured: a tiny chunk of the WordNet online dictionary, which I’m using as a proxy for a semantic network. Data processing by me in Python, graph representation by the GraphViz suite’s dot program, and postprocessing by me in Adobe Photoshop.

Going Gonzo

IMG_20130126_140326.jpg

It would be hard to adequately describe the story I’m working on in the gaps between finishing up the anthology Doorways to Extra Time, but from the reading list above, you can fairly assume it’s going to be gonzo.

Of course, everything that has Jeremiah Willstone in it is a bit gonzo.

-the Centaur

Humans are Good Enough to Live

goodness.png

I’m a big fan of Ayn Rand and her philosophy of Objectivism. Even though there are many elements of her philosophy which are naive, or oversimplified, or just plain ignorant, the foundation of her thought is good: we live in exactly one shared world which has a definitive nature, and the good is defined by things which promote the life of human individuals.

It’s hard to overestimate the importance of this move, this Randian answer to the age-old question of how to get from “is” to “ought” – how to go from what we know to be true about the world to deciding what we should do. In Rand’s world, ethical judgments are judgments made by humans about human actions – so the ethical good must be things that promote human life.

This may seem like a trivial philosophical point, but there are many theoretically possible definitions of ethics, from the logically absurd “all actions taken on Tuesday are good” to the logically indefensible “things are good because some authority said so.” Rand’s formulation of ethics echoes Jesus’s claim that goodness is not found in the foods you eat, but in the actions you do.

But sometimes it seems like the world’s a very depressing place. Jesus taught that everyone is capable of evil. Rand herself thought that nothing is given to humans automatically, that they must choose their values – and that the average human, who never thinks about values, is pretty much a mess of contradictory assumptions, doing good only through luck.

But, I realized, Rand’s wrong about that – because her assumption that nothing is given to humans automatically is wrong. She was a philosopher, not a scientist, and she wasn’t aware of the great strides that have been made in understanding how we think – in part because some of those strides were made in technical fields near the very end of her life.

Rand rails against philosophers like Kant, who proposed, among many other things, that humans perceive reality unavoidably distorted by filters built into the human conceptual and perceptual apparatus. Rand admitted that human perception and cognition had a nature, but she believed humans could nonetheless perceive reality objectively. Well, in a sense, they’re both wrong.

Modern studies of bias in machine learning show that it’s impossible – mathematically impossible – to learn any abstract concept without some kind of bias. In brief, if you want to predict something you’ve never seen before, you have to take some stance towards the data you’ve seen already – a bias – but there is no logical way to pick a correct bias. Any one you pick may be wrong.
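
Here’s a tiny illustration of that point in Mathematica, with two made-up hypotheses: both agree on every example seen so far, yet they disagree on the very next case, and nothing in the data itself can choose between them – only a bias can:

seen = {1, 2, 3};
hypothesisA[n_] := n <= 10;                  (* bias: "the numbers stay small" *)
hypothesisB[n_] := OddQ[n] || n <= 3;        (* bias: "odd numbers, plus the early cases" *)
hypothesisA /@ seen === hypothesisB /@ seen  (* True: identical on everything seen *)
{hypothesisA[4], hypothesisB[4]}             (* {True, False}: they split on the unseen *)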

So, as Kant suggested, our human conceptual processes impose unavoidable biases on the kinds of concepts we learn, and, contrary to what Rand wanted, those biases may prove distorting. However, we are capable of virtual levels of processing, which means that even if our base reasoning is flawed, we can build a more formal system on top of it, like mathematics, that avoids those problems.

But, I realized, there’s an even stronger reason to believe that things aren’t as bad as Kant or Rand feared, a reason founded in Rand’s ideas of ethics. Even human communities that lack a formalized philosophy are nonetheless capable of building and maintaining systems that last for generations – which means the human default bias leads to concepts that are Randian goods.

In a way, this isn’t surprising. From an evolutionary perspective, if any creature inherited a set of bad biases, it would learn bad concepts, and be unable to reproduce. From a cognitive science perspective, the human mind is constantly attempting to understand the world and to cache the results as automatic responses – what Rand would call building a philosophy.

So, if we are descendants of creatures that survived, we must have a basic bias for learning that promotes our life, and if we live by being rational creatures constantly attempting to understand the world who persist in communities that have lasted for generations, we must have a basic bias towards a philosophy which is just good enough to prevent our destruction.

That’s not to say that the average human being, on their own, without self-examination, will develop a philosophy that Rand or Jesus would approve of. And it’s not to say that individual human beings aren’t capable of great evil – and that human communities aren’t capable of greater evil towards their members.

But it does mean that humans are good enough to live on this Earth.

Our continued existence shows that even though it seems like we live in a cold and cruel universe, the cards are stacked just enough in humanity’s favor for at least some people to thrive. It also shows that while humans are capable of great evil, the bias of humanity is stacked just enough in our favor for human existence to continue.

Rising above the average, of course, is up to you.

-the Centaur

For Sale: Garden Planet. Barely Used.

Stranded - print.jpg

It’s been on preorder for a while, but STRANDED, the anthology featuring stories by me and James Alan Gardner and headlined by Anne Bishop, is finally out in print and on Kindle at Amazon, and in print and on Nook at Barnes and Noble. Three authors, three stories – one theme: young adults making their own way in space. An excerpt from my story, “Stranded”:

“It’s called Halfway Point,” Serendipity said, “because they wanted to do what I want to do: set up a port between those two bubbles, which have grown so they almost touch. Shipping routes are still rerouted, but they won’t stay that way. Halfway Point’s even got a black hole—”

“Oh, wonderful,” Tianyu said. “Sounds like a big KEEP OFF sign to me.”

“Hush, love,” Serendipity said. “The orbit’s far enough that the inner planets are stable, but close enough to power heavy industry someday. In all the galaxy, Halfway Point is unique. I have no idea why it was overlooked, but I’m not about to let someone else step up and claim it.”

They stared at the little blue-green moon, that forgotten jewel, curling around the rainbow pastels of its mammoth mother planet.

“I looked up headstrong in the dictionary,” Tianyu said at last, curling up in a huff. “Your name was all over it: synonym, hyponym, see also, properly capitalized and everything.”

“Be a good sport,” Serendipity said, ruffling behind his ears. “Double-check my kit, would you?”

Ah, Serendipity. Best of luck on that new planet. You can check out more of Serendipity the Centaur at her Facebook page, or here, where I’ll be filling in details on “Stranded’s” sequel, “Conflicted,” as I get the story done. The current plan is to collect the first three novellas in the Serendipity story into a single novel titled MAROONED.

-the Centaur