Surfacing

An interpretation of the rocket equation.

Wow. It’s been a long time. Or perhaps not as long as I thought, but I’ve definitely not been able to post as much as I wanted over the last six months or so. But it’s been for good reasons: I’ve been working on a lot of writing projects. The Dakota Frost / Cinnamon Frost “Hexology,” a six-book series; the moment I finished those rough drafts, it seemed, I rolled into National Novel Writing Month and worked on JEREMIAH WILLSTONE AND THE MACHINERY OF THE APOCALYPSE. Meanwhile, at work, I’ve been snowed under following up on our PRM-RL paper.

Thor's Hammer space station.

But I’ve been having fun! The MACHINERY OF THE APOCALYPSE is (at least possibly) spaaaace steampunk, which has led me to learn all sorts of things about space travel and rockets and angular momentum which I somehow didn’t learn when I was writing pure hard science fiction. I’ve learned so much about creating artificial languages as part of the HEXOLOGY.

The Modanaqa Abugida.

So, hopefully I will have some time to start sharing this information again, assuming that no disasters befall me in the middle of the night.

Gabby in the emergency room.

Oh dag nabbit! (He’s going to be fine).

-the Centaur

PRM-RL Won a Best Paper Award at ICRA!

So, this happened! Our team’s paper on “PRM-RL” – a way to teach robots to navigate their worlds which combines human-designed algorithms that use roadmaps with deep-learned algorithms to control the robot itself – won a best paper award at the ICRA robotics conference!

I talked a little bit about how PRM-RL works in the post “Learning to Drive … by Learning Where You Can Drive,” so I won’t go over the whole spiel here. The basic idea is that we’ve gotten good at teaching robots to control themselves using a technique called deep reinforcement learning (the RL in PRM-RL) that trains them in simulation, but it’s hard to extend this approach to long-range navigation problems in the real world. We overcome this barrier by using a more traditional robotic approach, probabilistic roadmaps (the PRM in PRM-RL), which build maps of where the robot can drive using point-to-point connections. We combine these maps with the robot simulator and, boom, we have a map of where the robot thinks it can successfully drive.

We were cited not just for this technique, but for testing it extensively in simulation and on two different kinds of robots. I want to thank everyone on the team – especially Alexandra Faust for her background in PRMs, for taking point on the idea, and for doing all the quadrotor work with Lydia Tapia; Oscar Ramirez and Marek Fiser for their work on our reinforcement learning framework and simulator; Kenneth Oslund for his heroic last-minute push to collect the indoor robot navigation data; and our manager James for his guidance, contributions to the paper, and support of our navigation work.

Woohoo! Thanks again everyone!

-the Centaur

Why I’m Solving Puzzles Right Now

When I was a kid (well, a teenager) I’d read puzzle books for pure enjoyment. I’d gotten started with Martin Gardner’s mathematical recreation books, but the ones I really liked were Raymond Smullyan’s books of logic puzzles. I’d go to Wendy’s on my lunch break at Francis Produce, with a little notepad and a book, and chew my way through a few puzzles. I’ll admit I often skipped ahead if they got too hard, but I did my best most of the time.

I read more of these as an adult, moving back to the Martin Gardner books. But sometime, about twenty-five years ago (when I was in the thick of grad school) my reading needs completely overwhelmed my reading ability. I’d always carried huge stacks of books home from the library, never finishing all of them, frequently paying late fees, but there was one book in particular – The Emotions by Nico Frijda – which I finished but never followed up on.

Over the intervening years, I did finish books, but read most of them scattershot, picking up what I needed for my creative writing or scientific research. Eventually I started using the tiny little notetabs you see in some books to mark the stuff that I’d written, a “levels of processing” trick to ensure that I was mindfully reading what I wrote.

A few years ago, I admitted that wasn’t enough, and consciously began trying to read ahead of what I needed for work. I chewed through C++ manuals and planning books and was always rewarded a few months later when I’d already read what I needed to solve my problems. I began focusing on fewer books in depth, finishing more books than I had in years.

Even that wasn’t enough, and I began – at last – the re-reading project I’d hoped to do with The Emotions. Recently I did that with Dedekind’s Essays on the Theory of Numbers, and now I’m doing it with the Deep Learning book. But some of that math is frickin’ beyond where I am now, man. Maybe one day I’ll get it, but sometimes I’ve spent weeks tackling a problem I just couldn’t get.

Enter puzzles. As it turns out, it’s really useful for a scientist to also be a science fiction writer who writes stories about a teenaged mathematical genius! I’ve had to simulate Cinnamon Frost’s staggering intellect for the purpose of writing the Dakota Frost stories, but the further I go, the more I want her to be doing real math. How did I get into math? Puzzles!

So I gave her puzzles. And I decided to return to my old puzzle books, some of the ones I got later but never fully finished, and to give them the deep reading treatment. It’s going much slower than I like – I find myself falling victim to the “rule of threes” (you can do a third of what you want to do, often in three times as much time as you expect) – but then I noticed something interesting.

Some of Smullyan’s books in particular are thinly disguised math books. In some parts, they’re even the same math I have to tackle in my own work. But unlike the other books, these problems are designed to be solved, rather than a reflection of some chunk of reality which may be stubborn; and unlike the other books, these have solutions along with each problem.

So, I’ve been solving puzzles … with careful note of how I have been failing to solve puzzles. I’ve hinted at this before, but understanding how you, personally, usually fail is a powerful technique for debugging your own stuck points. I get sloppy, I drop terms from equations, I misunderstand conditions, I overcomplicate solutions, I grind against problems where I should ask for help, I rabbithole on analytical exploration, and I always underestimate the time it will take for me to make the most basic progress.

Know your weaknesses. Then you can work those weak mental muscles, or work around them to build complementary strengths – the way Richard Feynman would always check over an equation when he was done, looking for those places where he had flipped a sign.

Back to work!

-the Centaur

Pictured: my “stack” at a typical lunch. I’ll usually get to one out of three of the things I bring for myself to do. Never can predict which one though.

Don’t Fall Into Rabbit Holes

SO! There I was, trying to solve the mysteries of the universe, learn about deep learning, and teach myself enough puzzle logic to create credible puzzles for the Cinnamon Frost books, and I find myself debugging the fine details of a visualization system I’ve developed in Mathematica to analyze the distribution of problems in an odd middle chapter of Raymond Smullyan’s The Lady or the Tiger.

I meant well! Really I did. I was going to write a post about how finding a solution is just a little bit harder than you normally think, and how insight sometimes comes after letting things sit.

But the tools I was creating didn’t do what I wanted, so I went deeper and deeper down the rabbit hole trying to visualize them.

The short answer seems to be that there’s no “there” there and that further pursuit of this sub-problem will take me further and further away from the real problem: writing great puzzles!

I learned a lot – about numbers, about how things could combinatorially explode, about Ulam Spirals and how to code them algorithmically. I even learned something about how I, particularly, fail in these cases.

But it didn’t provide the insights I wanted. Feynman warned about this: he called it “the computer disease”, worrying about the formatting of the printout so much you forget about the answer you’re trying to produce, and it can strike anyone in my line of work.

Back to that work.

-the Centaur

Learning to Drive … by Learning Where You Can Drive

I often say “I teach robots to learn,” but what does that mean, exactly? Well, now that one of the projects I’ve worked on has been announced – and I mean not just on arXiv, the public-access scientific repository where all the hottest reinforcement learning papers are shared, but actually accepted into the ICRA 2018 conference – I can tell you all about it!

When I’m not roaming the corridors hammering infrastructure bugs, I’m trying to teach robots to roam those corridors – a problem we call robot navigation. Our team’s latest idea combines “traditional planning,” where the robot tries to navigate based on an explicit model of its surroundings, with “reinforcement learning,” where the robot learns from feedback on its performance.

For those not in the know, “traditional” robotic planners use structures like graphs to plan routes, much in the same way that a GPS uses a roadmap. One of the more popular methods for long-range planning is probabilistic roadmaps, which build a long-range graph by picking random points and attempting to connect them by a simpler “local planner” that knows how to navigate shorter distances. It’s a little like how you learn to drive in your neighborhood – starting from landmarks you know, you navigate to nearby points, gradually building up a map in your head of what connects to what.
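
That roadmap-building idea can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function names (`build_prm`, `straight_line_planner`), the toy obstacle-free world, and the connection radius are all my own assumptions for the sketch.

```python
import random
import math

def build_prm(num_points, sample_free_point, local_planner, connect_radius):
    """Build a toy probabilistic roadmap: sample random free points, then
    try to connect each nearby pair with a local planner that only needs
    to handle short distances."""
    nodes = [sample_free_point() for _ in range(num_points)]
    edges = {i: [] for i in range(num_points)}
    for i in range(num_points):
        for j in range(i + 1, num_points):
            if math.dist(nodes[i], nodes[j]) <= connect_radius:
                # Only add the edge if the local planner can actually drive it.
                if local_planner(nodes[i], nodes[j]):
                    edges[i].append(j)
                    edges[j].append(i)
    return nodes, edges

# Hypothetical toy world: an empty 10x10 room, so every short hop is drivable.
def sample_free_point():
    return (random.uniform(0, 10), random.uniform(0, 10))

def straight_line_planner(a, b):
    return True  # no obstacles in this toy world

nodes, edges = build_prm(50, sample_free_point, straight_line_planner, connect_radius=3.0)
```

A real system would replace `straight_line_planner` with collision checking against a map, and then run a graph search (e.g. Dijkstra) over `edges` to plan long routes.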

But for that to work, you have to know how to drive, and that’s where the local planner comes in. Building a local planner is simple in theory – you can write one for a toy world in a few dozen lines of code – but difficult in practice, and making one that works on a real robot is quite the challenge. These software systems are called “navigation stacks” and can contain dozens of components – and in my experience they’re hard to get working and even when you do, they’re often brittle, requiring many engineer-months to transfer to new domains or even just to new buildings.

People are much more flexible, learning from their mistakes, and the science of making robots learn from their mistakes is reinforcement learning, in which an agent learns a policy for choosing actions by simply trying them, favoring actions that lead to success and suppressing ones that lead to failure. Our team built a deep reinforcement learning approach to local planning, using a state-of-the-art algorithm called DDPG (Deep Deterministic Policy Gradients), pioneered by DeepMind, to learn a navigation system that could successfully travel several meters in office-like environments.

But there’s a further wrinkle: the so-called “reality gap.” By necessity, the local planner used by a probabilistic roadmap is simulated – attempting to connect points on a map. That simulated local planner isn’t identical to the real-world navigation stack running on the robot, so sometimes the robot thinks it can go somewhere on a map which it can’t navigate safely in the real world. This can have disastrous consequences – causing robots to tumble down stairs or, when people follow their GPSes too closely without looking where they’re going, causing cars to drive off the end of a bridge.

Our approach, PRM-RL, directly combats the reality gap by combining probabilistic roadmaps with deep reinforcement learning. By necessity, reinforcement learning navigation systems are trained in simulation and tested in the real world. PRM-RL uses a deep reinforcement learning system as both the probabilistic roadmap’s local planner and the robot’s navigation system. Because links are added to the roadmap only if the reinforcement learning local controller can traverse them, the agent has a better chance of attempting to execute its plans in the real world.
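
The key twist – adding a roadmap edge only when the learned controller can actually traverse it – can be sketched as follows. This is an illustrative Python sketch, not the paper's code: the stand-in policy, the trial count, and the success threshold are assumptions of mine, not the paper's actual settings.

```python
import random

def rl_policy_can_traverse(start, goal):
    """Stand-in for rolling out the learned RL local planner in simulation.
    In the real system this would run the trained policy in the robot
    simulator; here it's a hypothetical stochastic success/failure."""
    return random.random() < 0.7

def add_edge_if_traversable(edges, a, b, trials=5, required_successes=4):
    """Add edge (a, b) to the roadmap only if the *same* RL controller that
    will drive the real robot succeeds often enough in simulation.
    The trial counts here are illustrative, not the paper's exact values."""
    successes = sum(rl_policy_can_traverse(a, b) for _ in range(trials))
    if successes >= required_successes:
        edges.setdefault(a, []).append(b)
        edges.setdefault(b, []).append(a)
        return True
    return False
```

Because the roadmap's connectivity is filtered through the very controller that executes the plan, paths found in the graph are ones the robot has already demonstrated (in simulation) that it can drive.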

In simulation, our agent could traverse hundreds of meters using the PRM-RL approach, doing much better than a “straight-line” local planner which was our default alternative. While I didn’t happen to have in my back pocket a hundred-meter-wide building instrumented with a mocap rig for our experiments, we were able to test a real robot on a smaller rig and showed that it worked well (no pictures, but you can see the map and the actual trajectories below; while the robot’s behavior wasn’t as good as we hoped, we traced that to a networking issue that was adding a delay to commands sent to the robot, not to our code itself; we’ll fix this in a subsequent round).

This work includes both our group working on office robot navigation – including Alexandra Faust, Oscar Ramirez, Marek Fiser, Kenneth Oslund, me, and James Davidson – and Alexandra’s collaborator Lydia Tapia, with whom she worked on the aerial navigation also reported in the paper. Until the ICRA version comes out, you can find the preliminary version on arXiv:

https://arxiv.org/abs/1710.03937
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning

We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents. The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot from which collision-free trajectories feasible for the RL agents can be identified. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use the Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. These evaluations included both simulated environments and on-robot tests. Our results show improvement in navigation task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 meters long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 meters without violating the task constraints in an environment 63 million times larger than used in training.

 

So, when I say “I teach robots to learn” … that’s what I do.

-the Centaur

What is Artificial Intelligence?


Simply put, “artificial intelligence” is people trying to make things do things that we’d call smart if done by people.

So what’s the big deal about that?

Well, as it turns out, a lot of people get quite wound up over the definition of “artificial intelligence.” Sometimes that’s because they’re invested in a prescientific notion that machines can’t be intelligent and want to define the term in a way that writes the field off before it gets started; sometimes it’s because they’re invested to an unscientific degree in their particular theory of intelligence and want to define it in a way that constrains the field to look at only the things they care about; and sometimes it’s because they’re not actually scientific at all and want to restrict the field to the practical problems of particular interest to them.

No, I’m not bitter about having to wade through a dozen bad definitions of artificial intelligence as part of a survey. Why do you ask?

Wishful Thinking Won’t Land a Man on the Moon

203_co2-graph-021116.jpeg

Wishful thinking won’t land a man on the moon, but it might get us all killed – fortunately, though, we have people who know how to nail a good landing.

All we have to do now is preserve the fruits of their labors.

Now that a climate denier is barreling towards the presidency, other climate deniers are coming out of the woodwork, but fortunately, NASA has a great site telling the story of climate change. For those who haven’t been keeping score at home, the too-simple story is that humans have pumped vast amounts of carbon dioxide into the atmosphere in the past few decades, amounts that in the geological record resulted in disastrous temperature changes – and it’s really convenient for a lot of people to deny that.

Now, don’t worry: NASA results are in the public record, so even though Trump’s team has threatened to blind NASA’s earth sciences program and looks poised to conduct a witch hunt of climate change workers in the Department of Energy, even though climate deniers are wringing their hands in glee at the thought of a politicized EPA attacking climate science, scientists are working to save this climate data. If you want to get involved, check out climatemirror.org.

Now, I said it’s a too-simple story, and there are a lot of good references on climate change, like Henson’s The Rough Guide to Climate Change. But, technically, that could be considered a polemic, and if you want to really dig deep, you need to go for a textbook instead, one presenting a broad overview of the science without pushing an agenda. For example, Understanding Weather and Climate has a great chapter (Chapter 16 in the 4th edition) that breaks down some of the science behind global climate change (human and not) and why anthropogenic climate change is both very tricky to study – and still very worrisome.

And because I am a scientist, and I am not afraid to consider warranted arguments on both sides of any scientific question, I also want to call out Human Impacts on Weather and Climate 2/e by Cotton and Pielke, which, in Chapter 8 and the Epilogue, takes a more skeptical view of our predictive power. In their view, well-argued in my opinion, current climate models are sensitivity studies, not forecasts; they merely establish the vulnerability of our systems to forcing factors like excess carbon, and don’t take into account areas of natural variability which might seriously alter the outcomes. And, yes, they are worried about climate groupthink.

Yes, they’re climate skeptics. But no-one is burning them at the stake. No-one is shunning them at conferences. People like me who believe in climate change read their papers with interest (especially Pielke’s work, which in some ways makes CO2 less of an issue and in some ways makes other human impacts seem worse). Still, Cotton and Pielke think the right approach is “sustained, stable national funding at a high level” and decry the politicization of science in either direction.

Still, do worry. Earth’s climate looks intransitive – it can get shoved from one regime to another, like the rapid-cooling Heinrich events and rapid-warming Dansgaard Oeschger events in the geological record, possibly triggered by large-scale ice sheet breakdowns and ocean circulation changes. Yes, global warming can cause global cooling by shutting down the existing pattern of global ocean circulation – and we’re pumping enough carbon dioxide into the atmosphere to simulate past triggers for such events.

Do you see why people who study climate change in enough depth to see where the science is really not settled end up walking away more unsettled about the future of our planet, not less? And why we stand up and say NO when someone else comes forward saying the “science is not settled” while acting like the science has already been settled in their favor?

larsenc_msr_2016235_parallax.jpg

“Have fun warming the planet!” Just hope it doesn’t inundate Florida. I’d love to tell you that the projected 1-meter sea level rise discussed in the Florida resource isn’t as bad as the Geology.com map’s default 6-meter projection, but unfortunately, sea level seems to be rising in Florida faster than the IPCC projections, and if the science isn’t really settled, we could have a sea level rise of … jeez. After reviewing some of the research I don’t even want to tell you. The “good” news is, hey, the seas might fall too.

“Have fun rolling the dice!”

-the Centaur

“Sibling Rivalry” returning to print

sibling-rivalry-cover-small.png

Wow. After nearly 21 years, my first published short story, “Sibling Rivalry,” is returning to print. It began as an experiment to try out an idea I wanted to use for a longer novel, ALGORITHMIC MURDER, but I quickly found that I’d caught a live wire with “Sibling Rivalry,” which became my first sale, to The Leading Edge magazine, back in 1995.

“Sibling Rivalry” was born of frustrations I had as a graduate student in artificial intelligence (AI) watching shows like Star Trek in which Captain Kirk talks a computer to death. No-one talks anyone to death outside of a Hannibal Lecter movie or a bad comic book, much less in real life, and there’s no reason to believe feeding a paradox to an AI will make it explode.

But there are ways to beat an AI, depending on how it’s constructed – and the more you know about how it works, the more potential routes of attack there are. That doesn’t mean you’ll win, of course, but … if you want to know, you’ll have to wait for the story to come out.

“Sibling Rivalry” will be the second book in Thinking Ink Press’s Snapbook line, with another awesome cover by my wife Sandi Billingsley, interior design by Betsy Miller and comments by my friends Jim Davies and Kenny Moorman, the latter of whom uses “Sibling Rivalry” to teach AI in his college courses. Wow! I’m honored.

Our preview release will be at the Beyond the Fence launch party next week, with a full release to follow.

Watch this space, fellow adventurers!

-the Centaur

Visualizing Cellular Automata

cellular-automata-v1.png

SO, why’s an urban fantasy author digging into the guts of Mathematica trying to reverse-engineer how Stephen Wolfram drew the diagrams of cellular automata in his book A New Kind of Science? Well, one of my favorite characters to write about is the precocious teenage weretiger Cinnamon Frost, who at first glance was a dirty little street cat until she blossomed into a mathematical genius when watered with just the right amount of motherly love. My training as a writer was in hard science fiction, so even if I’m writing about implausible fictions like teenage weretigers, I want the things that are real – like the mathematics she develops – to be right. So I’m working on a new kind of math behind the discoveries of my little fictional genius, but I’m not the youngest winner of the Hilbert Prize, so I need tools to help simulate her thought process.

And my thought process relies on visualizations, so I thought, hey, why don’t I build on whatever Stephen Wolfram did in his groundbreaking tome A New Kind of Science, which is filled to its horse-choking brim with handsome diagrams of cellular automata, their rules, and the pictures generated by their evolution? After all, it only took him something like ten years to write the book … how hard could it be?

Deconstructing the Code from A New Kind of Science, Chapter 2

Fortunately Stephen Wolfram provides at least some of the code that he used for creating the diagrams in A New Kind of Science. He’s got the code available for download on the book’s website, wolframscience.com, but a large subset is in the extensive endnotes for his book (which, densely printed and almost 350 pages long, could probably constitute a book in their own right). I’m going to reproduce that code here, as I assume it’s short enough to fall under fair use, and for the half-dozen functions we’ve got here any attempt to reverse-engineer it would end up just recreating essentially the same functions with slightly different names.
Cellular automata are systems that take patterns and evolve them according to simple rules. The most basic cellular automata operate on lists of bits – strings of cells which can be “on” or “off” or alternately “live” or “dead,” “true” and “false,” or just “1” and “0” – and it’s easiest to show off how they behave if you start with a long string of cells which are “off” with the very center cell being “on,” so you can easily see how a single live cell evolves. And Wolfram’s first function gives us just that, a list filled with dead cells represented by 0 with a live cell represented by 1 in its very center:

In[1]:= CenterList[n_Integer] := ReplacePart[Table[0, {n}], 1, Ceiling[n/2]]


In[2]:= CenterList[10]
Out[2]= {0, 0, 0, 0, 1, 0, 0, 0, 0, 0}


One could imagine a cellular automaton which updated each cell based just on its own contents, but that would be really boring, as each cell would be effectively independent. So Wolfram looks at what he calls “elementary automata,” which update each cell based on its neighbors. Counting the cell itself, that’s a row of three cells, and there are eight possible combinations of live and dead cells in a row of three – and only two possible values, live or dead, that can be set for each new cell. Wolfram had a brain flash: list the eight possible combinations in the same order every time, and all you need is the list of eight output values of “live” or “dead” – or 1’s and 0’s – and since a list of 1’s and 0’s is just a binary number, that enabled Wolfram to represent each elementary rule as a number:

In[3]:= ElementaryRule[num_Integer] := IntegerDigits[num, 2, 8]

In[4]:= ElementaryRule[30]
Out[4]= {0, 0, 0, 1, 1, 1, 1, 0}


Once you have that number, building code to apply the rule is easy. The input data is already a string of 1’s and 0’s, so Wolfram’s rule for updating a list of cells basically involves shifting (“rotating”) the list left and right, adding up the values of these three neighbors according to base 2 notation, and then looking up the value in the rule. Wolfram created Mathematica in part to help him research cellular automata, so the code to do this is deceptively simple…

In[5]:= CAStep[rule_List, a_List] :=
rule[[8 - (RotateLeft[a] + 2 (a + 2 RotateRight[a]))]]


… a “RotateLeft” and a “RotateRight” with some addition and multiplication to get the base 2 index into the rule. The code to apply this again and again to a list to get the history of a cellular automaton over time is also simple:

In[6]:= CAEvolveList[rule_, init_List, t_Integer] :=
NestList[CAStep[rule, #] &, init, t]


Now we’re ready to create the graphics for the evolution of Wolfram’s “rule 30,” the very simple rule which shows highly complex and irregular behavior – a discovery which Wolfram calls “the single most surprising scientific discovery [he has] ever made.” Wow. Let’s take it for a spin and see what we get!

In[7]:= CAGraphics[history_List] :=
Graphics[Raster[1 - Reverse[history]], AspectRatio -> Automatic]


In[8]:= Show[CAGraphics[CAEvolveList[ElementaryRule[30], CenterList[103], 50]]]
Out[8]=

rule-30-evolution.png

Uh-oh. The “Raster” code that Wolfram provides is the code to create the large images of cellular automata, not the sexy graphics that show the detailed evolution of the rules. And reading between the lines of Wolfram’s endnotes, he started his work in FrameMaker before Mathematica was ready to be his full publishing platform, with a complex build process producing the output – so there’s no guarantee that clean, simple Mathematica code even exists for some of those early diagrams.

Guess we’ll have to create our own.

Visualizing Cellular Automata in the Small

The cellular automata diagrams that Wolfram uses have boxes with thin lines, rather than just a raster image with 1’s and 0’s represented by borderless boxes. They’re particularly appealing because the lines are white between black boxes and black between white boxes, which makes the structures very easy to see. After some digging, I found that, naturally, a Mathematica function to create those box diagrams does exist, and it’s called ArrayPlot, with the Mesh option set to True:

In[9]:= ArrayPlot[Table[Mod[i + j, 2], {i, 0, 3}, {j, 0, 3}], Mesh -> True]
Out[9]=

checkerboard.png

While we could just use ArrayPlot, it’s important when developing software to encapsulate our knowledge as much as possible, so we’ll create a function CAMeshGraphics (following the way Wolfram named his functions) that encapsulates the knowledge of turning the Mesh option to True. If later we decide there’s a better representation, we can just update CAMeshGraphics, rather than hunting down every use of ArrayPlot. That function gives us this:

In[10]:= CAMeshGraphics[matrix_List] :=
ArrayPlot[matrix, Mesh -> True, ImageSize -> Large]

In[11]:= CAMeshGraphics[{CenterList[10], CenterList[10]}]
Out[11]=

lines-of-boxes.png

Now, Wolfram has these great diagrams to help visualize cellular automata rules, which show the neighbors up top and the output value at bottom, with a space between them. GraphicsGrid does what we want here, except that, by its nature, it resizes all the graphics to fill each available box. I’m sure there’s a clever way to do this, but I don’t know Mathematica well enough to find it, so I’m going to go back on what I just said earlier, break out the options on ArrayPlot, and tell the boxes to be the size I want:

In[20]:= CATransitionGraphics[rule_List] :=
GraphicsGrid[
Transpose[{Map[
   ArrayPlot[{#}, Mesh -> True, ImageSize -> {20 Length[#], 20}] &, rule]}]]


That works reasonably well; here’s an example rule, where three live neighbors in a row kill the center cell:

In[21]:= CATransitionGraphics[{{1, 1, 1}, {0}}]
Out[21]=

Screenshot 2016-01-03 14.19.21.png  

Now we need the pattern of digits that Wolfram uses to represent his neighbor patterns. Looking at the diagrams and after some digging in the code, it seems like these digits are simply listed in reverse counting order – that is, for 3 cells, we count down from 2^3 – 1 to 0, represented as binary digits.

In[22]:= CANeighborPattern[num_Integer] :=
Table[IntegerDigits[i, 2, num], {i, 2^num - 1, 0, -1}]

In[23]:= CANeighborPattern[3]
Out[23]= {{1, 1, 1}, {1, 1, 0}, {1, 0, 1}, {1, 0, 0}, {0, 1, 1}, {0, 1, 0}, {0, 0,
1}, {0, 0, 0}}


Stay with me – that only gets us the first row of the CATransitionGraphics; to get the next row, we need to apply a rule to that pattern and take the center cell:

In[24]:= CARuleCenterElement[rule_List, pattern_List] :=
CAStep[rule, pattern][[Floor[Length[pattern]/2]]]


In[25]:= CARuleCenterElement[ElementaryRule[30], {0, 1, 0}]
Out[25]= 1


With all this, we can now generate the pattern of 1’s and 0’s that represent the transitions for a single rule:

In[26]:= CARulePattern[rule_List] :=
Map[{#, {CARuleCenterElement[rule, #]}} &, CANeighborPattern[3]]

In[27]:= CARulePattern[ElementaryRule[30]]
Out[27]= {{{1, 1, 1}, {0}}, {{1, 1, 0}, {1}}, {{1, 0, 1}, {0}}, {{1, 0, 0}, {1}}, {{0,
   1, 1}, {0}}, {{0, 1, 0}, {1}}, {{0, 0, 1}, {1}}, {{0, 0, 0}, {0}}}


Now we can turn it into graphics, putting it into another GraphicsGrid, this time with a Frame.

In[28]:= CARuleGraphics[rule_List] :=
GraphicsGrid[{Map[CATransitionGraphics[#] &, CARulePattern[rule]]},
Frame -> All]

In[29]:= CARuleGraphics[ElementaryRule[30]]
Out[29]=

Screenshot 2016-01-03 14.13.52.png

At last! We’ve got the beautiful transition diagrams that Wolfram has in his book. And we want to apply it to a row with a single cell:

In[30]:= CAMeshGraphics[{CenterList[43]}]
Out[30]=

Screenshot 2016-01-03 14.13.59.png

What does that look like? Well, we once again take our CAEvolveList function from before, but rather than formatting it with Raster, we format it with our CAMeshGraphics:

In[31]:= CAMeshGraphics[CAEvolveList[ElementaryRule[30], CenterList[43], 20]]
Out[31]=

Screenshot 2016-01-03 14.14.26.png

And now we’ve got all the parts of the graphics which appear in the initial diagram of this page. Just to work it out a bit further, let’s write a single function to put all the graphics together, and try it out on rule 110 – the rule which Wolfram discovered could simulate any possible program, making it effectively a universal computer:

In[22]:= CAApplicationGraphics[rule_Integer, size_Integer] := Column[
{CAMeshGraphics[{CenterList[size]}],
   CARuleGraphics[ElementaryRule[rule]],
   CAMeshGraphics[
CAEvolveList[ElementaryRule[rule], CenterList[size],
   Floor[size/2] - 1]]},
Center]

In[23]:= CAApplicationGraphics[110, 43]
Out[23]=

Screenshot 2016-01-03 14.14.47.png

It doesn’t come out quite the way it did in Photoshop, but we’re getting close. Further learning of the rules of Mathematica graphics will probably help me, but that’s neither here nor there. We’ve got a set of tools for displaying diagrams, which we can craft into what we need.

Which happens to be a non-standard number system unfolding itself into hyperbolic space, God help me.

Wish me luck.

-the Centaur

P.S. While I’m going to do a standard blogpost on this, I’m also going to try creating a Mathematica Computable Document Format (.cdf) for your perusal. Wish me luck again – it’s my first one of these things.

P.P.S. I think it’s worthwhile to point out that while the tools I just built help visualize the application of a rule in the small …

In[24]:= CAApplicationGraphics[105, 53]
Out[24]=

Screenshot 2016-01-03 14.14.58.png

… the tools Wolfram built help visualize rules in the very, very large:

In[25]:= Show[CAGraphics[CAEvolveList[ElementaryRule[105], CenterList[10003], 5000]]]

Out[25]=

rule-105-a-lot.png

That’s 10,000 times bigger – 100 times bigger in each direction – and Mathematica executes and displays it flawlessly.