
Posts tagged as “Intelligence”

Surfacing

centaur 0
An interpretation of the rocket equation.

Wow. It's been a long time. Or perhaps not as long as I thought, but I've definitely not been able to post as much as I wanted over the last six months or so. But it's been for good reasons: I've been working on a lot of writing projects. The Dakota Frost / Cinnamon Frost "Hexology", which was a six-book series; the moment I finished those rough drafts, it seemed, I rolled into National Novel Writing Month and worked on JEREMIAH WILLSTONE AND THE MACHINERY OF THE APOCALYPSE. Meanwhile, at work, I've been snowed under following up on our PRM-RL paper.

Thor's Hammer space station.

But I've been having fun! The MACHINERY OF THE APOCALYPSE is (at least possibly) spaaaace steampunk, which has led me to learn all sorts of things about space travel and rockets and angular momentum which I somehow didn't learn when I was writing pure hard science fiction. I've learned so much about creating artificial languages as part of the HEXOLOGY.

The Modanaqa Abugida.

So, hopefully I will have some time to start sharing this information again, assuming that no disasters befall me in the middle of the night.

Gabby in the emergency room.

Oh dag nabbit! (He's going to be fine).

-the Centaur

I<tab-complete> welcome our new robot overlords.

centaur 0
Hoisted from a recent email exchange with my friend Gordon Shippey:
Re: Whassap?
Gordon: Sounds like a plan. (That was an actual GMail suggested response. Grumble-grumble AI takeover.)
Anthony: I<tab-complete> welcome our new robot overlords.
I am constantly amazed by the new autocomplete. While, anecdotally, autocorrect spell-checking is getting worse and worse (I blame the nearly universal phenomenon of U-shaped development, in which a system trying to learn new generalizations gets worse before it gets better), I have written near-complete emails to friends and colleagues with Gmail's suggested responses, and when writing texts to my wife, it knows our shorthand!

One way of doing this back in the day was with Markov chain text models, in which we learn predictions of what patterns are likely to follow each other; so if I write "love you too boo boo" to my wife enough times, it can predict "boo boo" will follow "love you too" and provide it as a completion. More modern systems use recurrent neural networks to learn richer sets of features, with stateful information carried down the chain, enabling them to capture subtler relationships and get better results, as described in the great article "The Unreasonable Effectiveness of Recurrent Neural Networks". -the<tab-complete> Centaur
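To make the Markov chain idea concrete, here's a minimal sketch in Python: a toy trigram model trained on an invented corpus, nothing like the scale or sophistication of Gmail's actual system.

```python
from collections import defaultdict

def train_trigram_model(corpus_lines):
    """Count which word follows each two-word context across the corpus."""
    model = defaultdict(lambda: defaultdict(int))
    for line in corpus_lines:
        words = line.split()
        for i in range(len(words) - 2):
            context = (words[i], words[i + 1])
            model[context][words[i + 2]] += 1
    return model

def complete(model, context, max_words=5):
    """Greedily extend a two-word context with the most frequent follower."""
    words = list(context)
    for _ in range(max_words):
        followers = model.get(tuple(words[-2:]))
        if not followers:
            break
        words.append(max(followers, key=followers.get))
    return " ".join(words)

# An invented corpus standing in for a text-message history.
corpus = ["love you too boo boo"] * 3 + ["love you too"]
model = train_trigram_model(corpus)
print(complete(model, ("love", "you")))  # -> love you too boo boo
```

A real predictive-text system would smooth these counts and sample rather than always taking the argmax, but the core trick - predicting the next token from recent context frequencies - is exactly this.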

PRM-RL Won a Best Paper Award at ICRA!

centaur 2
So, this happened! Our team's paper on "PRM-RL" - a way to teach robots to navigate their worlds which combines human-designed algorithms that use roadmaps with deep-learned algorithms to control the robot itself - won a Best Paper Award at the ICRA robotics conference!

I talked a little bit about how PRM-RL works in the post "Learning to Drive ... by Learning Where You Can Drive", so I won't go over the whole spiel here. The basic idea is that we've gotten good at teaching robots to control themselves using a technique called deep reinforcement learning (the RL in PRM-RL) that trains them in simulation, but it's hard to extend this approach to long-range navigation problems in the real world. We overcome this barrier by using a more traditional robotic approach, probabilistic roadmaps (the PRM in PRM-RL), which build maps of where the robot can drive using point-to-point connections; we combine these maps with the robot simulator and, boom, we have a map of where the robot thinks it can successfully drive. We were cited not just for this technique, but for testing it extensively in simulation and on two different kinds of robots.

I want to thank everyone on the team - especially Aleksandra Faust for her background in PRMs and for taking point on the idea (and doing all the quadrotor work with Lydia Tapia), Oscar Ramirez and Marek Fiser for their work on our reinforcement learning framework and simulator, Kenneth Oslund for his heroic last-minute push to collect the indoor robot navigation data, and our manager James for his guidance, contributions to the paper, and support of our navigation work. Woohoo! Thanks again everyone! -the Centaur

Why I’m Solving Puzzles Right Now

centaur 0
When I was a kid (well, a teenager) I'd read puzzle books for pure enjoyment. I'd gotten started with Martin Gardner's mathematical recreation books, but the ones I really liked were Raymond Smullyan's books of logic puzzles. I'd go to Wendy's on my lunch break at Francis Produce, with a little notepad and a book, and chew my way through a few puzzles. I'll admit I often skipped ahead if they got too hard, but I did my best most of the time. I read more of these as an adult, moving back to the Martin Gardner books.

But sometime, about twenty-five years ago (when I was in the thick of grad school), my reading needs completely overwhelmed my reading ability. I'd always carried huge stacks of books home from the library, never finishing all of them, frequently paying late fees, but there was one book in particular - The Emotions by Nico Frijda - which I finished but never followed up on. Over the intervening years, I did finish books, but read most of them scattershot, picking up what I needed for my creative writing or scientific research. Eventually I started using the tiny little notetabs you see in some books to mark the passages I'd read, a "levels of processing" trick to ensure that I was mindfully processing what I read.

A few years ago, I admitted that wasn't enough, and consciously began trying to read ahead of what I needed for work. I chewed through C++ manuals and planning books and was always rewarded a few months later when I'd already read what I needed to solve my problems. I began focusing on fewer books in depth, finishing more books than I had in years. Even that wasn't enough, and I began - at last - the re-reading project I'd hoped to do with The Emotions. Recently I did that with Dedekind's Essays on the Theory of Numbers, but now I'm doing it with the Deep Learning book. But some of that math is frickin' beyond where I am now, man. Maybe one day I'll get it, but sometimes I've spent weeks tackling a problem I just couldn't get.
Enter puzzles. As it turns out, it's really useful for a scientist to also be a science fiction writer who writes stories about a teenaged mathematical genius! I've had to simulate Cinnamon Frost's staggering intellect for the purpose of writing the Dakota Frost stories, but the further I go, the more I want her to be doing real math. How did I get into math? Puzzles! So I gave her puzzles. And I decided to return to my old puzzle books, some of the ones I got later but never fully finished, and to give them the deep reading treatment. It's going much slower than I like - I find myself falling victim to the "rule of threes" (you can do a third of what you want to do, often in three times as much time as you expect) - but then I noticed something interesting.

Some of Smullyan's books in particular are thinly disguised math books. In some parts, they're even the same math I have to tackle in my own work. But unlike the other books, these problems are designed to be solved, rather than being a reflection of some chunk of reality which may be stubborn; and unlike the other books, these have solutions along with each problem.

So, I've been solving puzzles ... with careful note of how I have been failing to solve puzzles. I've hinted at this before, but understanding how you, personally, usually fail is a powerful technique for debugging your own stuck points. I get sloppy, I drop terms from equations, I misunderstand conditions, I overcomplicate solutions, I grind against problems where I should ask for help, I rabbithole on analytical exploration, and I always underestimate the time it will take for me to make the most basic progress.

Know your weaknesses. Then you can work those weak mental muscles, or work around them to build complementary strengths - the way Richard Feynman would always check over an equation when he was done, looking for those places where he had flipped a sign.

Back to work! -the Centaur

Pictured: my "stack" at a typical lunch. I'll usually get to one out of three of the things I bring for myself to do. Never can predict which one, though.

Nailed It (Sorta)

centaur 0
Here's what was in the rabbit hole from last time (I had been almost there): I had way too much data to exploit, so I started to think about culling it out, using the length of the "mumbers" to cut off all the items too big to care about. That led to the key missing insight: my method of mapping mumbers mapped the first digit of each item to the same position - that is, 9, 90, 900, 9000 all had the same angle, just further out. This distance was already a logarithm of the number, but once I dropped my resistance to taking the logarithm twice ...

... then I could create a transition plot function which worked for almost any mumber in the sets of mumbers I was playing with ...

Then I could easily visualize the small set of transitions - "mumbers" with 3 digits - that yielded the graph above; for reference these are: The actual samples I wanted to play with were larger, like this, up to 4 digits: This yields a still visible graph: And this, while it doesn't let me visualize the whole space that I wanted, does provide the insight I wanted. The "mumbers" up to 10000 do indeed "produce" most of the space of the smaller "mumbers" (not surprising, as the "mumber" rule 2XYZ produces XYZ, and 52XY produces XYXY ... meaning most numbers in the first 10,000 will be produced by one in that first set). But this shows that sequences of 52 rule transitions on the left produce a few very, very large mumbers - probably because 552552 produces 552552552552, which produces 552552552552552552552552552552552552, which quickly zooms away to the "mumberOverflow" value at the top of my chart.

And now the next lesson: finishing up this insight, which more or less closes out what I wanted to explore here, took 45 minutes. I had 15 allotted to do various computer tasks before leaving Aqui, and I'm already 30 minutes over that ... which suggests again that you be careful going down rabbit holes; unlike leprechaun trails, there isn't likely to be a pot of gold down there, and who knows how far down it can go? -the Centaur

P.S. I am not suggesting this time spent was not worthwhile; I'm just trying to understand the opportunity cost of various different problem-solving strategies so I can become more efficient.
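For the curious, my reading of the two production rules quoted above can be sketched in a few lines of Python. This is only the fragment of Smullyan's machine mentioned in this post (the general form being: 2X produces X, and if X produces Y then 5X produces YY), not his full rule set.

```python
def produce(mumber):
    """Apply one production step to a "mumber" string, per the rules above:
    2X produces X; and, since 2X produces X, 52X produces XX (more
    generally, if X produces Y, then 5X produces YY)."""
    if mumber.startswith("2"):
        return mumber[1:]
    if mumber.startswith("5"):
        inner = produce(mumber[1:])
        if inner is not None:
            return inner + inner
    return None  # no rule applies

print(produce("2123"))    # the rule 2XYZ produces XYZ
print(produce("5212"))    # the rule 52XY produces XYXY
print(produce("552552"))  # doubles itself, racing toward "mumberOverflow"
```

Iterating `produce` on 552552 shows exactly the explosion described above: each step doubles the string's length, which is why those chains zoom off the chart.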

Don’t Fall Into Rabbit Holes

centaur 2
SO! There I was, trying to solve the mysteries of the universe, learn about deep learning, and teach myself enough puzzle logic to create credible puzzles for the Cinnamon Frost books, and I find myself debugging the fine details of a visualization system I've developed in Mathematica to analyze the distribution of problems in an odd middle chapter of Raymond Smullyan's The Lady or the Tiger.

I meant well! Really I did. I was going to write a post about how finding a solution is just a little bit harder than you normally think, and how insight sometimes comes after letting things sit. But the tools I was creating didn't do what I wanted, so I went deeper and deeper down the rabbit hole trying to visualize them. The short answer seems to be that there's no "there" there, and that further pursuit of this sub-problem will take me further and further away from the real problem: writing great puzzles!

I learned a lot - about numbers, about how things could combinatorially explode, about Ulam Spirals and how to code them algorithmically. I even learned something about how I, particularly, fail in these cases. But it didn't provide the insights I wanted. Feynman warned about this: he called it "the computer disease" - worrying about the formatting of the printout so much you forget about the answer you're trying to produce - and it can strike anyone in my line of work.

Back to that work. -the Centaur

Learning to Drive … by Learning Where You Can Drive

centaur 1
I often say "I teach robots to learn," but what does that mean, exactly? Well, now that one of the projects I've worked on has been announced - and I mean, not just on arXiv, the public-access scientific repository where all the hottest reinforcement learning papers are shared, but actually accepted into the ICRA 2018 conference - I can tell you all about it!

When I'm not roaming the corridors hammering infrastructure bugs, I'm trying to teach robots to roam those corridors - a problem we call robot navigation. Our team's latest idea combines "traditional planning," where the robot tries to navigate based on an explicit model of its surroundings, with "reinforcement learning," where the robot learns from feedback on its performance.

For those not in the know, "traditional" robotic planners use structures like graphs to plan routes, much in the same way that a GPS uses a roadmap. One of the more popular methods for long-range planning is the probabilistic roadmap, which builds a long-range graph by picking random points and attempting to connect them with a simpler "local planner" that knows how to navigate shorter distances. It's a little like how you learn to drive in your neighborhood - starting from landmarks you know, you navigate to nearby points, gradually building up a map in your head of what connects to what.

But for that to work, you have to know how to drive, and that's where the local planner comes in. Building a local planner is simple in theory - you can write one for a toy world in a few dozen lines of code - but difficult in practice, and making one that works on a real robot is quite the challenge. These software systems are called "navigation stacks" and can contain dozens of components - and in my experience they're hard to get working, and even when you do, they're often brittle, requiring many engineer-months to transfer to new domains or even just to new buildings.
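In case it helps to see the shape of the technique, here's a toy 2D sketch of a probabilistic roadmap in Python: random samples connected by a straight-line local planner that checks against circular obstacles. This is an illustration of the general idea, not the code from our paper; the world size, obstacle shapes, and parameters are all invented.

```python
import math
import random

def collision_free(p, q, obstacles, steps=20):
    """Straight-line local planner: walk from p to q in small steps and
    reject the connection if any step lands inside an obstacle disc."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - ox, y - oy) < r for ox, oy, r in obstacles):
            return False
    return True

def build_prm(n_samples, connect_radius, obstacles, world=10.0, seed=42):
    """Sample random configurations, discard those inside obstacles, and
    connect nearby pairs the local planner can traverse, yielding a
    roadmap graph as (nodes, edges)."""
    rng = random.Random(seed)
    samples = [(rng.uniform(0, world), rng.uniform(0, world))
               for _ in range(n_samples)]
    nodes = [p for p in samples
             if not any(math.hypot(p[0] - ox, p[1] - oy) < r
                        for ox, oy, r in obstacles)]
    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (math.dist(nodes[i], nodes[j]) <= connect_radius
                    and collision_free(nodes[i], nodes[j], obstacles)):
                edges.append((i, j))
    return nodes, edges

obstacles = [(5.0, 5.0, 1.5)]  # one circular obstacle in a 10x10 world
nodes, edges = build_prm(60, 3.0, obstacles)
print(len(nodes), "nodes,", len(edges), "edges")
```

A long-range query then just searches this graph (e.g., with Dijkstra or A*) between the nodes nearest the start and goal; every edge is, by construction, something the local planner believes it can drive.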
People are much more flexible, learning from their mistakes, and the science of making robots learn from their mistakes is reinforcement learning, in which an agent learns a policy for choosing actions by simply trying them, favoring actions that lead to success and suppressing ones that lead to failure. Our team built a deep reinforcement learning approach to local planning, using a state-of-the-art algorithm called DDPG (Deep Deterministic Policy Gradients), pioneered by DeepMind, to learn a navigation system that could successfully travel several meters in office-like environments.

But there's a further wrinkle: the so-called "reality gap". By necessity, the local planner used by a probabilistic roadmap is simulated - attempting to connect points on a map. That simulated local planner isn't identical to the real-world navigation stack running on the robot, so sometimes the robot thinks it can go somewhere on a map which it can't navigate safely in the real world. This can have disastrous consequences - causing robots to tumble down stairs, or, much as when people follow their GPSes too closely without looking where they're going, causing cars to drive off the end of a bridge.

Our approach, PRM-RL, directly combats the reality gap by combining probabilistic roadmaps with deep reinforcement learning. By necessity, reinforcement learning navigation systems are trained in simulation and tested in the real world. PRM-RL uses a deep reinforcement learning system as both the probabilistic roadmap's local planner and the robot's navigation system. Because links are added to the roadmap only if the reinforcement learning local controller can traverse them, the agent has a better chance of successfully executing its plans in the real world. In simulation, our agent could traverse hundreds of meters using the PRM-RL approach, doing much better than a "straight-line" local planner which was our default alternative.
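The favor-success, suppress-failure loop is easiest to see in tabular Q-learning, a much simpler relative of the DDPG algorithm mentioned above. Here's a toy sketch on a one-dimensional corridor - entirely invented, just to show the shape of the learning loop, not anything resembling our actual continuous-control training.

```python
import random

def train_corridor_policy(length=6, episodes=500, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent starts at cell 0
    and is rewarded only upon reaching the goal cell at the far end.
    Actions: 0 = step left, 1 = step right."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(length)]     # Q[state][action]
    alpha, gamma, epsilon = 0.5, 0.9, 0.2       # invented hyperparameters
    for _ in range(episodes):
        state = 0
        for _ in range(50):  # step limit per episode
            # Explore occasionally; otherwise take the best-known action.
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt = max(0, min(length - 1, state + (1 if action == 1 else -1)))
            reward = 1.0 if nxt == length - 1 else 0.0
            # Nudge the value of this action toward reward + discounted future.
            target = reward + gamma * max(q[nxt])
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
            if state == length - 1:
                break
    return q

q = train_corridor_policy()
print([0 if a > b else 1 for a, b in q[:-1]])  # greedy action per cell (1 = right)
```

Actions that led toward the goal accumulate higher Q-values, so the greedy policy ends up stepping right in every cell; DDPG does the analogous thing with neural networks over continuous states and actions.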
While I didn't happen to have in my back pocket a hundred-meter-wide building instrumented with a mocap rig for our experiments, we were able to test a real robot on a smaller rig and showed that it worked well (no pictures, but you can see the map and the actual trajectories below; while the robot's behavior wasn't as good as we hoped, we traced that to a networking issue that was adding a delay to commands sent to the robot, not to our code itself; we'll fix this in a subsequent round). This work includes both our group working on office robot navigation - including Aleksandra Faust, Oscar Ramirez, Marek Fiser, Kenneth Oslund, me, and James Davidson - and Aleksandra's collaborator Lydia Tapia, with whom she worked on the aerial navigation also reported in the paper. Until the ICRA version comes out, you can find the preliminary version on arXiv:

https://arxiv.org/abs/1710.03937 PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning

We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents. The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot from which collision-free trajectories feasible for the RL agents can be identified. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use the Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. These evaluations included both simulated environments and on-robot tests. Our results show improvement in navigation task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 meters long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 meters without violating the task constraints in an environment 63 million times larger than used in training.
  So, when I say "I teach robots to learn" ... that's what I do. -the Centaur

Welcome to the Future

centaur 0

20161230_215137.jpg

Welcome to the future, ladies and gentlemen. Here in the future, the obscure television shows of my childhood rate an entire section in the local bookstore, which combines books, games, music, movies, and even vinyl records with a coffeehouse and restaurant.

20161227_171758.jpg

Here in the future, the heretofore unknown secrets of my discipline, artificial intelligence, are now conveniently compiled in compelling textbooks that you can peruse at your leisure over a cup of coffee.

20161230_195132.jpg

Here in the future, genre television shows play on the monitors of my favorite bar / restaurant, and the servers and I have meaningful conversations about the impact of robotics on the future of labor.

20161230_162633.jpg

And here in the future, Monty Python has taken over the world.

Perhaps that explains 2016.

-the Centaur

The Two Fear Channels

centaur 0

20160618_135145.jpg

Hoisted from a recent email thread with the estimable Jim Davies:

“You wrote to me once that the brain has two fear channels, cognitive and reactive. Do you have a citation I can look at for an introduction to that idea?”

So I didn’t have a citation off the top of my head, though I do now - LeDoux’s 1998 book The Emotional Brain - but I did remember what I told Jim: that we have two fear channels, one fast, one slow. The fast one is primarily sensory, reactive, and can learn bad associations which are difficult to unlearn, as in PTSD (post-traumatic stress disorder); the slow one is more cognitive, deliberative, and has intellectual fear responses.

It turns out that it ain’t that simple, but I was almost right. Spoiling the lead a bit: there are two conditioned fear channels, the fast “low road” and the slow “high road”, and they do function more or less as I described. The low road has quick reactions to stimuli - a direct hotline from sensory processing in your thalamus to the amygdala, which is a clearinghouse for emotional information; the high road involves the sensory cortex and confirms the quick reaction of the low road. The low road’s implicated in PTSD, though PTSD seems to involve damage to broader areas of the brain brought on by traumatic events.

Where that needs tweaking is that there’s also a third fear channel, the instructed or cognitive fear channel. This allows us to become scared if we’re told that there’s a tiger behind a door, even if we haven’t seen the fearsome beast. This one relies on an interaction between the hippocampus and the amygdala; if your hippocampus is damaged, you will likely not remember what you’re told, whereas if your amygdala is damaged, you may react appropriately to instruction, but you might not feel the appropriate emotional response to your situation (which could lead you to make poor choices).

So, anyway, that’s the gist. But, in the spirit of Check Your Work, let me show my work from my conversation with Jim.

Ok, I have an answer for you (description based on [Gazzaniga et al 2002], though I found similar information in [Lewis et al 2010]).

There are two fear channels: one involving fast sensory processing and one involving slower perceptual information. Based on the work of LeDoux [1998], these are sometimes called the "low road" (a quick and dirty connection from the thalamus to the amygdala, a crude signal that a stimulus resembles a conditioned stimulus) and the "high road" (thalamus to sensory cortex to amygdala, a more refined signal which is more reliable); both of these channels help humans learn implicit conditioned fear responses to stimuli.

This "low road" and "high road" concept is what my understanding of PTSD was based on: that individuals acquire a fast low-road response to stimuli that they cannot readily suppress. I don't have a reference for you, but I've heard it many times (and it's memorably portrayed in Born on the Fourth of July, when veterans in a parade react to firecrackers with flinches, and later the protagonist, after his experience, has the same reaction). A little research seems to indicate that PTSD may actually involve events traumatic enough to damage the amygdala or hippocampus or both, but likely involving other brain areas as well ([Bremner 2006], [Chen et al 2012]).

There are a couple more wrinkles. Even patients with amygdala damage have unconditioned fear responses; conditioned responses seem to involve the amygdala [Phelps et al 1998]. Instructed fear (warning a subject about a loud noise that will follow a flashing light, for example) seems to involve the hippocampus as well, though patients with amygdala damage don't show fear responses even though they may behave appropriately when instructed (e.g., not showing a galvanic skin response even though they flinch [Phelps et al 2001]). This amygdala response can influence storage of emotional memories [Ferry et al 1999]. Furthermore, there's evidence the amygdala is even involved in perceptual processing of emotional expression [Dolan and Morris 2000].

So to sum, the primary reference that I was talking about was the "low road" (fast connection from thalamus to amygdala, implicated in fast conditioned fear responses and PTSD, though PTSD may involve trauma-induced damage to more brain areas) and "high road" (slow reliable connection from thalamus to sensory cortex to amygdala, implicated in conditioned fear responses), but there's also a "sensory" path (conditioned fear response via the thalamus to the amygdala, with or without the sensory cortex involvement) vs "cognitive" path (instructed fear response via the hippocampus, which functions but shows reduced emotional impact in case of amygdala damage).

Hope this helps!

Bremner, J. D. (2006). Traumatic stress: effects on the brain. Dialogues in clinical neuroscience, 8(4), 445.

Chen, Y., Fu, K., Feng, C., Tang, L., Zhang, J., Huan, Y., ... & Ma, C. (2012). Different regional gray matter loss in recent onset PTSD and non PTSD after a single prolonged trauma exposure. PLoS One, 7(11), e48298.

Dolan, R. J., & Morris, J. S. (2000). The functional anatomy of innate and acquired fear: Perspectives from neuroimaging. Cognitive neuroscience of emotion, 225-241.

Ferry, B., Roozendaal, B., & McGaugh, J. L. (1999). Basolateral amygdala noradrenergic influences on memory storage are mediated by an interaction between β-and α1-adrenoceptors. The Journal of Neuroscience, 19(12), 5119-5123.

Gazzaniga, M.S., Ivry, R.B., & Mangun, G.R. (2002) Cognitive Neuroscience - The Biology of the Mind (2e) W. W. Norton & Company.

LeDoux, J. (1998). The emotional brain: The mysterious underpinnings of emotional life. Simon and Schuster.
Lewis, M., Haviland-Jones, J. M., & Barrett, L. F. (Eds.). (2010). Handbook of emotions. Guilford Press.

Phelps, E. A., LaBar, K. S., Anderson, A. K., O'connor, K. J., Fulbright, R. K., & Spencer, D. D. (1998). Specifying the contributions of the human amygdala to emotional memory: A case study. Neurocase, 4(6), 527-540.

Phelps, E. A., O'Connor, K. J., Gatenby, J. C., Gore, J. C., Grillon, C., & Davis, M. (2001). Activation of the left amygdala to a cognitive representation of fear. Nature neuroscience, 4(4), 437-441.
-the Centaur
Pictured: a few of the books I looked at to answer Jim’s question.



“Sibling Rivalry” returning to print

centaur 0
sibling-rivalry-cover-small.png

Wow. After nearly 21 years, my first published short story, “Sibling Rivalry”, is returning to print. Originally an experiment to try out an idea I wanted to use for a longer novel, ALGORITHMIC MURDER, I quickly found that I’d caught a live wire with “Sibling Rivalry”, which became my first sale, to The Leading Edge magazine, back in 1995.

“Sibling Rivalry” was born of frustrations I had as a graduate student in artificial intelligence (AI) watching shows like Star Trek, in which Captain Kirk talks a computer to death. No one talks anyone to death outside of a Hannibal Lecter movie or a bad comic book, much less in real life, and there’s no reason to believe feeding a paradox to an AI will make it explode. But there are ways to beat one, depending on how it’s constructed - and the more you know about them, the more potential routes there are for attack. That doesn’t mean you’ll win, of course, but … if you want to know, you’ll have to wait for the story to come out.

“Sibling Rivalry” will be the second book in Thinking Ink Press’s Snapbook line, with another awesome cover by my wife Sandi Billingsley, interior design by Betsy Miller, and comments by my friends Jim Davies and Kenny Moorman, the latter of whom uses “Sibling Rivalry” to teach AI in his college courses. Wow! I’m honored. Our preview release will be at the Beyond the Fence launch party next week, with a full release to follow. Watch this space, fellow adventurers! -the Centaur

All the Transitions of Tic-Tac-Toe, Redux

centaur 1
What was supposed to be a quick exercise to help me visualize a reinforcement learning problem has turned into a much larger project, one which I'm reluctantly calling a temporary halt to: a visualization of all the states of Tic-Tac-Toe.

What I found is that it's surprisingly hard to make this work: all the states want to pile on top of each other, and there are a few subtleties to representing it correctly. To make it work, I had to separately represent board positions - the typical X's and O's used in play - from game states, such as Start, X Wins, O Wins, and Stalemate. The Mathematica for this is gnarly and a total hack; it probably could be made more efficient to process all 17,000+ transitions of the game, and I definitely need to think of a way to make each state appear in its own, non-overlapping position. But that will require more thought than my crude jitter function above, the time it takes to run each render is way too long to iterate quickly, and I have a novel to finish. I don't want to get stuck in a grind against a game known for its stalemates.

Ugh. You can see the jumble there; it's hard to see which transitions lead to X's or O's victory and which lead to stalemate. I have ideas on how to fix this, but I want my novel done more and first, dag nab it. So let me give you all the transitions of Tic-Tac-Toe in their full glory (22.8 MB). I could say more about this problem - or I can say what I have, call it victory, and move on.

On to the novel. It's going well. -the Centaur
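As an aside, enumerating the game graph is much easier than drawing it. Here's a sketch in Python rather than my Mathematica (note that my transition count here may differ from the figure above, since my Mathematica math was generating duplicate transitions):

```python
# Enumerate every reachable Tic-Tac-Toe position and transition by
# breadth-first search from the empty board; play stops at a win or a
# full-board stalemate, and X always moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def successors(board):
    """Boards reachable in one move; none if the game is over."""
    if winner(board) or "." not in board:
        return []
    player = "X" if board.count("X") == board.count("O") else "O"
    return [board[:i] + player + board[i + 1:]
            for i, c in enumerate(board) if c == "."]

start = "." * 9
states, frontier, transitions = {start}, [start], 0
while frontier:
    layer = []
    for board in frontier:
        for nxt in successors(board):
            transitions += 1
            if nxt not in states:
                states.add(nxt)
                layer.append(nxt)
    frontier = layer
print(len(states), "states,", transitions, "transitions")
```

This walk finds 5,478 distinct board positions (the standard count when games stop at a win), which is exactly the pile-up problem: thousands of states all wanting their own spot on one diagram.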

I just think they don’t want AI to happen

centaur 2
Hoisted from Facebook: I saw my friend Jim Davies share the following article:
http://www.theguardian.com/commentisfree/2016/mar/13/artificial-intelligence-robots-ethics-human-control

The momentous advance in artificial intelligence demands a new set of ethics ... In a dramatic man versus machine encounter, AlphaGo has secured its third, decisive victory against a renowned Go player. With scientists amazed at how fast AI is developing, it’s vital that humans stay in control.
I posted: "The AI researchers I know talk about ethics and implications all the time - that's why I get scared about every new call for new ethics after every predictable incremental advance." I mean, Jim and I have talked about this, at length; so did my old boss, James Kuffner, and I ... heck, one of my best friends, Gordon Shippey, went round and round on this over two decades ago in grad school. Issues like killbots, all the things you could do with the 99% of a killbot that's not lethal, the displacement of human jobs, the potential for new industry, the ethics of sentient robots, the ethics of transhuman uplift, and whether any of these things are possible ... we talk about it a lot.

So if we've been building towards this for a while, and talking about ethics the whole time, where's the need for a "new" ethics, except in the minds of people not paying attention? But my friend David Colby raised the following point: "I'm no scientist, but it seems to me that anyone who doesn't figure out how to make an ethical A.I before they make an A.I is just asking for trouble."

Okay, okay, so I admit it: my old professor Ron Arkin's book on the ethics of autonomous machines in warfare is lower in my stack than the book I'm reading on reinforcement learning ... but it's literally in my stack, and I think about this all the time ... and the people I work with think about this all the time ... and talk about it all the time ... so where is this coming from? I feel like there's something else beneath the surface.

Since David and I are space buffs, my response to him was that I read all these stories about the new dangers of AI as if they said:
With the unexpected and alarming success of the recent commercial space launch, it's time for a new science of safety for space systems. What we need is a sober look at the risks. After all, on a mission to Mars, a space capsule might lose pressure. Before we move large proportions of the human race to space, we need to, as a society, look at the potential catastrophes that might ensue, and decide whether this is what we want our species to be doing. That's why, at The Future of Life on Earth Institute, we've assembled the best minds who don't work directly in the field to assess the real dangers and dubious benefits of space travel, because clearly the researchers who work in the area are so caught up with enthusiasm that they're not seriously considering the serious risks. Seriously. Sober. Can we ban it now? I just watched Gravity and I am really scared after clenching my sphincter for the last ninety minutes.
To make that story more clear if you aren't a space buff: there are more commercial space endeavors out there than you can shake a stick at, so advances in commercial space travel should not be a surprise - and the risks outlined above, like decompression, are well known and well discussed. Some of us involved in space also talk about these issues all the time. My friend David has actually written a book about space disasters, DEBRIS DREAMS, which you can get on Amazon.

So to make the analogy more clear, there are more research teams working on almost every possible AI problem that you can think of, so advances in artificial intelligence applications should not be a surprise - and the risks outlined by most of these articles are well known and discussed. In my personal experience - my literal personal experience - issues like safety in robotic systems, whether to trust machine decisions over human judgment, and the potential for disruption of human jobs or even life are all discussed more frequently, and with more maturity, than I see in all these "sober calls" for "clear-minded" research from people who wouldn't know a laser safety curtain from an orbital laser platform.

I just get this sneaking suspicion they don't want AI to happen.

-the Centaur

All the States of Tic-Tac-Toe

centaur 0
Screenshot 2016-03-12 15.06.34.png

NOT the most elegant Mathematica, but trying to do clever things with NestList was a pain. And my math was creating duplicate transitions, which is why the other graphs were so dense - and the layer size needed to be tweaked a bit to show both the starting and ending states more clearly. But, after some cleanup, it worked, after a bit of churning (click the image for a larger size):

All the States of Tic Tac Toe.png
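As an aside for readers without Mathematica: the state-merging idea is easy to sketch in Python (my own illustrative translation, not the notebook code above). A breadth-first walk over boards that merges duplicate states - the deduplication that was making the earlier graphs so dense when it went wrong - visits every distinct legal position:

```python
# Sketch: enumerate every distinct reachable tic-tac-toe position,
# merging duplicate board states so each appears only once.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for i, j, k in WIN_LINES:
        if board[i] != '.' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def successors(board):
    """All boards one move away; X moves when the piece counts are equal."""
    player = 'X' if board.count('X') == board.count('O') else 'O'
    for i, cell in enumerate(board):
        if cell == '.':
            yield board[:i] + player + board[i + 1:]

def all_states():
    """Every distinct position reachable from the empty board."""
    start = '.' * 9
    seen, frontier = {start}, [start]
    while frontier:
        board = frontier.pop()
        if winner(board) or '.' not in board:
            continue  # game over: don't expand past a win or a full board
        for nxt in successors(board):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(len(all_states()))
```

Running it counts 5,478 distinct positions (counting the empty board, and stopping each line of play at a win or a full board) - the set whose layered structure the graph above visualizes.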

I Am Easily Amused

centaur 0

Screenshot 2016-03-12 14.24.06.png

More seriously, what I’m trying to do is improve my understanding of state spaces. Below’s yet another visualization of the first four stages of tic-tac-toe, trying to get at how the states reconverge.

TicTacToe v1.png

You can see the structure even better without the board visualizations, but then it’s just a graph, and you no longer know what it is that you’re seeing. More thought is required on how to visualize this (and the real problems I’m tackling behind it, for my day job).

-the Centaur

Visualizing Cellular Automata

centaur 0


cellular-automata-v1.png

SO, why's an urban fantasy author digging into the guts of Mathematica trying to reverse-engineer how Stephen Wolfram drew the diagrams of cellular automata in his book A New Kind of Science? Well, one of my favorite characters to write about is the precocious teenage weretiger Cinnamon Frost, who at first glance was a dirty little street cat until she blossomed into a mathematical genius when watered with just the right amount of motherly love. My training as a writer was in hard science fiction, so even if I'm writing about implausible fictions like teenage weretigers, I want the things that are real - like the mathematics she develops - to be right. So I'm working on a new kind of math behind the discoveries of my little fictional genius, but I'm not the youngest winner of the Hilbert Prize, so I need tools to help simulate her thought process.

And my thought process relies on visualizations, so I thought, hey, why don't I build on whatever Stephen Wolfram did in his groundbreaking tome A New Kind of Science, which is filled to its horse-choking brim with handsome diagrams of cellular automata, their rules, and the pictures generated by their evolution? After all, it only took him something like ten years to write the book ... how hard could it be?

Deconstructing the Code from A New Kind of Science, Chapter 2

Fortunately Stephen Wolfram provides at least some of the code that he used for creating the diagrams in A New Kind of Science. He's got the code available for download on the book's website, wolframscience.com, but a large subset is in the extensive endnotes for his book (which, densely printed and almost 350 pages long, could probably constitute a book in their own right). I'm going to reproduce that code here, as I assume it's short enough to fall under fair use, and for the half-dozen functions we've got here any attempt to reverse-engineer it would end up just recreating essentially the same functions with slightly different names.
Cellular automata are systems that take patterns and evolve them according to simple rules. The most basic cellular automata operate on lists of bits - strings of cells which can be "on" or "off" or alternately "live" or "dead," "true" and "false," or just "1" and "0" - and it's easiest to show off how they behave if you start with a long string of cells which are "off" with the very center cell being "on," so you can easily see how a single live cell evolves. And Wolfram's first function gives us just that, a list filled with dead cells represented by 0 with a live cell represented by 1 in its very center:

In[1]:= CenterList[n_Integer] := ReplacePart[Table[0, {n}], 1, Ceiling[n/2]]


In[2]:= CenterList[10]
Out[2]= {0, 0, 0, 0, 1, 0, 0, 0, 0, 0}


One could imagine a cellular automaton which updated each cell based just on its own contents, but that would be really boring, as each cell would be effectively independent. So Wolfram looks at what he calls "elementary automata," which update each cell based on its neighbors. Counting the cell itself, that's a row of three cells, and there are eight possible combinations of live and dead values over three elements - and only two possible values that can be set for each new element, live or dead. Wolfram had a brain flash: list the eight possible combinations in the same order every time, and all you need is that list of eight values of "live" or "dead" - or 1's and 0's - and since a list of 1's and 0's is just a binary number, that enabled Wolfram to represent each elementary automaton rule as a number:

In[3]:= ElementaryRule[num_Integer] := IntegerDigits[num, 2, 8]

In[4]:= ElementaryRule[30]
Out[4]= {0, 0, 0, 1, 1, 1, 1, 0}
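If you'd like to check that encoding without Mathematica, the same digit list falls out of a couple of lines of Python (my own sketch, mirroring Wolfram's function name):

```python
def elementary_rule(num):
    # the eight bits of the rule number, most significant bit first:
    # one output cell value for each of the eight three-cell neighborhoods
    return [(num >> i) & 1 for i in range(7, -1, -1)]

print(elementary_rule(30))  # → [0, 0, 0, 1, 1, 1, 1, 0]
```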


Once you have that number, building code to apply the rule is easy. The input data is already a string of 1's and 0's, so Wolfram's rule for updating a list of cells basically involves shifting ("rotating") the list left and right, adding up the values of these three neighbors according to base 2 notation, and then looking up the value in the rule. Wolfram created Mathematica in part to help him research cellular automata, so the code to do this is deceptively simple…

In[5]:= CAStep[rule_List, a_List] :=
rule[[8 - (RotateLeft[a] + 2 (a + 2 RotateRight[a]))]]


... a “RotateLeft” and a “RotateRight” with some addition and multiplication to get the base 2 index into the rule. The code to apply this again and again to a list, building up the history of a cellular automaton over time, is also simple:

In[6]:= CAEvolveList[rule_, init_List, t_Integer] :=
NestList[CAStep[rule, #] &, init, t]
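For readers following along without Mathematica, here's the whole pipeline so far sketched in Python (my own translation - the names mirror Wolfram's functions, and the edges wrap around just like RotateLeft and RotateRight do):

```python
def center_list(n):
    """A row of n dead cells with a single live cell in the middle."""
    row = [0] * n
    row[(n - 1) // 2] = 1
    return row

def ca_step(rule_num, row):
    """One update: each cell reads left/self/right as a base-2 index
    (wrapping at the edges) and looks up that bit of the rule number."""
    n = len(row)
    return [(rule_num >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

def ca_evolve(rule_num, row, steps):
    """The history of the automaton: the row after 0, 1, ..., steps updates."""
    history = [row]
    for _ in range(steps):
        row = ca_step(rule_num, row)
        history.append(row)
    return history

# rule 30 from a single live cell, printed as text instead of a Raster
for row in ca_evolve(30, center_list(9), 3):
    print("".join("#" if cell else "." for cell in row))
# ....#....
# ...###...
# ..##..#..
# .##.####.
```

The irregular left edge of that little triangle is the first hint of the chaos rule 30 unfolds into at larger scales.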


Now we're ready to create the graphics for the evolution of Wolfram's "rule 30," the very simple rule which shows highly complex and irregular behavior, a discovery which Wolfram calls "the single most surprising scientific discovery [he has] ever made." Wow. Let's spin it up for a whirl and see what we get!

In[7]:= CAGraphics[history_List] :=
Graphics[Raster[1 - Reverse[history]], AspectRatio -> Automatic]


In[8]:= Show[CAGraphics[CAEvolveList[ElementaryRule[30], CenterList[103], 50]]]
Out[8]=

rule-30-evolution.png



Uh-oh. The "Raster" code that Wolfram provides is the code to create the large images of cellular automata, not the sexy graphics that show the detailed evolution of the rules. And reading between the lines of Wolfram's endnotes, he started his work in FrameMaker before Mathematica was ready to be his full publishing platform, with a complex build process producing the output - so there's no guarantee that clean, simple Mathematica code even exists for some of those early diagrams.

Guess we'll have to create our own.

Visualizing Cellular Automata in the Small

The cellular automata diagrams that Wolfram uses have boxes with thin lines, rather than just a raster image with 1's and 0's represented by borderless boxes. They're particularly appealing because the lines are white between black boxes and black between white boxes, which makes the structures very easy to see. After some digging, I found that, naturally, a Mathematica function to create those box diagrams does exist, and it's called ArrayPlot, with the Mesh option set to True:

In[9]:= ArrayPlot[Table[Mod[i + j, 2], {i, 0, 3}, {j, 0, 3}], Mesh -> True]
Out[9]=

checkerboard.png


While we could just use ArrayPlot, it's important when developing software to encapsulate our knowledge as much as possible, so we'll create a function CAMeshGraphics (following the way Wolfram named his functions) that encapsulates the knowledge of turning the Mesh option to True. If later we decide there's a better representation, we can just update CAMeshGraphics, rather than hunting down every use of ArrayPlot. This function gives us this:

In[10]:= CAMeshGraphics[matrix_List] :=
ArrayPlot[matrix, Mesh -> True, ImageSize -> Large]


In[11]:= CAMeshGraphics[{CenterList[10], CenterList[10]}]
Out[11]=

lines-of-boxes.png


Now, Wolfram has these great diagrams to help visualize cellular automata rules, which show the neighbors up top and the output value at bottom, with a space between them. GraphicsGrid does almost what we want here, except that by its nature it resizes all the graphics to fill each available box. I'm sure there's a clever way around this, but I don't know Mathematica well enough to find it, so I'm going to go back on what I just said earlier, break out the options on ArrayPlot, and tell the boxes to be the size I want:

In[20]:= CATransitionGraphics[rule_List] :=
GraphicsGrid[
Transpose[{Map[
   ArrayPlot[{#}, Mesh -> True, ImageSize -> {20 Length[#], 20}] &, rule]}]]


That works reasonably well; here's an example rule, where three live neighbors in a row kill the center cell:

In[21]:= CATransitionGraphics[{{1, 1, 1}, {0}}]
Out[21]=

Screenshot 2016-01-03 14.19.21.png  

Now we need the pattern of digits that Wolfram uses to represent his neighbor patterns. Looking at the diagrams, and after some digging in the code, it seems these digits are simply listed in reverse counting order - that is, for 3 cells, we count down from 2^3 - 1 to 0, represented as binary digits.

In[22]:= CANeighborPattern[num_Integer] :=
Table[IntegerDigits[i, 2, num], {i, 2^num - 1, 0, -1}]


In[23]:= CANeighborPattern[3]
Out[23]= {{1, 1, 1}, {1, 1, 0}, {1, 0, 1}, {1, 0, 0}, {0, 1, 1}, {0, 1, 0}, {0, 0,
1}, {0, 0, 0}}
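(In Python terms - again, just my own sketch of the same idea - that counting-down order is a one-liner:)

```python
def ca_neighbor_pattern(num):
    # count down from 2^num - 1 to 0, writing each value as num binary digits
    return [[int(bit) for bit in format(i, f"0{num}b")]
            for i in range(2**num - 1, -1, -1)]

print(ca_neighbor_pattern(3))
# → [[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0],
#    [0, 1, 1], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
```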


Stay with me - that only gets us the first row of the CATransitionGraphics; to get the next row, we need to apply a rule to that pattern and take the center cell:

In[24]:= CARuleCenterElement[rule_List, pattern_List] :=
CAStep[rule, pattern][[Ceiling[Length[pattern]/2]]]


In[25]:= CARuleCenterElement[ElementaryRule[30], {0, 1, 0}]
Out[25]= 1


With all this, we can now generate the pattern of 1's and 0's that represents the transitions for a single rule:

In[26]:= CARulePattern[rule_List] :=
Map[{#, {CARuleCenterElement[rule, #]}} &, CANeighborPattern[3]]

In[27]:= CARulePattern[ElementaryRule[30]]
Out[27]= {{{1, 1, 1}, {0}}, {{1, 1, 0}, {0}}, {{1, 0, 1}, {0}}, {{1, 0, 0}, {1}}, {{0,
   1, 1}, {1}}, {{0, 1, 0}, {1}}, {{0, 0, 1}, {1}}, {{0, 0, 0}, {0}}}


Now we can turn it into graphics, putting it into another GraphicsGrid, this time with a Frame.

In[28]:= CARuleGraphics[rule_List] :=
GraphicsGrid[{Map[CATransitionGraphics[#] &, CARulePattern[rule]]},
Frame -> All]


In[29]:= CARuleGraphics[ElementaryRule[30]]
Out[29]=

Screenshot 2016-01-03 14.13.52.png

At last! We've got the beautiful transition diagrams that Wolfram has in his book. Now we want to apply the rule to a row with a single live cell:

In[30]:= CAMeshGraphics[{CenterList[43]}]
Out[30]=

Screenshot 2016-01-03 14.13.59.png

What does that look like? Well, we once again take our CAEvolveList function from before, but rather than formatting it with Raster, we format it with our CAMeshGraphics:

In[31]:= CAMeshGraphics[CAEvolveList[ElementaryRule[30], CenterList[43], 20]]
Out[31]=

Screenshot 2016-01-03 14.14.26.png

And now we've got all the parts of the graphics which appear in the initial diagram of this page. Just to work it out a bit further, let's write a single function to put all the graphics together, and try it out on rule 110, the rule which Wolfram discovered can simulate any possible program, making it effectively a universal computer:

In[22]:= CAApplicationGraphics[rule_Integer, size_Integer] := Column[
{CAMeshGraphics[{CenterList[size]}],
   CARuleGraphics[ElementaryRule[rule]],
   CAMeshGraphics[
CAEvolveList[ElementaryRule[rule], CenterList[size],
   Floor[size/2] - 1]]},
Center]

In[23]:= CAApplicationGraphics[110, 43]
Out[23]=


Screenshot 2016-01-03 14.14.47.png

It doesn't come out quite the way it did in Photoshop, but we're getting close. Learning more of the rules of Mathematica graphics will probably help me, but that's neither here nor there. We've got a set of tools for displaying diagrams, which we can craft into what we need.

Which happens to be a non-standard number system unfolding itself into hyperbolic space, God help me.

Wish me luck.

-the Centaur

P.S. While I'm going to do a standard blog post on this, I'm also going to try creating a Mathematica Computable Document Format (.cdf) file for your perusal. Wish me luck again - it's my first one of these things.

P.P.S. I think it's worthwhile to point out that while the tools I just built help visualize the application of a rule in the small ...

In[24]:= CAApplicationGraphics[105, 53]
Out[24]=

Screenshot 2016-01-03 14.14.58.png

... the tools Wolfram built help visualize rules in the very, very large:

In[25]:= Show[CAGraphics[CAEvolveList[ElementaryRule[105], CenterList[10003], 5000]]]

Out[25]=

rule-105-a-lot.png

That's almost 200 times bigger in each direction - tens of thousands of times as many cells - and Mathematica executes and displays it flawlessly.

Why yes, I’m running a deep learning system on a MacBook Air. Why?

centaur 1
deeplearning.png

Yep, that’s Python consuming almost 300% of my CPU - which, since I saw it hit over 300%, means this machine has four processing cores - running the TensorFlow tutorial.

For those that don’t know, “deep learning” is a relatively recent type of machine learning which uses improvements in both processing power and learning algorithms to train learning networks that can have dozens or hundreds of layers - sometimes as many layers as neural networks in the 1980’s and 1990’s had nodes. For those that don’t know even that, neural networks are graphs of simple nodes that mimic brain structures, and you can train them with data that contains both the question and the answer. With enough internal layers, neural networks can learn almost anything, but they require a lot of training data and a lot of computing power.

Well, now we’ve got lots and lots of data, and with more computing power, you’d expect we’d be able to train larger networks - but the first real trick was discovering mathematical tricks that keep the learning signal strong deep, deep within the networks. The second real trick was wrapping all this amazing code in a clean software architecture that enables anyone to run the software anywhere.

TensorFlow is one of the most recent of these frameworks - it’s Google’s attempt to package up the deep learning technology it uses internally so that everyone in the world can use it - and it’s open source, so you can download and install it on most computers and try out the tutorial at home. The CPU-baking example you see running here, however, is not the simpler tutorial, but a test program that runs a full deep neural network. Let’s see how it did:

Screenshot 2016-02-08 21.08.40.png

Well. 99.2% correct, it seems. Not bad for a couple hundred lines of code, half of which is loading the test data - and yeah, that program depends on 200+ files worth of Python that the TensorFlow installation loaded onto my MacBook Air, not to mention all the libraries that the TensorFlow Python installation depends on in turn ... But I still loaded it onto a MacBook Air, and it ran perfectly.

Amazing what you can do with computers these days.

-the Centaur
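P.S. If "train them with data that contains both the question and the answer" sounds abstract, here's a toy version of the idea in plain Python - no TensorFlow, no frameworks, and every name below is my own invention for illustration: a single sigmoid node nudged by gradient descent until it learns the AND function.

```python
import math
import random

def predict(weights, bias, inputs):
    """The node's activation: a sigmoid squashing of the weighted sum."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))

def train_neuron(data, epochs=10000, rate=1.0, seed=0):
    """Train one sigmoid node on (inputs, target) pairs by gradient descent."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(len(data[0][0]))]
    bias = rng.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, target in data:
            output = predict(weights, bias, inputs)
            # gradient of the squared error, passed back through the sigmoid
            grad = (output - target) * output * (1 - output)
            weights = [w - rate * grad * x for w, x in zip(weights, inputs)]
            bias -= rate * grad
    return weights, bias

# the "question and answer" pairs: two inputs, and the AND of those inputs
and_gate = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_neuron(and_gate)
print([round(predict(weights, bias, x)) for x, _ in and_gate])
```

A deep network is conceptually just many of these nodes stacked in layers, with the same kind of gradient signal propagated back through all of them - the "mathematical tricks" mentioned above are what keep that signal from dying out dozens of layers down.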

Welcome to 2016

centaur 0

20151219_063113.jpg

Hi, I’m Anthony! I love to write books and eat food, activities that I power by fiddling with computers. Welcome to 2016! It’s a year. I hope it’s a good one, but hope is not a strategy, so here’s what I’m going to do to make 2016 better for you.

First, I’m writing books. I’ve got a nearly-complete manuscript of a steampunk novel JEREMIAH WILLSTONE AND THE CLOCKWORK TIME MACHINE which I’m wrangling with the very excellent editor Debra Dixon at Bell Bridge Books. God willing, you’ll see this come out this year. Jeremiah appears in a lot of short stories in the anthologies UnCONventional, 12 HOURS LATER, and 30 DAYS LATER - more on that one in a bit.

I also have completed drafts of the urban fantasy novels SPECTRAL IRON and HEX CODE, starring Dakota Frost and her adopted daughter Cinnamon Frost, respectively. If you like magical tattoos, precocious weretigers, and the trouble they can get into, look for these books coming soon - or check out FROST MOON, BLOOD ROCK and LIQUID FIRE, the first three Dakota books. (They’re all still on sale, by the way).

Second, I’m publishing books. I and some author/artist friends in the Bay Area founded Thinking Ink Press, and we are publishing the steampunk anthology 30 DAYS LATER edited by Belinda Sikes, AJ Sikes and Dover Whitecliff. We’re hoping to also re-release their earlier anthology 12 HOURS LATER; both of these were done for the Clockwork Alchemy conference, and we’re proud to have them.

We’re also publishing a lot more - FlashCards and InstantBooks and SnapBooks and possibly even a reprint of a novel which recently went out of print. Go to Thinking Ink Press for more news; for things I’m an editor/author on I’ll also announce them here.

Third, I’m doing more computing. Cinnamon Frost is supposed to be a mathematical genius, so to simulate her thought process I write computer programs (no joke). I’ve written up a few articles on this for publication on this blog, and hope to do more over the year to come.

Fourth, I’m going to keep doing art. Most of my art is done in preparation for either book frontispieces or for 24-Hour Comics Day, but I’m going to step that up a bit this year - I have to, if I’m going to get (ulp) three frontispieces done over the next year. Must draw faster!

Finally, I’m going to blog more. I’m already doing it, right now, but one way I’m trying to get ahead is to write two blog posts at a time, publishing one and saving one in reserve. This way I can keep getting ahead, but if I fall behind I’ve got some backlog to fall back on. I feel hounded by all the ideas in my head, so I’m going to loose them on all of you.

As for New Year’s Resolutions? Fah. I could say “exercise more, blog every day, and clean up the piles of papers” but we all know New Year’s Resolutions are a joke, unless your name is Jim Davies, in which case they’re performance art.

SO ANYWAY, 2016. It’s going to be a year. I hope we can make it a great one!

-the Centaur

Pictured: The bookshelves of Cafe Intermezzo in the Atlanta airport, one place where I like to write books and eat food.

Going on the Record about Donald Trump

centaur 0

americanflag.png

AS some of you may have noticed, real estate mogul Donald Trump is making his second (or third) run for the presidency (depending on how you count), and has been having quite a good show of it - topping many polls despite saying and doing a lot of things that would have doomed another candidate - such as disparaging American prisoners of war, associating immigrants with criminals, and, most recently, associating his opponents with pedophiles.

As a left-leaning moderate, I’m not fond of many of Donald Trump’s policies. But I am fond of Dilbert, and the excellent blog by Dilbert creator Scott Adams, in which Scott wrestles with many difficult and interesting ideas so you don’t have to (but you should). In the blog, Scott’s been chronicling Trump’s rise to power with what he calls the Master Wizard Hypothesis, which, in a nutshell, says that there are great techniques of persuasion, Trump is an acknowledged master, and most of the crazy things that Trump is doing are carefully engineered to get and keep your attention. Regardless of your politics, Scott says, you should pay attention to what Trump is doing, because you’re watching a master class in persuasion unfold on a national stage.

Scott, a trained hypnotist and student of persuasion himself, goes further to say that a Master Wizard’s persuasion often puts people into cognitive dissonance, where a person becomes uncomfortable when they are presented with information they don’t want to accept. Well, as a trained cognitive scientist, I find that characterization makes me a bit uncomfortable, because I see the conscious (or unconscious) persuasion embedded in it - persuasion which works in favor of someone trying to be a persuader: the framing is that someone presented with “information” who “feels uncomfortable” is being irrational. However, because one thing that can trigger discomfort is seeing someone violate what you perceive to be a standard, it’s also perfectly possible to feel uncomfortable when confronted with “information” that contradicts your beliefs not because you are being inconsistent ... but because the presented “information” is wrong. So, in this argument, people could just be upset with Trump not because he’s a Master Wizard ... but because they sincerely disagree with him in their judgments about facts and policies.

As it happens, I’ve entertained for a while an alternate hypothesis about what’s been going on about Donald Trump, and it seems like it might be playing out. In fact, I’ve almost been scooped on it, so at first I wasn’t going to write anything. But Scott Adams has done something great with his hypotheses: he’s put his predictions about Trump on the table, so he can be proved wrong later. Feynman argued the same thing: you’ve got to stick your neck out far enough for it to get cut off in order to really see the truth. So, I wanted to go on the record about what I think’s going on with Donald Trump.

For reference, here’s what I think people are saying about Donald Trump:

  • Malignant Narcissist Hypothesis: Donald Trump is an insufferable blowhard who’s doing well because he’s an outrageous bully with an ego so enormous he’s resistant to normal modes of shame, and is airing all the dirty laundry of the Republican party that the politer and saner politicians with greater experience have tried to sweep under the rug. Many political analysts hold this theory, and assume Trump will eventually implode somewhere between the debates and the campaign trail because the majority of Republican voters, and certainly most Democratic voters, will never vote for him (and there’s data for that). The idea, you see, is that roughly twenty-five percent of voters is the most who’d ever vote for Trump, so he’s maxed out.
  • Master Wizard Hypothesis: Donald Trump is a highly experienced, well-trained businessman, expert at the art of the deal and his own brand management, who’s mastered a semi-secret art of persuasion. His campaign is a sequence of carefully crafted stunts designed to implode his opponents, one by one, because Donald Trump has no shame, merely a cold, calculating, highly trained brain designed to put the whammy on people, slowly convincing them to turn his way so he can ultimately get his way. Scott Adams believes this, and has analyzed in depth how many seemingly weird things Trump does actually make a lot of sense.
  • Tell It Like It Is Hypothesis: Donald Trump is a smart, intelligent, conservative man who’s gotten fed up with the way things are going in this country, like many other conservatives, and is gaining popularity because (a) he’s saying what many conservatives are thinking (b) he’s telling it like it is, without a filter (c) he’s got a lot of experience running a successful business and (d) now he’s applying his decades of experience to politics, hopefully making America great again.   

These all seem like alternatives, but they’re actually closer than you think. They’re all based on the idea that Trump has no shame (which isn’t likely true), has a lot of experience at business (which is almost certainly true), and is saying things that the Republican base wants to hear. The spectrum seems to be whether you think some of his more colorful antics are because he’s an arrogant bully (politicos), a skilled persuader (Adams), or a genuine conservative (the Republican base).

Now my hypothesis.

  • Genius Brand Management. Donald Trump is a billionaire whose greatest asset is his brand, and he’s an American who cares about his country. Running for President, while it costs money, gives Trump an enormous amount of free publicity - he’s getting an enormous force multiplier from all this media attention, far more than he could by building more hotels or casinos, starting another reality TV show, or running ads. While doing this, he decided to - sincerely - raise all the issues he really cares about in the election, or at least the things he cares about which resonate with Republican voters. Trump simultaneously gets an enormous brand uplift and sets the tone of the presidential campaign to be about issues which matter to him. If he’s elected, great: he’s run a mammoth multinational corporation, and can handle the Presidency. If not, he’ll bow out … just as he’s bowed out of every other flirtation at candidacy since 1988.

So, under this theory, Donald Trump would likely implode sometime between the debates and the campaign trail (where a majority of votes, not just topping a poll, matters, and a mammoth grassroots organization is needed), but regardless of whether he implodes, he’s going to have a huge uplift in his brand, and will have set the course of the campaign.

Last week, Trump appears to have imploded with a long-winded speech, different from his usual polished self, in which he ranted about his opponents, outlined his policy approaches on just about everything, and ultimately finished with "How stupid are the people of the country to believe this crap?” His opponents have gone wild, and Janell Ross wrote an article which crystallized what I’d already been thinking: Donald Trump might be self-sabotaging. You read it there first, folks, but just so I would have the opportunity to be proved wrong, here’s what the other people predict.

  • Malignant Narcissist Hypothesis: The arrogant blowhard’s finally imploding. Example: at HuffPo.
  • Master Wizard Hypothesis: Trump’s now moving against Carson. See Scott Adams’ analysis, in which he points out Trump’s engineered a linguistic kill shot comparing Ben Carson’s pathological temper to incurable pedophilia.
  • Tell It Like It Is Hypothesis: Trump is just speaking from his heart, and won’t be hurt by telling it like it is. See this New York Times article "Republican strategists in the state were skeptical that Mr. Trump’s latest over-the-top outburst would seriously erode his support."

And now my take:

  • Genius Brand Management: Trump, having watched campaigns since the eighties, is fully aware that at one point half of Republican voters said they would never vote for him, and that falling behind Carson at this point could cost him the jockeying position he needs to get the nomination. So he makes an impassioned plea for attention, simultaneously trashing his rival as a last ditch hope, giving his brand one last spike - and reiterating what he thinks is important about the campaign.

As Scott might say, I remind you I don’t know who’s going to be President. I’d be a dumb man to bet against the author of Dilbert; I literally have his book on systems versus goals on my desk at work. (I haven’t gotten to it yet, but soon - I get the gist from his blog). And other politicos certainly are more practiced at this than me; I’ve only been following politics closely since, oh, when Bush was running. Bush Senior. The first time. Remember, against Reagan? I do.

SO anyway, the best hypothesis will win, because you can’t fake reality any way whatsoever. I’m going on the record saying I think Trump is bowing out of the race. If I’m wrong, I’m wrong. But if Trump has started to bow out, I’ll think about my Genius Brand Management hypothesis, recall that I said to myself that a smart man wouldn’t just use all this free publicity to pump his brand, but to make a statement to the American people about what he cared about. And then I’ll think about this phrase from his speech:

"I've really enjoyed being with you," Trump said. "It's sad in many ways because we're talking about so many negative topics, but in certain ways it's beautiful. It's beautiful."

Sure sounds to me like someone who has issues he cares about, bowing out after he’s said his piece.

-the Centaur

Getting it together

centaur 0
What you see there is my "working stack" at home ... the piles of books for my most active projects. These include Dakota Frost (shelves to the left and right that you can't quite see), Cinnamon Frost (middle shelf on the right, middle center shelf and others below), robotics at work (top shelf on the right), Thinking Ink Press (bottom visible shelf on the right and middle center shelf), Lovecraft studies (middle center shelf and top shelf on the left you can't quite see), and general writing (above, below, all around).

I accumulate lots and lots of books - too many, some people think - but there's a careful method to this madness, as most of these books are not recreational, but topical, filling out a library around things I'm trying to accomplish. This means that when I'm working on a problem in, say, a Cinnamon Frost novel, and get stumped, I can have the pleasant experience I had last night of glaring at a Wolfram MathWorld article, not finding all the info I needed, peering through the references ... and finding that the references pointed to a book I had on the topic, right in the Cinnamon shelf (pictured above).

For a long time I was terrified of my own library. Well, not terrified, but I'd piled up and accumulated so much stuff that I couldn't effectively use it. This has been accumulating since the days of my condo in Atlanta, which was approaching near gravitational collapse, but I've made two major pushes to clean up the library since I moved to California, which organized it usefully, as I've reported on previously, and since then two major pushes to clean up the files.

I've still got a long way to go - you can see more piles below - but now I've got a better system for organizing paper, I'm starting to develop a system to get things out of the library and back to used bookstores (slowly, grudgingly, occasionally), and ... I actually find myself wanting to go in here again.
The piles are still scary, but now I've got a nice reading area set up, which I can lean back and be cozy in ... My current reading pile and art projects are intimidating, but now organized and useful and even attractive ... My cognitive science section has developed a cozy, hallowed feel that makes me want to dig in more ... and at last I once again have a workspace which makes me want to sit down and work, or write.

I can't tell you how healthy that feels. I need to stay on top of that. But for now ... time to get back to it.

-the Centaur

P.S. Yes, I do actually use all those computers and monitors, though the one on the far right is slowly getting replaced by the floating hoverboard of an iMac that is now struggling to supplant my MacBook Air as my primary computer (good luck, you'll need it). For reference, there's my ancient MacBook Pro on the left, which formerly served as my home server; the iMac that's replacing it, hovering over the desk; a MacBook Air, which is my primary computer; and the secondary keyboard and monitor for my old Linux workstation, which is about to be replaced because it's not beefy enough for my experiments with ROS.

Meanwhile, Back at GDC

centaur 0

on-the-road-2015a.png

View from my hotel in San Francisco. It may seem strange to get a hotel for a conference in San Francisco when I live in the San Francisco Bay Area, but the truth is that I "live in the Bay Area" only by a generous border-case interpretation of "Bay Area" (we're literally on the last page of the 500-page Bay Area map book that I bought when I came out here). The trip from my house to the Moscone Center in the morning is two to two and a half hours - you could drive from Greenville, SC to Atlanta, Georgia in that time, so by that logic I should have commuted from home to Georgia Tech. So. Not. Going. To. Happen.

So why am I heading to the Moscone Center this week? The Game Developers Conference, of course. At the request of my wife, I may not directly blog from wherever it is that I am, so I'll be posting with a delay about this conference. So far, I've attended the AI Game Programmers Guild dinner Sunday night, which was a blast: seeing old friends, meeting new ones, renewing friendships, and talking about the robot apocalypse and the future of artificial intelligence research. GDC is a blast even if you don't directly program games, because game developers are constantly pushing the boundaries of the possible - so I try not to miss it. I've been coming for roughly 15 years now - and already have close to 15 pages of notes. Good stuff.

One thing does occur to me, though, about games and "Gamer Gate." If you're into games, you may or may not have heard of the Gamer Gate controversy; some people claim it's about corruption in games journalism, while others openly state it's motivated by the invasion of gaming by so-called "social justice warriors" who are trying to destroy traditional male-oriented games in favor of thinly disguised social commentary. Still others suspect that the entire controversy is a manufactured excuse for misogynists to abuse women in games - and there's evidence that shows that at least some miscreants are doing just that.

But let's go back to the first reason, ethics in games journalism. I can't really speak to this from the inside, but in the circles in which I've been playing games for the past thirty-five years, no one cares about game reviews. Occasionally we use game magazines to find neat screenshots of new games, but, seriously - everything is word of mouth.

What about the second, the "invasion of social justice warriors?" I can speak about this: in the circles that I've traveled in the game industry in the past fifteen years, no one cares about this controversy. At GDC, women who speak about games are much more likely to be speaking about technical issues like constraint systems and procedural content generation than they are about social issues - and men are as likely as women to speak about women's issues or the treatment of other minorities.

These issues are important issues - but they're not big issues. Out of a hundred books in the conference bookstore, perhaps a dozen were on social issues, and only two of those dealt with women's culture or alternative culture. But traditional games are going strong - and are getting bigger and better and brighter and more vibrant as time goes along.

People like the games they like, and developers build them. No-one is threatened by the appearance of a game that breaks traditional stereotypes. No-one imagines that popular games that appeal to men are going to go away. All we really care about is: make it fun, make it believable, and finish it in a reasonable time and on something approximating a reasonable budget.

Look, I get it: change is scary. And not just emotionally; these issues run deep. At a crowd simulation talk today, a researcher showed that you can mathematically measure a person's discomfort navigating in crowds - and showed a very realistic-looking behavior where a single character facing a phalanx of oncoming agents turned tail and ran away.

But this wasn't an act of fear; it was an act of avoidance. The appearance of an onrushing wall of people made that straightforward algorithm, designed to prove to the agent that it wouldn't run into trouble, choose a path that went the other way. An agent with more options to act might have chosen to lash out - to try to force a path.

But none of that was necessary. A slightly more sophisticated algorithm, based on studies of actual human crowd behavior, showed that boldly going forward into a space that slightly risked collisions, while steering away a bit harder if people got too close, worked just as well. The agent was easily able to wade through the phalanx - and the phalanx smoothly moved around it.

The point is that many humans don't want to run into things that are different. If the oncoming change is big enough, the simplest path may involve turning tail and running away - and if you don't want to run away, you might want to lash out. But it isn't necessary. Step forward with confidence moving towards the things that you want, and people will make space for you.

Yes, change is coming.

But change won't stop game developers from making games aimed at every demographic of fun. Chill out.

-the Centaur

P.S. Yes, it is a bit ridiculous to refer to a crowd avoidance algorithm that can mathematically prove it avoids collisions as "simple," and it's debatable whether that system, ORCA - based on linear programming over a simplification of velocity obstacles - is really "simpler" than the TTC force method, which combines goal acceleration with avoidance forces derived from a discomfort energy gradient defined within a velocity obstacle. For the sake of this anecdote, ORCA shows slightly "simpler" behavior than TTC because ORCA's play-it-safe strategy causes it to avoid areas of velocity space that TTC will try, so slightly more "sophisticated" crowd behaviors emerge naturally in TTC-based systems. Look up http://motion.cs.umn.edu/PowerLaw if you want more information - this is an anecdote tortured into an extended metaphor, not a tutorial.
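P.P.S. For the curious, here's a minimal sketch of the TTC idea in Python - not the actual ORCA or power-law implementations, just an illustration of the two ingredients the anecdote leans on: predicting the time to collision between two disc-shaped agents, and deriving a repulsive force that falls off as a power law in that time. The constants k and tau0 and the energy form E(tau) = k * exp(-tau/tau0) / tau^2 are illustrative stand-ins for the fitted parameters in the real model; see the link above for the genuine article.

```python
import math

def time_to_collision(p_i, v_i, p_j, v_j, r_i, r_j):
    """Earliest time two disc agents touch, or None if they never collide.

    Solves ||x + v*t|| = R for the relative position x and relative
    velocity v, where R is the sum of the agents' radii.
    """
    x = (p_j[0] - p_i[0], p_j[1] - p_i[1])   # relative position
    v = (v_j[0] - v_i[0], v_j[1] - v_i[1])   # relative velocity
    R = r_i + r_j
    a = v[0] * v[0] + v[1] * v[1]
    b = 2.0 * (x[0] * v[0] + x[1] * v[1])
    c = x[0] * x[0] + x[1] * x[1] - R * R
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                           # parallel or diverging paths
    t = (-b - math.sqrt(disc)) / (2.0 * a)    # first root = first contact
    return t if t > 0.0 else None

def avoidance_force(p_i, v_i, p_j, v_j, r_i=0.5, r_j=0.5, k=1.5, tau0=3.0):
    """Repulsive force on agent i, shrinking as time-to-collision grows.

    Magnitude is -dE/dtau for the illustrative discomfort energy
    E(tau) = k * exp(-tau/tau0) / tau^2; direction pushes agent i
    away from the predicted point of contact.
    """
    tau = time_to_collision(p_i, v_i, p_j, v_j, r_i, r_j)
    if tau is None:
        return (0.0, 0.0)                     # no predicted collision, no force
    mag = k * math.exp(-tau / tau0) / (tau * tau) * (2.0 / tau + 1.0 / tau0)
    # vector from agent i toward the other agent at the moment of contact
    cx = (p_j[0] - p_i[0]) + (v_j[0] - v_i[0]) * tau
    cy = (p_j[1] - p_i[1]) + (v_j[1] - v_i[1]) * tau
    d = math.hypot(cx, cy) or 1.0
    return (-mag * cx / d, -mag * cy / d)
```

The "bold" behavior in the talk falls out of exactly this shape: a distant collision (large tau) produces a nearly negligible force, so the agent keeps walking into the phalanx, and only nearby threats push hard enough to matter.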