
Posts published in “Science”

[twenty twenty-four day ninety-eight]: no, the anthill doesn’t come back stronger and better designed

centaur 0

Above is what looks like a massive anthill at the border of the "lawn" and "forest" parts of our property. It's been getting bigger and bigger over the years, and that slow growth always reminds me of Mr. Morden's comments in Babylon 5 about the Shadows' plan to make lesser races fight:

JUSTIN: "It's really simple. You bring two sides together. They fight. A lot of them die, but those who survive are stronger, smarter and better."
MORDEN: "It's like knocking over an ant-hill. Every new generation gets stronger, the ant-hill gets redesigned, made better."

Babylon 5: Z'ha'dum

But the Shadows were wrong, and what we're seeing there isn't a redesigned anthill: it is a catastrophe, a multigenerational ant catastrophe caused by weather, itself brought to light by a larger, slow-motion human catastrophe caused by climate change.

Humans have farmed, built and burnt for a long time, but only now, in the dawn of the Anthropocene - that period of time where human impacts on climate start to exceed natural variation of climate itself, beginning roughly in the 1900s - have those effects really come back to bite us on a global, rather than local, scale.

For my wife and me, this took the form of fire. Fire was not new in California: friends who lived in homes on ridges complained about their high insurance costs as far back as I can remember. But more and more fires started burning across our area, forcing other friends to move away. Then three burned within five miles of our home, with no end to the drought in sight, and we decided we'd had enough.

We moved to my ancestral home, a place where water falls from the sky, aptly named Greenville. And we moved into a house whose builders knew about rain, and placed it on a hill with carefully designed drainage. They created great rolling lawns, manicured in the traditional Greenville "let's fucking force it with chemicals and lawnmowers to look like it was Astroturf" style, which we are slowly letting go back to nature.

In this grass, and in the absence of pesticides, the ants flourished. But this isn't precisely a natural environment: they're flourishing in an expanse of grass that is wider and more rounded than the rough, ridged forest around it. In the forest, runoff from the rains is channeled into proto-streams leading to the nearby creek; at the edge of the lawn, water from the house and lawn spills out in a flood.

Each heavy rain, the anthills building up in the sloped grass are washed to the mulch beds that mark the boundary of the forest, and there the ants start to re-build. But lighter rains can destroy these more exposed anthills, forcing them to slowly migrate back up into the grass. That had already happened here: that was no longer a live anthill, and unbeknownst to me, I was standing in its replacement.

No worries, for them or me; I noticed the anthill was dead, looked down, and moved off their territory just as the ants were swarming out of their antholes, fit to kill (or at least to annoyingly nibble). But the great red field there, as wide as a man is tall and twice as long, was not a functioning anthill: it was the accumulated wreckage of generation after generation of ant catastrophes.

In the quote, Mr. Morden was wrong: knocking over an anthill doesn't make it come back better designed. Justin got it a little better: the strongest and smartest do often survive a battle - but they walk away with scars, and sometimes the winners may just be the lucky ones. Conflict may not make people better - it can just leave scarred soldiers, wounded refugees and a destroyed landscape.

Now, the Shadows were the villains of the story, but every good villain needs a good soundbite that makes them sound at least a little bit good, and it's worth demolishing this one. "The anthill comes back stronger and better designed" is meant to riff on the survival of the fittest - the notion that creating survival pressure will lead to stronger, smarter, and better individuals.

But evolution doesn't work that way. Those stronger, smarter, and better individuals have to have existed in the population in the first place. Evolution only leads to improvements over time at all if the variation of the population continues to yield increasingly better individuals generation after generation - and that is not at all guaranteed. The actual historical pattern is far closer to the opposite.

Now, people who should know better often claim that evolution has no direction. I think that's because there's a cartoon version of evolution where things tend to get more complex over time, and they want to replace it with another cartoon version of evolution which is blind and random - perhaps spillover from Dawkins' attempts to argue with creationists using his Blind Watchmaker idea.

But that's not how evolution works at all. Evolution does have a direction - just like gravity does. Only at the narrow level of the fundamental laws operating on idealized, homogeneous substrates can we say gravity is symmetric, or evolution is directionless. Once the scope of our investigation expands and the structure of the world gets complex - once symmetry is broken - then gravity clumps matter into planets and gives us "up", and evolution molds organisms into ecosystems and gives us "progress towards complexity".

But the direction of evolution is a lot more like the gradient of air around a planet than it is any kind of "great chain of being". Once an ecosystem exists, increased complexity provides an advantage for a small set of organisms, and as they spread into the ecosystem, a niche is created for even more complex organisms to exceed them. But, just like most of the atmosphere is closest to the surface of a planet, most of the organisms will remain the simplest ones.
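The "gradient of air" picture has a standard quantitative form - the barometric formula for an idealized isothermal atmosphere, in which number density falls off exponentially with height, so most molecules sit nearest the surface:

```latex
% Number density n at height h, for molecular mass m, gravity g,
% Boltzmann constant k_B, and temperature T (isothermal idealization):
n(h) = n_0 \, e^{-m g h / k_B T}
```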

Adding additional selection pressure won't give you more complex organisms: it will give you fewer of them. The more stress on the ecosystem, the harder it is for anything to survive, and the smaller the various niches become; even if the ecosystem still provides enough resources to support complex organisms, the size of the population that can evolve will drop, making it less likely for even more complex ones to arise - and that's assuming it doesn't get so rough that the complex organisms go extinct.
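The niche-shrinking argument can be made concrete with a toy model (entirely my own illustrative sketch, not a real ecological model): give each complexity level a geometrically shrinking carrying capacity, scale every niche down by an ecosystem-wide stress factor, and count how many levels can still support a viable population.

```python
def viable_complexity_levels(base_capacity, decay, stress, min_viable=50):
    """Count how many complexity levels can sustain a viable population.

    Toy model: the niche for complexity level c has carrying capacity
    base_capacity * decay**c (decay < 1, so more complex niches are
    smaller), and stress in [0, 1) scales every niche down by a factor
    of (1 - stress). A level is viable if its stressed capacity still
    meets min_viable.
    """
    levels = 0
    capacity = base_capacity * (1.0 - stress)
    while capacity >= min_viable:
        levels += 1
        capacity *= decay
    return levels

# More stress never yields more viable levels - only fewer.
for stress in (0.0, 0.6, 0.9):
    print(stress, viable_complexity_levels(10000, 0.5, stress))
```

Raising the stress knob only ever prunes the most complex levels first, which is the point: pressure shrinks the top of the ladder rather than extending it.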

Eventually, atoms bouncing around in the atmosphere may fly off into space - just like, eventually, evolution produced a Neil Armstrong who flew to the moon. But pouring energy into the atmosphere may slough the upper layers off into space, leaving a thin remnant closest to the planet - and, so, stressing an ecosystem will not produce more astronauts; it may kill them off and leave everyone down in the muck.

Which gives us a hint to what the Shadows' real plan was. They're portrayed as an ancient learned race, so presumably they knew everything I just shared - but they're also portrayed as the villains, after all, and so they ultimately had a self-serving goal in mind. And if knocking over an anthill doesn't make it come back better designed, then their real goal was to keep kicking over anthills so they themselves would stay on top.

-the Centaur

Pictured: Me, near sunset, taking picture of what I thought was a live anthill - until I looked more closely.

[twenty twenty-four day thirty-six]: accepting reality is not denying rationality


One of the most frustrating things about reading the philosophy of Ayn Rand is her constant evasions of reality. Rand's determinedly objective approach is a bracing blast of fresh air in philosophy, but, often, as soon as someone raises potential limits to a rational approach - or, even, in the cases where she imagines some strawman might raise a potential limit - she denies the limit and launches into unjustified ad hominem attacks.

It reminds me a lot of "conservative" opponents of general relativity - which, right there, should tell you something, as an actual political conservative should have no objection to a hundred-and-twenty-year-old, well-tested physical theory - who are upset because it introduces "relativism" into philosophy. Well, no, actually: Einstein considered calling relativity "invariant theory," because the deep guts of the theory are a quest to formulate physics in terms that are invariant between observers - like the space-time interval ds^2, which is the same no matter how the observers are moving relative to each other.
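For reference, the space-time interval between two nearby events, written here in the (-,+,+,+) sign convention (the overall sign flips in some textbooks): observers in relative motion disagree about dt and dx individually, but compute the same ds^2.

```latex
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
```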

In Rand's case, she and Peikoff admit up front in several places that human reason is fallible and prone to error - but as soon as a specific issue is raised, they either deny that failure is possible or claim that critics are trying to destroy rationality. Among things they claim as infallible products of reason are notions such as existence, identity, and consciousness, deterministic causality, the infallibility of sense perception, the formation of concepts, reason (when properly conducted), and even Objectivism itself.

In reality, all of these things are fallible, and that's OK.

Our perception of what exists, what things are, and even aspects of our consciousness can be fooled, and that's OK, because a rational agent can construct scientific procedures and instruments to untangle the difference between our perception of our phenomenal experience and the nature of reality. Deterministic causality breaks down in our stochastic world, but we can build more solid probabilistic and quantum methods that enable us to make highly reliable predictions even in the face of a noisy world. Our senses can fail, but there is a rich library of error-correcting methods, both in natural systems and in robotics, that help us recover reliable information that is useful enough to act upon with confidence.
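A minimal sketch of one such error-correcting method - a three-fold repetition code with majority-vote decoding, the same trick used for redundant sensors in robotics (the 10% error rate and the sensor counts here are illustrative assumptions):

```python
import random
from collections import Counter

def majority(bits):
    """Decode a repetition code by taking the most common reading."""
    return Counter(bits).most_common(1)[0][0]

def noisy_reading(bit, flip_prob, rng):
    """Simulate a sensor that reports the wrong bit with probability flip_prob."""
    return bit ^ (1 if rng.random() < flip_prob else 0)

rng = random.Random(42)
truth, trials = 1, 10000

# One unreliable sensor, wrong about 10% of the time...
errors_single = sum(noisy_reading(truth, 0.1, rng) != truth
                    for _ in range(trials))
# ...versus a majority vote over three independent readings,
# which fails only when two or more sensors err at once.
errors_voted = sum(
    majority([noisy_reading(truth, 0.1, rng) for _ in range(3)]) != truth
    for _ in range(trials)
)
print(errors_single, errors_voted)
```

Fallible parts, reliably combined: the voted error count comes out several times smaller than the single-sensor count, even though every individual sensor remains just as error-prone.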

As for the Objectivist theory of concepts, it isn't a terrible normative theory of how we might want concepts to work in an ideal world, but it is a terrible theory of how concept formation actually works in the real world, either in the human animal or in how you'd build an engineering system to recognize concepts - Rand's notion of "non-contradictory identification" would in reality fail to give any coherent output in a world of noisy input sensors, and systems built on ideas like Rand's were supplanted by techniques such as support vector machines long before we got neural networks.

And according to Gödel's theorem and related results, reasoning itself must be either incomplete or inconsistent - and evidence of human inconsistency abounds in the cognitive science literature. But errors in reasoning itself can be handled by Pollock's notion of "defeasible" reasoning or Minsky's notion of "commonsense" reasoning, and as for Objectivism itself being something that Rand got infallibly right ... well, we just showed how well that worked out.

Accepting the limits of rationality that we have discovered in reality is not an attack on rationality itself, for we have found ways to work around those limits to produce methods for reaching reliable conclusions. And that's what's so frustrating reading Rand and Peikoff - their attacks on strawmen weaken their arguments, rather than strengthening them, by both denying reality and denying themselves access to the tools we have developed over the centuries to help us cope with reality.

-the Centaur

[twenty twenty-four day thirty-four]: chromodivergent and chromotypical


I sure do love color, but I suck at recognizing it - at least in the same way that your average person does. I'm partially colorblind - and I have to be quick to specify "partial", because otherwise people immediately ask if I can't tell red from green (I can, just not as well as you) or can't see colors at all.

In fact, sometimes I prefer to say "my color perception is deficient" or, even more specifically, "I have a reduced ability to discriminate colors." The actual reality is a little more nuanced: while there are colors I can't distinguish well, my primary deficit is not being able to NOTICE certain color distinctions - certain things just look the same to me - but once the distinctions are pointed out, I can often reliably see them.

This is a whole nother topic on its own, but, the gist is, I have three color detectors in my eyes, just like a person with typical color vision. Just, one of those detectors - I go back and forth between guessing it's the red one or the green one - is a little bit off compared to a typical person's. As one colleague at Google put it, "you have a color space just like anyone else, just your axes are tilted compared to the norm."

The way this plays out is that some color concepts are hard for me to name - I don't want to apply a label to them, perhaps because I'm not consistently seeing people use the same name for those colors. There's one particular nameless color, a particularly blah blend of green and red, that makes me think if there were more people like me, we'd call it "gred" or "reen" the way typical people have a name for "purple".

Another example: there's a particular shade of grey - right around 50% grey - that I see as a kind of army green, again, because one of my detectors is resonating more with the green in the grey. If the world were filled with people like me, we'd have to develop a different set of reference colors.
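My colleague's "tilted axes" description can be sketched directly (a toy illustration of the idea, not actual colorimetry; the rotation angle is an arbitrary assumption): rotate the red-green plane of an (r, g, b) color slightly, and a mid-grey that is neutral for a typical observer lands off the grey axis, with a faint green cast.

```python
import math

def tilt(color, angle=0.1):
    """Rotate the red-green plane of an (r, g, b) color by angle radians,
    modeling an observer whose color axes are tilted relative to the norm."""
    r, g, b = color
    c, s = math.cos(angle), math.sin(angle)
    return (c * r - s * g, s * r + c * g, b)

grey = (0.5, 0.5, 0.5)   # a 50% grey for a typical observer...
tilted = tilt(grey)      # ...is no longer neutral for the tilted observer:
print(tilted)            # its green coordinate now exceeds its red one
```

With angle set to zero the grey stays neutral, which is the "chromotypical" case; the tilt, not a missing detector, is what moves the reference colors.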

So, this made me think that, in parallel to the concepts of "neurotypical and neurodivergent", we could use concepts like "chromotypical and chromodivergent". Apparently I'm not the only one who thinks this: here's an artist who argues that "colorblind" can be discouraging to artists, and other people think we should drop the "typical" in "neurotypical", as it too can be privileging to certain neurotypes.

I'm not so certain I'd go the second route. Speaking as someone who's been formally diagnosed "chromodivergent" (partially red-green colorblind) and is probably carrying around undiagnosed "neurodivergence" (social anxiety disorder with possibly a touch of "adult autism"), I think there's some value to recognizing some degree of "typicality" and "norms" to help us understand conditions.

If you had a society populated with people with color axes like mine and another society populated with "chromotypical" receptors, both societies would get on fine, both with each other and with the world; you'd just have to be careful to use the right set of color swatches when decorating a room. But a person with a larger chromodivergence - say, someone who was wholly red-green colorblind - might be less adaptive than a chromotypical person - say, because they couldn't tell when fruit was ripe.

Nevertheless, even if some chromodivergences or neurodivergences might be maladaptive in a non-civilized environment, prioritizing the "typical" can still lead to discrimination and ableism. For those who don't understand "ableism", it's a discriminatory behavior where "typical" people de-personalize people with "disabilities" and decide to make exclusionary decisions for them without consulting them.

There are great artists who are colorblind - for example, Howard Chaykin. There's no need to discourage people who are colorblind from becoming artists, or to prevent them from trying: they can figure out how to handle that on their own, hiring a colorist or specializing in black-and-white art if they need to.

All you need to do is to decide whether you like their art.

-the Centaur

Pictured: some colorful stuff from my evening research / writing / art run.

[twenty twenty-four day thirty-three]: roll the bones


As both Ayn Rand and Noam Chomsky have said in slightly different ways, concepts and language are primarily tools of thought, not communication. But cognitive science has demonstrated that our access to the contents of our thought is actually relatively poor - we often have an image of what is in our head which is markedly different from the reality, as in the case where we're convinced we remember a friend's phone number but actually have it wrong, or have forgotten it completely.

One of the great things about writing is that it forces you to turn abstract ideas about your ideas into concrete realizations - that is, you may think you know what you think, but even if you think about it a lot, you don't really know the difference between your internal mental judgments about your thoughts and their actual reality. The perfect example is a mathematical proof: you may think you've proved a theorem, but until you write it down and check your work, there's no guarantee that you actually HAVE a proof.

So my recent article on problems with Ayn Rand's philosophy is a good example. I stand by it completely, but I think that many of my points could be refined considerably. I view Ayn Rand's work with regards to philosophy the way that I do Euclid for mathematics or Newton for physics: it's not an accurate model of the world, but it is a stage in our understanding of the world which we need to go through, and which remains profitable even once we go on to more advanced models like non-Euclidean geometry or general relativity. Entire books are written on Newtonian approximations to relativity, and one useful mathematical tool is a "Lie algebra", which enables us to examine even esoteric mathematical objects by looking locally at the Euclidean tangent space generated around a particular point.

So it's important not to throw the baby out with the bathwater with regards to Ayn Rand, and to be carefully specific about where her ideas work and where they fail. For example, there are many, many problems with her approach to the law of identity - the conceptual idea that things are what they are, or A is A - but the basic idea is sound. One would say it is almost tautological, except for the fact that many people seem to ignore it. However, you cannot fake reality in any way whatever - and you cannot make physical extrapolations about reality through philosophical analysis of a conceptual entity like identity.

Narrowing in on a super specific example, Rand tries to derive the law of causality from the law of identity - and it works well, right up to the point where she tries to draw conclusions about it. Her argument goes like this: every existent has a unique nature due to the law of identity: A is A, or things are what they are, or a given existent has a specific nature. What happens to an existent over time - the action of that entity - is THE action of THAT entity, and is therefore determined by the nature of that entity. So far, so good.

But then Rand and Peikoff go off the rails: "In any given set of circumstances, therefore, there is only one action possible to an entity, the action expressive of its identity." It is difficult to grasp the level of evasion which might produce such a confusion of ideas: to make such a statement, one must throw out not just the tools of physics, mathematics and philosophy, but also personal experience with objects as simple as dice.

First, the evasion of personal experience, and how it plays out through mathematics and physics. Our world is filled with entities which may produce one action out of many - not just entities like dice, but even from Rand and Peikoff's own examples, a rattle makes a different sound every time you rattle it. We have developed an entire mathematical formalism to help understand the behavior of such entities: we call them stochastic and treat them with the tools of probability. As our understanding has grown, physicists have found that this stochastic nature is fundamental to the nature of reality: the rules of quantum mechanics essentially say that EVERY action of an entity is drawn from a probability distribution, but for most macroscopic actions this probabilistic nature gets washed out.
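The dice example is easy to check for yourself (a quick illustrative simulation; the roll count is arbitrary): each individual roll is unpredictable, yet the distribution over many rolls is thoroughly lawlike.

```python
import random
from collections import Counter

random.seed(0)  # make the run reproducible

# One entity, one set of circumstances, six possible actions:
rolls = [random.randint(1, 6) for _ in range(60000)]
counts = Counter(rolls)

# No single roll can be predicted, but each face's frequency
# settles close to 1/6 - the stochastic behavior is still lawful.
for face in range(1, 7):
    print(face, round(counts[face] / len(rolls), 3))
```

This is the sense in which stochastic entities remain caused: the distribution, not the individual outcome, is what the entity's nature fixes.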

Next, the evasion of validated philosophical methods. Now, one might imagine Rand and Peikoff saying, "well, the roll of the dice is only apparently stochastic: in actuality, the die when you throw it is in a given state, which determines the single action that it will take." But this is a projective hypothesis about reality: it takes a set of concepts, determines their implications, and then states how we expect those implications to play out in reality. Reality, however, is not required to oblige us.

This form of philosophical thinking goes back to the Greeks: the notion that if you begin with true premises and proceed through true inference rules, you will end up with a true conclusion. But this kind of philosophical thinking is invalid - does not work in reality - because any one of these elements - your concepts, your inference rules, or your mapping between conclusions and states of the world - may be specious: appearing to be true without actually reflecting the nuance of reality. The major achievement of the scientific method is to fix this problem by replacing "if you reach a contradiction, check your premises" with "if you reach a conclusion, check your work" - or, in the words of Richard Feynman, "The sole test of any idea is experiment."

Let's get really concrete about this. Rand and Peikoff argue "If, under the same circumstances, several actions were possible - e.g., a balloon could rise or fall (or start to emit music like a radio, or turn into a pumpkin), everything else remaining the same - such incompatible outcomes would have to derive from incompatible (contradictory) aspects of the entity's nature." This statement is wrong on at least two levels, physical and philosophical - and much of the load-bearing work is in the suspicious final dash.

First, physical: we actually do indeed live in a world where several actions are possible for an entity - this is one of the basic premises of quantum mechanics, which is one of the most well-tested scientific theories in history. For each entity in a given state, a set of actions are possible, governed by a probability amplitude over those states: when the entity interacts with another entity in a destructive way the probability amplitude collapses into a probability distribution over the actions, one of which is "observed". In Rand's example, the balloon's probability amplitude for rising is high, falling is small, emitting radio sounds is still smaller, and turning into a pumpkin is near zero (due to the vast violation of conservation of mass).
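In textbook notation, this collapse step is the Born rule (a standard statement of quantum mechanics, with the a_i standing in for the balloon's possible outcomes): each outcome's probability is the squared magnitude of its amplitude, and the probabilities over all possible outcomes sum to one.

```latex
P(a_i) = \left| \langle a_i \mid \psi \rangle \right|^2 ,
\qquad \sum_i P(a_i) = 1
```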

If one accepts this basic physical fact about our world - that entities that are not observed exist in a superposition of states governed by probability amplitudes, and that observations involve probabilistically selecting a next state from the resulting distribution - one can create amazing technological instruments and extraordinary scientific predictions - lasers and integrated circuits and quantum tunneling and prediction of physical variables with a precision of twelve orders of magnitude - a little bit like measuring the distance between New York and Los Angeles with an error less than a thousandth of an inch.

But Rand's statement is also philosophically wrong, and it gets clearer if we take out that distracting example: "If, under the same circumstances, several actions were possible, such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." What's wrong with this? There's no warrant to this argument. A warrant is the thing that connects the links in a reasoning chain - an inference rule in a formal system, or a more detailed explanation of the reasoning step in question.

But there is no warrant possible in this case, only a false lurking premise. The erroneous statement is that "such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." Why? Why can't an entity's nature be to emit one of a set of possible actions, as in a tossed coin or a die? Answer: Blank out. There is no good answer to this question, because there are ready counterexamples from human experience, which we have processed through mathematics, and ultimately determined through the tools of science that, yes, it is the nature of every entity to produce one of a set of possible outcomes, based on a probability distribution, which itself is completely lawlike and based entirely on the entity's nature.

You cannot fake reality in any way whatever: this IS the nature of entities, to produce one of a set of actions. This is not a statement that they are "contradictory" in any way: this is how they behave. This is not a statement that they are "uncaused" in any way: the probability amplitude must be non-zero in a space in order for an action to be observed, and it is a real physical entity with energy content, not merely a mathematical convenience, that leads to the observation. And it's very likely not sweeping under the rug some hidden mechanism that actually causes it: while the jury is still out on whether quantum mechanics is a final view of reality, we do know due to Bell's theorem that there are no local "hidden variables" behind the curtain (a theorem that had been experimentally validated as of the time of Peikoff's book).

So reality is stochastic. What's wrong with that? Imagine a correct version of Ayn Rand's earlier statement: "In any given set of circumstances, therefore, there is only one type of behavior possible for an entity, the behavior expressive of its identity. This behavior may result in one of several outcomes, as in the rolling of a die, but the probability distribution over that set of outcomes is the distribution that is caused and necessitated by the entity's nature." Why didn't Peikoff and Rand write something like that?

We have a hint in the next few paragraphs: "Cause and effect, therefore, is a universal law of reality. Every action has a cause (the cause is the nature of the entity that acts); and the same cause leads to the same effect (the same entity, under the same circumstances, will perform the same action). The above is not to be taken as a proof of the law of cause and effect. I have merely made explicit what is known implicitly in the perceptual grasp of reality." That sounds great ... but let's run the chain backwards, shall we?

"We know implicitly in the perceptual grasp of reality a law which we might explicitly call cause and effect. We cannot prove this law, but we can state that the same entity in the same circumstances will perform the same action - that is, the same cause leads to the same effect. Causes are the nature of the entities that act, and every action has a cause. Therefore, cause and effect is a universal law of reality."

I hope you can see what's wrong with this, but if you don't, I'm agonna tell you, because I don't believe in the Socratic method as a teaching tool. First and foremost, our perceptual grasp of reality is very shaky: massive amounts of research in cognitive science reveal a nearly endless list of biases and errors, and the history of physics has been one of replacing erroneous perceptions with better laws of reality. One CANNOT go directly from the implicit knowledge of perceptual reality to any actual laws, much less universal ones: we need experiment and the tools of physics and cognitive science to do that.

But even from a Randian perspective this is wrong, because it is an argument from the primacy of consciousness. One of the fundamental principles of Objectivist philosophy is the primacy of existence over consciousness: the notion that thinking a thing does not make it so. Now, this is worth a takedown of its own - it is attempting to draw an empirically verifiable physical conclusion from a conceptual philosophical argument, which is invalid - but, more or less, I think Rand is basically right that existence is primary over consciousness. Yet above, Rand and Peikoff purport to derive a universal law from perceptual intuition. They may try to call it "implicit knowledge" but perception literally doesn't work that way.

If they admit physics into their understanding of the law of causality, they have to admit you cannot directly go from a conceptual analysis of the axioms to universally valid laws, but must subject all their so-called philosophical arguments to empirical validation. But that is precisely what you have to do if you are working in ontology or epistemology: you MUST learn the relevant physics and cognitive science before you attempt to philosophize, or you end up pretending to invent universal laws that are directly contradicted by human experience.

Put another way, whether you're building a bridge or a philosophy, you can't fake reality in any way whatsoever, or, sooner or later, the whole thing will come falling down.

-the Centaur

[twenty twenty-four day thirty-two]: if you do what you’ve always done

Something new

"If you do what you've always done, you'll get what you've always gotten," or so the saying goes.

That isn't always true - ask my wife what it's like for a paint company to silently change the formula on a product right when she's in the middle of a complicated faux finish that depended on the old formula's chemical properties - but there's a lot of wisdom to it.

It's also true that deciding takes work. When a buddy of mine and I finished 24 Hour Comic Day one year and were heading to breakfast, he said, "I don't want to go anyplace new or try anything new, because I have no brains left. I want to go to a Denny's and order something that I know will be good, so I don't have to think about it."

But as we age, we increasingly rely on past decisions - so-called crystallized intelligence, an increasingly vast but increasingly rigid collection of wisdom. If we don't want to get frozen, we need to continue exercising the muscle of trying things that are new.

At one of my favorite restaurants, I round-robin through the same set of menu items. But this time, I idly flipped the menu over to the back page I never visit and saw a burrito plate whose fillings were simmered in beer. I mean, what! And the server claimed it was one of the best things on the menu, a fact I can confirm.

It can be scary to step outside our circle. But if you do what you've always done, you'll miss out on opportunities to find your new favorite.

-the Centaur

[twenty twenty-four day thirty-one]: to be or not to be in degree


I've recently been having fun with a new set of "bone conduction" headphones, walking around the nearby forest while listening to books on tape [er, CD, er, MP3, er, streaming via Audible]. Today's selection was from Leonard Peikoff's Objectivism: The Philosophy of Ayn Rand. Listening to the precision with which they define concepts is wonderful - it's no secret that I think Ayn Rand is one of the most important philosophers that ever lived - but at the same time they have some really disturbing blind spots.

And I don't mean in the political sense in which many people find strawman versions of Rand's conclusions personally repellent, and therefore reject her whole philosophy without understanding the good parts. No, I mean that, unfortunately, Ayn Rand and Leonard Peikoff frequently make specious arguments - arguments that on the surface appear logical, but which actually lack warrants for their conclusions. Many of these seem tied to a desire to appear emotionally objective by demanding an indefensibly precise base for their arguments, rather than standing on the more solid ground of accurate, if fuzzier, concepts - concepts which actually exist in a broader set of structures that are more objective than their naive pseudo-objective counterparts.

Take the notion that "existence exists". Peikoff explains the foundation of Ayn Rand's philosophy to be the Randian axioms: existence, identity, and consciousness - that is, there is a world, things are what they are, and we're aware of them. I think Rand's take on these axioms is so important that I use her words to label two of them in my transaxiomatic catalog of axioms: EE, "existence exists," AA, "A is A", and CC, where Rand doesn't have a catchy phrase, but let's say "creatures are conscious". Whether these are "true", in their view, is less important than that they are validated as soon as you reach the level of having a debate: if someone disagrees with you about the validity of the axioms, there's no meaningful doubt that you and they exist, that you're both aware of the axioms, and that they have a nature which is being disputed.

Except ... hang on a bit. To make that very argument, Peikoff presents a condensed dialog between the defender of the axioms, A, and a denier of the axioms, B, quickly coming to the conclusion that someone who exists, is aware of your opinions, and is disagreeing with their nature specifically by denying that things exist, that people are aware of anything, and that things have a specific nature is ... probably someone you shouldn't spend your time arguing with. At the very best, they're trapped in a logical error; at the worst, they're either literally delusional or arguing in bad faith. That all sounds good. But A and B don't exist.

More properly, the arguing parties A and B only exist as hypothetical characters in Peikoff's made-up dialog. And here's where the entire edifice of language-based philosophy starts to break down: what is existence, really? Peikoff argues you cannot define existence in terms of other things, but can only do so ostensively, by pointing to examples - but this is not how language works, either in day-to-day life or in philosophy, which is why science has abandoned language in favor of mathematical modeling. If you're intellectually honest, you should agree that Ayn Rand and Leonard Peikoff exist in a way that A and B in Peikoff's argument do not.

Think about me in relationship to Sherlock Holmes. I exist in a way that Sherlock Holmes does not. I also exist in a way which Arthur Conan Doyle does not. Sherlock Holmes himself exists in a way that an alternate version of Holmes from a hypothetical unproduced TV show does not, and I, as a real concrete person typing these words, exist in a way that the generic idea of me does not. One could imagine an entire hierarchy of degrees of existence, from the absolute nothingness of the absence of a thing or concept, to contradictions in terms that could be named but do not exist, to hypothetical versions of Sherlock Holmes that do not exist, to Sherlock Holmes, who only exists as a character, to Arthur Conan Doyle who once existed, to me who existed as of this writing, to the concrete me writing this now, to existence itself, which exists whether I do or not.

Existence is what Marvin Minsky calls a "suitcase word": it's a stand-in for a wide variety of other distinct but usefully similar concepts, from conceptual entities to physical existents to co-occurring physical objects in the same interacting region of space-time. And it's no good attempting to fall back on the idea that Ayn Rand was actually trying to define "existence" as the sum total of "existents", because pinning down "existence" or "existent" outside of an ostensive "I can point at it" definition is precisely what Rand and Peikoff don't want to do - first, because they really do mean it to be "everything", in almost precisely the same way that Carl Sagan uses the word "Cosmos" to refer to everything that ever is, was, or will be, and second, because if it loses its function as a suitcase word, it is no longer useful in their arguments.

In reality, if you say "existence exists", and someone attempts to contradict you, it does you no good to say "well, you're contradicting yourself, because you had to exist to even say that". You do need to actually put your money where your mouth is and say what concrete propositions you intend to draw from the terms "existence" and "exists" and the floating abstraction "existence exists" - and so do they. If you can't do this, you're not actually arguing with them; you're talking past them; if they can't do this, they're at best not arguing coherently, and at worst not arguing in good faith. If you both DO this, however, you may come to profitable conclusions, such as, "yes, we agree that SOMETHING exists, at least to the level where we had this debate; but we can also agree that the word existence should not extend to this unwanted implication."

This approach - reinforcing your axioms with sets of elaborations, models and even propositions that are examples of the axioms, along with similar sets that should be considered counterexamples - is what I call the "transaxiomatic" approach. Rather than simply assuming the axioms are unassailable and attempting to pseudo-define their terms by literally waving one's hand around and saying "this is what I mean by existence" - and simply hoping people will "get it" - we need to reinforce the ostensive concretes we use to define the axioms with more carefully refined abstractions that tell us what we mean when we use the terms in the axioms, and what propositions we hope people should derive from them.

This is part of an overall move from the philosophical way of tackling problems towards a more scientific one. And it's why I think Ayn Rand was, in a sense, too early, and too late. She's too early in the sense that many of the things that she studied philosophically - ontology and epistemology - are no longer properly the domain of philosophy, but have been supplanted - firmly supplanted - by findings from science: ontology is largely subsumed into physics and cosmology, and epistemology is largely subsumed into cognitive science and artificial intelligence. That's not to say that philosophy is done with those areas, but instead that philosophy has definitively lost its primary position within them: one must first learn the science of what is known in those areas before trying to philosophize about it. One cannot meaningfully say anything at all about epistemology without understanding computational learning theory. And she's too late in that she was trying to DO philosophy at a point in time where her subject matter was already starting to become science. Introduction to Objectivist Epistemology is an interesting book, but it was written a decade after "The Magical Number Seven, Plus or Minus Two" and two decades before the "Probably Approximately Correct" theory of learning, and you will learn much more about epistemology by looking up the "No Free Lunch" learning theorems and pulling on that thread than from anything Ayn Rand ever wrote (or, try reading "Probability Theory: The Logic of Science" for a good one-volume starting point).

Which is not to say that Ayn Rand's philosophizing is not valuable - it is almost transcendently valuable - but if she were writing today, many of the more conceptually problematic structures of her philosophy could simply be dropped in favor of references to the rich conceptual resources of cognitive science and probability theory, and then she could have gotten on with convincing people that you can indeed derive "ought" from "is".

Or, maybe, just maybe, she might have done science in addition to philosophy, and perhaps even had something scientific to contribute to the great thread rolling forward from Bayes and Boole.

Existence does exist. But before you agree, ask, "What do you really mean by that?"

-the Centaur

Pictured: Loki, existing in a fuzzy state.

Congratulations Richard Branson (and/or Jeff Bezos)

centaur 0
branson in spaaace

Congratulations, Sir Richard Branson, on your successful space flight! (Yes, yes, I *know* it's technically just upper atmosphere, I *know* there's no path to orbit (yet) but can we give the man some credit for an awesome achievement?) And I look forward to Jeff Bezos making a similar flight later this month.

Now, I stand by my earlier statement: the way you guys are doing this, a race, is going to get someone killed, perhaps one of you guys. A rocketship is not a racecar, and moves into realms of physics where we do not have good human intuition. Please, all y'all, take it easy, and get it right.

That being said, congratulations on being the first human being to put themselves into space as part of a rocket program that they themselves set in motion. That's an amazing achievement, no-one can ever take that away from you, and maybe that's why you look so damn happy. Enjoy it!

-the Centaur

P.S. And day 198, though I'll do an analysis of the drawing at a later time.

RIP Jeff Bezos (and/or Richard Branson)

centaur 0
rip jeff bezos

You know, Jeff Bezos isn’t likely to die when he flies July 20th. And Richard Branson isn’t likely to die when he takes off at 9am July 11th (tomorrow morning, as I write this). But the irresponsible race these fools have entered will eventually get somebody killed, as surely as Elon Musk’s attempt to build self-driving cars with cameras rather than lidar was doomed to (a) kill someone and (b) fail. It’s just, this time, I want to be caught on record saying I think this is hugely dangerous, rather than grumbling about it to my machine learning brethren.

Whether or not a spacecraft is ready to launch is not a matter of will; it’s a matter of natural fact. This is actually the same as many other business ventures: whether we’re deciding to create a multibillion-dollar battery factory or simply open a Starbucks, our determination to make it succeed has far less to do with its success than the realities of the market—and its physical situation. Either the market is there to support it, and the machinery will work, or it won’t.

But with normal business ventures, we’ve got a lot of intuition, and a lot of cushion. Even if you aren’t Elon Musk, you kind of instinctively know that you can’t build a battery factory before your engineering team has decided what kind of battery you need to build, and even if your factory goes bust, you can re-sell the land or the building. Even if you aren't Howard Schultz, you instinctively know it's smarter to build a Starbucks on a busy corner rather than the middle of nowhere, and even if your Starbucks goes under, it won't explode and take you out with it.

But if your rocket explodes, you can't re-sell the broken parts, and it might very well take you out with it. Our intuitions do not serve us well when building rockets or airships, because they're not simple things operating in human-scaled regions of physics, and we don't have a lot of cushion with rockets or self-driving cars, because they're machinery that can kill you, even if you've convinced yourself otherwise.

The reasons behind the likelihood of failure are manifold here, and worth digging into in greater depth; but briefly, they include:

  • The Paradox of the Director's Foot, where a leader's authority over safety personnel - and their personal willingness to take on risk - ends up short-circuiting safety protocols and causing accidents. This actually happened to me personally when two directors in a row had a robot run over their foot at a demonstration, and my eagle-eyed manager recognized that both of them had stepped into the safety enclosure to question the demonstrating engineer, forcing the safety engineer to take over audience questions - and all three took their eyes off the robot. Shoe leather degradation then ensued, for both directors. (And for me too, as I recall).
  • The Inexpensive Magnesium Coffin, where a leader's aesthetic desire to have a feature - like Steve Jobs' desire for a magnesium case on the NeXT machines - led them to ignore feedback from engineers that the case would be much more expensive. Steve overrode his engineers ... and made the NeXT more expensive, just as they said it would, because wanting the case didn't make it cheaper. That extra cost led to the product's demise - that's why I call it a coffin. Elon Musk's insistence on using cameras rather than lidar on his self-driving cars is another Magnesium Coffin - an instance of ego and aesthetics overcoming engineering and common sense, which has already led to real deaths. I work in this precise area - teaching robots to navigate with lidar and vision - and vision-only navigation is just not going to work in the near term. (Deploy lidar and vision, and you can drop lidar within the decade with the ground-truth data you gather; try going vision alone, and you're adding another decade).
  • Egotistical Idiot's Relay Race (AKA Lord Thomson's Suicide by Airship). Finally, the biggest reason for failure is the egotistical idiot's relay race. I wanted to come up with some nice, catchy parable name to describe why the Challenger astronauts died, or why the USS Macon crashed, but the best example is a slightly older one, the R101 disaster, which is notable because the man who started the R101 airship program - Lord Thomson - also rushed the program so he could make a PR trip to India, with the consequence that the airship was certified for flight without completing its endurance and speed trials. As a result, on that trip to India - its first long distance flight - the R101 crashed, killing 48 of the 54 passengers - Lord Thomson included. Just to be crystal clear here, it's Richard Branson who moved up his schedule to beat Jeff Bezos' announced flight, so it's Sir Richard Branson who is most likely up for a Lord Thomson's Suicide Award.

I don't know if Richard Branson is going to die on his planned spaceflight tomorrow, and I don't know that Jeff Bezos is going to die on his planned flight on the 20th. I do know that both are in an Egotistical Idiot's Relay Race for even trying, and the fact that they're willing to go up themselves, rather than sending test pilots, safety engineers or paying customers, makes the problem worse, as they're vulnerable to the Paradox of the Director's Foot; and with all due respect to my entire dot-com tech-bro industry, I'd be willing to bet the way they're trying to go to space is an oversized Inexpensive Magnesium Coffin.

-the Centaur

P.S. On the other hand, when Space X opens for consumer flights, I'll happily step into one, as Musk and his team seem to be doing everything more or less right there, as opposed to Branson and Bezos.

P.P.S. Pictured: Allegedly, Jeff Bezos, quick Sharpie sketch with a little Photoshop post-processing.

Day 168

centaur 0
eckener sketch

Dr. Hugo Eckener, the "Pope" of airship pilots. Even though I carefully noted the angle of the head, I nevertheless tilted the eyebrows wrong - and even caught myself doing it. But, even though I saw the problem, and did some work to correct it, it was too late to recreate the fullness of the face:

eckener headshot

The comparison shows a 5 degree tilt and 10 degree horizontal squash, but, frankly, there's no way to make everything line up no matter how you stretch it, as the nose is misproportioned compared to the eyes, which led to the dent in the face on the left side of the page compared to the original.

eckener comparison

Ah well. Drawing every day.

-the Centaur

The Science of Airships at Clockwork Alchemy 2021

taidoka 0
the science of airships
Hail, fellow adventurers! Clockwork Alchemy goes virtual this year, and tomorrow at 10am I'll be on a panel on the Science of Airships with moderator Laurel Anne Hill and fellow panelists Madeline Holly-Rosing and Mike Tierney. We'll be talking about everything we can fit in 45 minutes, including:
  • Zeppelins, dirigibles and blimps: what do all these terms mean?
  • The history of airships, starting with an airborne chicken.
  • The science of airships, including innovations for flight.
  • The failures of airships - what brought them down?
  • The future of airships - airships on the drawing board!

Sign up here, and the full schedule is also online.

We're the first panel, at 10am Saturday, and our panelists include:

Laurel Anne Hill [Moderator]

Laurel Anne Hill—author and former underground storage tank operator—grew up in San Francisco, with more dreams of adventure than good sense or money. Her close brushes with death, love of family, respect for honor and belief in a higher power continue to influence her writing and her life. She has authored two award-winning novels: The Engine Woman’s Light (Sand Hill Review Press), a gripping spirits-meet-steampunk, coming-of-age heroic journey, and Heroes Arise. Laurel’s published short stories and nonfiction pieces total over forty. She has served as a program participant at many science fiction/fantasy conventions, including the World Science Fiction Con and World Fantasy Con. She’s the Literary Stage Manager for the annual San Mateo County Fair, a speaker, writing contest judge, and editor. And she’s even engineered a steam locomotive. For more about her, go to

Madeleine Holly-Rosing

Madeleine Holly-Rosing is the writer/creator of the award-winning Boston Metaphysical Society graphic novel series. Previously self-published, it is now published by Source Point Press. The series also includes the award-winning prequel novel, A Storm of Secrets, and an anthology. After running eight successful crowdfunding campaigns, she published the book, Kickstarter for the Independent Creator. Other comic anthology projects include: The Scout (The 4th Monkey), The Sanctuary (The Edgar Allan Poe Chronicles), The Marriage Counselor (Cthulhu is Hard to Spell), The Glob (Night Wolf), The Infinity Tree (Menagerie: Declassified), and the upcoming The Birth (Stan Yak Vampire Anthology).

Michael Tierney

Michael Tierney writes steampunk-laced alternative historical fiction stories from his Victorian home in Silicon Valley. After writing technical and scientific publications for many years, he turned his sights to more imaginative genres. Trained as a chemist, he brings an appreciation of both science and history to his stories. His latest novel is Mr. Darwin’s Dragon. Visit his blog at

Anthony Francis

By day, Anthony Francis teaches robots to learn; by night he writes science fiction and draws comic books. Anthony’s best known for his Skindancer urban fantasy series of novels including the Epic eBook Award winner Frost Moon and its sequels Blood Rock and Liquid Fire, all following the misadventures of magical tattoo artist Dakota Frost trying to raise her weretiger daughter Cinnamon in Atlanta.

Anthony also writes the Jeremiah Willstone steampunk series, following a young female soldier in a world where women’s liberation happened a century early – and so, with twice as many brains working on hard problems, the Victorians invented rayguns and time travel. In addition to her debut novel Jeremiah Willstone and the Clockwork Time Machine, Jeremiah appears in a dozen other stories, including “Steampunk Fairy Chick” in the UnCONventional anthology.

Anthony is co-editor of the anthology Doorways to Extra Time and a co-founder of Thinking Ink Press, publisher of the steampunk anthologies Twelve Hours Later, Thirty Days Later, and Some Time Later. He’s the artist of the webcomic fanu fiku and he’s co-author of the 24 Hour Comic Day Survival Guide. He’s participated in National Novel Writing Month and its related challenges over 20 times, recently cracking one million words written in Nano.

Anthony lives in San Jose with his wife and cats, but his heart will always belong in Atlanta. To learn more about Dakota Frost, visit or; to learn more about Jeremiah Willstone, visit; to learn more about Anthony and his appearances, visit his blog

You can also take a look at my previous presentations on the science of airships, which I've been doing on and off for about 10 years now, for more details ...

Hope to see you virtually there, or in the air!

-the Centaur

It’s been a long time since I’ve thrown a book …

taidoka 0
chuck that junk

Yeah, so that happened on my attempt to get some rest on my Sabbath day. I'm not going to cite the book - I'm going to do the author the courtesy of re-reading the relevant passages to make sure I'm not misconstruing them, but I'm not going to wait to blog my reaction - but what caused me to throw this book, an analysis of the flaws of the scientific method, was this bit.

Imagine an experiment with two possible outcomes: one favoring the new theory (cough EINSTEIN) and one favoring the old (cough NEWTON). Three instruments are set up. Two report numbers consistent with the new theory; the third one - missing parts, possibly configured improperly, and producing noisy data - matches the old.

Wow! News flash: any responsible working scientist would say these results favored the new theory. In fact, if they were really experienced, they might have even thrown out the third instrument entirely - I've learned, based on red herrings from bad readings, that it's better not to look too closely at bad data.

What did the author say, however? Words to the effect: "The scientists ignored the results from the third instrument which disproved their theory and supported the original, and instead, pushing their agenda, wrote a paper claiming that the results of the experiment supported their idea."

Pushing an agenda? Wait, let me get this straight, Chester Chucklewhaite: we should throw out two results from well-functioning instruments that support theory A in favor of one result from an obviously messed-up instrument that supports theory B - oh, hell, you're a relativity doubter, aren't you?

Chuck-toss. I'll go back to this later, after I've read a few more sections of E. T. Jaynes's Probability Theory: The Logic of Science as an antidote.

-the Centaur

P.S. I am not saying relativity is right or wrong, friend. I'm saying the responsible interpretation of those experimental results as described would be precisely the interpretation those scientists put forward - though, in all fairness to the author of this book, the scientist involved appears to have been a super jerk.

Day 051

centaur 0
mount tabor sketch

Mount Tabor, sketched to commemorate the transfiguration of Jesus, that moment when Jesus is transformed on a mountaintop as he communes with Moses and Elijah, and Peter somehow loses a screw and decides it's a great time to start building houses. As Reverend Karen of St. Stephens in-the-Field and St. John the Divine memorably said in today's sermon, this was the moment that the disciples went from knowing Jesus only as a human teacher they admired to seeing him as touched with divinity.

(And speaking as a religious person from a scientific perspective, this is a great example of why there always will be a gap between science and religion: even if the event actually happened exactly as described, we're unlikely to ever prove so scientifically, since it is a one-time event that cannot be probed with replicable experiments; the events of the day, even if true, really do have to be taken purely on faith. This is, of course, assuming that tomorrow someone doesn't invent a device for reviewing remote time.)

Roughed on Strathmore, then rendered on tracing paper, based on the following shot taken in 2011:

צילם: אלי זהבי, כפר תבור, CC BY 2.5, via Wikimedia Commons (Author: Eli Zehavi, Kfar Thabor)

I mean, look at that. That mountain is just begging for God to do something amazing there. And if God doesn't want it, the Close Encounters mothership and H.P. Lovecraft are top of the waitlist.

It really is proving useful to ink my own rough sketches by hand, then to trace my own art. It is interesting to me though how I vertically exaggerated the mountain when I drew it, which probably explains why a few things kept not lining up the way that I wanted them to. Still ... drawing every day.

-the Centaur

P.S. And yes, I accidentally drew the Ascension rather than the Transfiguration, which I guess is fine, because the Mount of Olives looks harder to draw. Check out that 2,000 year old tree though.

What is “Understanding”?

taidoka 1
When I was growing up - or at least when I was a young graduate student in a Schankian research lab - we were all focused on understanding: what did it mean, scientifically speaking, for a person to understand something, and could that be recreated on a computer? We all sort of knew it was what we'd call nowadays an ill-posed problem, but we had a good operational definition, or at least an operational counterexample: if a computer read a story and could not answer the questions that a typical human being could answer about that story, it didn't understand it at all.

But there are at least two ways to define a word. What I'll call a practical definition is what a semanticist might call the denotation of a word: a narrow definition, one which you might find in a dictionary, which clearly specifies the meaning of the concept, like a bachelor being an unmarried man. What I'll call a philosophical definition covers the connotations of a word: the vast web of meanings around the core concept, the source of the fine sense of unrightness that one gets from describing Pope Francis as a bachelor, the nuances of meaning embedded in words that Socrates spent his time pulling out of people, before they went and killed him for being annoying.

It's those connotations of "understanding" that made all us Schankians very leery of saying our computer programs fully "understood" anything, even as we were pursuing computer understanding as our primary research goal.

I care a lot about understanding - deep understanding - because, frankly, I cannot effectively do my job of teaching robots to learn if I do not deeply understand robots, learning, computers, the machinery surrounding them, and the problem I want to solve; when I do not understand all of these things, I stumble in the dark, I make mistakes, and end up sad. And it was in pursuing a deeper understanding of deep learning that I got a deeper insight into deep understanding.
I was "deep reading" the Deep Learning book (a practice in which I read, or re-read, a book I've read, working out all the equations in advance before reading the derivations), in particular section 5.8.1 on Principal Components Analysis, and the authors made the same comment I'd just seen in the Hands-On Machine Learning book: "the mean of the samples must be zero prior to applying PCA." Wait, what? Why? I mean, thank you for telling me, I'll be sure to do that, but, like ... why?

I didn't follow up on that question right away, because the authors also tossed off an offhand comment like, "XᵀX is the unbiased sample covariance matrix associated with a sample x," and I'm like, what the hell, where did that come from? I had recently read the section on variance and covariance but had no idea why this would be associated with the transpose of the design matrix X multiplied by X itself.

(In case you're new to machine learning: if x stands for an example input to a problem, say a list of the pixels of an image represented as a column of numbers, then the design matrix X is all the examples you have, but each example listed as a row. Perfectly not confusing? Great!)

So, since I didn't understand why Var[x] = XᵀX, I set out to prove it myself. (Carpenters say, measure twice, cut once, but they'd better have a heck of a lot of measuring and cutting under their belts - moreso, they'd better know when to cut and measure before they start working on your back porch, or you and they will have a bad time. Same with trying to teach robots to learn: it's more than just practice; if you don't know why something works, it will come back to bite you, sooner or later, so dig in until you get it.)

And I quickly found that the "covariance matrix of a variable x" was a thing, and started to intuit how the matrix multiplication would produce it. This is what I'd call surface-level understanding: going forward from the definitions to obvious conclusions.
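As a quick sanity check of that surface-level reading, the identity is easy to verify numerically: for data whose sample mean is zero, XᵀX scaled by 1/(n-1) matches the unbiased sample covariance. Here's a minimal NumPy sketch - the variable names and toy data are mine, not the book's, and I'm assuming the unbiased (n-1) normalization throughout:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Design matrix X: 500 examples as rows, 3 features as columns.
X = rng.normal(size=(n, 3))
X -= X.mean(axis=0)   # force the sample mean to be exactly zero

# For zero-mean data, X^T X / (n - 1) equals the unbiased sample covariance.
by_hand = X.T @ X / (n - 1)
by_numpy = np.cov(X, rowvar=False)   # NumPy's unbiased estimator

print(np.allclose(by_hand, by_numpy))  # True
```

Note the `rowvar=False`: `np.cov` otherwise treats rows, not columns, as the variables, which silently transposes the whole question.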
I knew the definition of matrix multiplication, and I'd just re-read the definition of covariance matrices, so I could see these would fit together. But as I dug into the problem, it struck me: true understanding is more than just going forward from what you know. "The brain does much more than just recollect; it inter-compares, it synthesizes, it analyzes, it generates abstractions" - thank you, Carl Sagan. But this kind of understanding is a vast, ill-posed problem - meaning, a problem without a unique and unambiguous solution.

But as I was continuing to dig through the problem, reading through the sections I'd just read on "sample estimators," I had a revelation. (Another aside: "sample estimators" use the data you have to predict data you don't, like estimating the height of males in North America from a random sample of guys across the country; "unbiased estimators" may be wrong, but their errors are grouped around the true value.)

The formula for the unbiased sample estimator of the variance actually doesn't look quite like the matrix transpose formula - but it depends on the unbiased estimator of the sample mean. Suddenly, I felt that I understood why PCA data had to have a mean of 0: not by driving forward from known facts and connecting their inevitable conclusions, but by driving backwards from known facts to hypothesize a connection which I could explore and see.

I even briefly wrote a draft of the ideas behind this essay - then set out to prove what I thought I'd seen. Setting the mean of the samples to zero made the sample mean drop out of the sample variance - and then the matrix multiplication formula dropped out. Then I knew I understood why PCA data had to have a mean of 0 - or how to rework PCA to deal with data which had a nonzero mean. This is what I'd call deep understanding: reasoning backwards from what we know to provide reasons for why things are the way they are.
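You can also see that conclusion geometrically. Here's a bare-bones PCA via SVD on toy data I made up (a sketch, not a production PCA): with a large nonzero mean, the "first principal component" mostly points at the mean; subtract the mean first, and it tracks the actual direction of variation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: strong variation roughly along the x-axis,
# shifted far from the origin so the mean dominates.
base = rng.normal(size=(400, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])
X = base + np.array([50.0, 50.0])

def first_pc(M):
    # The first right singular vector of M is its first principal axis.
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[0]

pc_raw = first_pc(X)                        # no centering
pc_centered = first_pc(X - X.mean(axis=0))  # mean removed first

# Uncentered, the leading axis just points at the mean, near (1,1)/sqrt(2);
# centered, it lies along the data's true long axis, near (1, 0).
print(np.abs(pc_raw))       # roughly [0.71, 0.71]
print(np.abs(pc_centered))  # roughly [1.0, 0.07]
```

That first print is the "redesigned anthill" of PCA without centering: not a principal component at all, just an arrow at wherever your data happens to sit.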
A recent book on science I read said that some regularities, like the length of the day, may be predictive, but other regularities, like the tides, cry out for explanation. And once you understand Newton's laws of motion and gravitation, the mystery of the tides is readily solved - the answer falls out of inertia, angular momentum, and gravitational gradients. With apologies to Larry Niven, of course a species that understands gravity will be able to predict tides.

The brain does do more than just remember and predict to guide our next actions: it builds structures that help us understand the world on a deeper level, teasing out rules and regularities that help us not just plan, but strategize. Detective Benoit Blanc from the movie Knives Out claimed to "anticipate the terminus of gravity's rainbow" to help him solve crimes: realizing how gravity makes projectiles arc, using that to understand why the trajectory must be the observed parabola, and strolling to the target.

So I'd argue that true understanding is not just forward-deriving inferences from known rules, but also backward-deriving causes that can explain behavior. And this means computing the inverse of whatever forward prediction matrix you have, which is a more difficult and challenging problem, because that matrix may not have a well-defined inverse.

So true understanding is indeed a deep and interesting problem! But, even if we teach our computers to understand this way ... I suspect that this won't exhaust what we need to understand about understanding. For example: the dictionary definitions I've looked up don't mention it, but the idea of seeking a root cause seems embedded in the word "under-standing" itself ... which makes me suspect that the other half of the word, standing, itself might hint at the stability, the reliability of the inferences we need to be able to make to truly understand anything.

I don't think we've reached that level of understanding of understanding yet.
-the Centaur

Pictured: Me working on a problem in a bookstore. Probably not this one.

Work, Finish, Publish!

taidoka 0
So I think a lot about how to be a better scientist, and during my reading I found a sparkly little gem by one of the greatest experimentalists of all time, Michael Faraday. It's quoted in Analysis and Presentation of Experimental Results as above, but from Wikiquote we get the whole story:
"The secret is comprised in three words — Work, finish, publish." His well-known advice to the young William Crookes, who had asked him the secret of his success as a scientific investigator, as quoted in Michael Faraday (1874) by John Hall Gladstone, p. 123
Well said. The middle part often seems the hardest for many people, in my experience: it's all too easy to work on something without finishing it, or to rush to publish something before it's really ready. The hard part is pushing through all three in the right order with the appropriate level of effort.

-the Centaur

Pictured: Michael Faraday, photograph by Maull & Polyblank. Credit: Wellcome Collection. CC BY.

The Sole Test of Any Idea

taidoka 0
Inspirational physicist Richard Feynman once said "the sole test of any idea is experiment." I prefer the formulation "the sole test of any idea open to observation is experiment," because opening our ideas to observation - rather than relying on just belief, instrumentation, or arguments - is often the hardest challenge in making progress on otherwise seemingly unresolvable problems. -the Centaur

The Centaur at Clockwork Alchemy

centaur 0


This Memorial Day Weekend, I’ll be appearing at the Clockwork Alchemy steampunk convention! I’m on a whole passel of panels this year, including the following (all in the Monterey room near the Author’s Alley, as far as I know):

Friday, May 26
4PM: NaNoWriMo - Beat the Clock! [Panelist]

Saturday, May 27
12NOON: Working with Editors [Panelist]
1PM: The Science of Airships [Presenter]
5PM: Verisimilitude in Fiction [Panelist]

Sunday, May 28
10AM: Applied Plotonium [Panelist]
12NOON: Organizing an Anthology [Panelist]
1PM: Instill Caring in Readers [Panelist]
2PM: Overcoming Writer's Block [Presenter]

Monday, May 29
11AM: Past, Present, Future - Other! [Moderator]

Of course, if you don’t want to hear me yap, there are all sorts of other reasons to be there. Many great authors will be in attendance in the Author’s Alley.


There’s a great dealer’s room and a wonderful art show filled with steampunk maker art.


For yet another year, we’ll be co-hosted with Fanime Con, so there will be buses back and forth and fans of both anime and steampunk in attendance.


As usual, I will have all my latest releases, including Jeremiah Willstone and the Clockwork Time Machine, the steampunk novel I have, like, been promising you all, like, forever!


In addition to my fine books, there will also be new titles from Thinking Ink Press, including the steampunk anthologies TWELVE HOURS LATER, THIRTY DAYS LATER, and SOME TIME LATER!


I think I have about as much fun at Clockwork Alchemy as I do at Dragon Con, and that’s saying something. So I hope you come join us, fellow adventurers, in celebrating all things steampunk!


-the Centaur

Clockwork Alchemy Schedule

centaur 0


Ahoy, fellow adventurers, if you’re interested in tales from a traveler who’s voyaged far and wide across the sea of unending stories, yet somehow returned to the shores we know, you can come listen to me talk at Clockwork Alchemy this year - I’m on four panels!

4PM: Overcoming Writer's Block
Scheduled Presentation Time: Saturday 4pm - 4:50pm
Location: Author's Salon (Monterey Room)

10AM: Writing Victorian Sci-Fi
Scheduled Presentation Time: Sunday 10am - 10:50am
Location: Author's Salon (Monterey Room)

12 Noon: The Science of Airships
Scheduled Presentation Time: Sunday Noon - 12:50pm
Location: The Academy (San Martin Room)

2PM: Organizing an Anthology
Scheduled Presentation Time: Sunday 2pm-2:50pm
Location: Author's Salon (Monterey Room)

I’ve given the "Science of Airships" before, and have done panels similar to “Writing Victorian Sci-Fi” and “Organizing an Anthology”, but “Overcoming Writer’s Block” I’ve not presented before to a public audience, so it should be interesting!

Come check it out!

-the Centaur

At Clockwork Alchemy this Memorial Day

centaur 0


This Memorial Day weekend, I'll be at the Clockwork Alchemy conference in the Author's Salon. I'll have on hand the new steampunk anthology TWELVE HOURS LATER, plus of course the newly released third Dakota Frost, Skindancer book LIQUID FIRE, which, despite the presence of an airship, is firmly an urban fantasy novel.

If I'm not at my table, I will likely be appearing at:

  • The Science of Airships: Saturday, May 23 from 2pm to 3pm in the San Juan Workshop Room
  • Steampunk Comics: Saturday, May 23 from 6pm to 7pm in the Author's Salon
  • Writing Steampunk: Sunday, May 24 from 2pm to 3pm in the Carmel Fashion Room

In addition to TWELVE HOURS LATER and LIQUID FIRE … I may have something else at the table. Stay tuned.


-the Centaur

Sunday’s Events at Clockwork Alchemy

centaur 0


Today's talk on Real Women of the Victorian Era, led by the redoubtable T.E. MacArthur, went well. In a weird bit of synergy, an audiobook I was listening to, Victorian Britain in the Great Courses series, had a section on Florence Nightingale which was not just directly relevant … it played just as I was driving up to the hotel. Perfect.

Tomorrow, Sunday, May 25th, I will be appearing on the panels Avoiding Historical Mistakes at noon in the Monterey Room (it is rumored that Harry Turtledove will be on the panel as well) and Victorian Technology at 2pm in the San Carlos room (not 1pm as I said earlier), and giving a solo talk on The Science of Airships at 4pm, also in San Carlos.

The rest of the time, I will largely be at my table, which will look more or less as you see it above, except I may be wearing a different outfit. :-D

-the Centaur

The Science of Airships, Redux

centaur 0


Once again, I will be giving a talk on The Science of Airships at Clockwork Alchemy this year, this time at 11AM on Monday. I had to suffer doing all the airship research for THE CLOCKWORK TIME MACHINE, so you should too! Seriously, I hope the panel is fun and informative; it was well received at previous presentations. From the online description:

Steampunk isn't just brown, boots and buttons: our adventurers need glorious flying machines! This panel will unpack the science of lift, the innovations of Count Zeppelin, how airships went down in flames, and how we might still have cruise liners of the air if things had gone a bit differently. Anthony Francis is a science fiction author best known for his Dakota Frost urban fantasy series, beginning with the award winning FROST MOON. His forays into Steampunk include two stories and the forthcoming novel THE CLOCKWORK TIME MACHINE.

Yes, yes, I know THE CLOCKWORK TIME MACHINE is long in forthcoming, but at least it's closer now. I'll also be appearing on two panels, "Facts with Your Fiction" moderated by Sharon Cathcart at 5pm on Saturday and "Multi-cultural Influences in Steampunk" moderated by Madeline Holly at 5pm on Sunday. With that, BayCon, and Fanime, it looks to be a busy weekend.

-the Centaur