
Posts published in “Philosophy”

[twenty twenty-four day thirty-six]: accepting reality is not denying rationality

centaur 0

One of the most frustrating things about reading the philosophy of Ayn Rand is her constant evasion of reality. Rand's determinedly objective approach is a bracing blast of fresh air in philosophy, but, often, as soon as someone raises potential limits to a rational approach - or even when she imagines some strawman might raise a potential limit - she denies the limit and launches into unjustified ad hominem attacks.

It reminds me a lot of "conservative" opponents of general relativity - which, right there, should tell you something, as an actual political conservative should have no objection to a hundred-and-twenty-year-old, well-tested physical theory - who are upset because it introduces "relativism" into philosophy. Well, no, actually: Einstein considered calling relativity "invariant theory," because the deep guts of the theory are a quest to formulate physics in terms that are invariant between observers, like the space-time interval ds^2, which is the same no matter how the observers are moving relative to each other.
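To make that concrete, here's a minimal numerical sketch (in units where c = 1, with one spatial dimension): boost a displacement to a frame moving at 0.6c and watch ds^2 come out unchanged.

```python
import numpy as np

# Minimal sketch: the space-time interval ds^2 is invariant under a
# Lorentz boost (units chosen so c = 1; one spatial dimension).
def interval_squared(dt, dx):
    return -dt**2 + dx**2

def boost(dt, dx, v):
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

dt, dx = 3.0, 1.0                  # a displacement in one observer's frame
dt2, dx2 = boost(dt, dx, v=0.6)    # the same displacement seen at 0.6c
print(interval_squared(dt, dx))    # -8.0
print(interval_squared(dt2, dx2))  # -8.0: the same, however the observers move
```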

In Rand's case, she and Peikoff admit up front, in several places, that human reason is fallible and prone to error - but as soon as a specific issue is raised, they either deny that failure is possible or claim that critics are trying to destroy rationality. Among the things they claim as infallible products of reason are existence, identity, and consciousness; deterministic causality; sense perception; the formation of concepts; reason itself (when properly conducted); and even Objectivism itself.

In reality, all of these things are fallible, and that's OK.

Our perception of what exists, what things are, and even aspects of our consciousness can be fooled, and that's OK, because a rational agent can construct scientific procedures and instruments to untangle the difference between our phenomenal experience and the nature of reality. Deterministic causality breaks down in our stochastic world, but we can build more solid probabilistic and quantum methods that enable us to make highly reliable predictions even in the face of a noisy world. Our senses can fail, but there is a rich library of error-correcting methods, both in natural systems and in robotics, that help us recover information reliable enough to act upon with confidence.
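As a toy sketch of that idea - not any particular robot stack, just redundant sensors and a median - consider:

```python
import numpy as np

# Toy sketch of sensor error correction: three redundant noisy sensors,
# one of which glitches badly; a per-step median discards the outlier.
rng = np.random.default_rng(0)
true_value = 5.0
readings = true_value + rng.normal(0.0, 0.5, size=(1000, 3))
readings[::50, 2] = -100.0             # sensor 3 fails hard every 50th step

fused = np.median(readings, axis=1)    # robust to a single wild sensor
print(abs(fused.mean() - true_value))  # small, despite the glitching sensor
```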

As for the Objectivist theory of concepts, it isn't a terrible normative theory of how we might want concepts to work in an ideal world, but it is a terrible theory of how concept formation actually works in the real world, either in the human animal or in an engineering system built to recognize concepts. Rand's notion of "non-contradictory identification" would in reality fail to give any coherent output in a world of noisy input sensors, and systems like Rand's ideas were supplanted by techniques such as support vector machines long before we got neural networks.
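Here's a toy illustration of that point, on synthetic data of my own invention: a strict no-counterexamples identifier must reject every rule, while a soft-margin SVM shrugs off the noise.

```python
import numpy as np
from sklearn.svm import SVC

# Toy illustration with synthetic data: two noisy, overlapping classes.
rng = np.random.default_rng(1)
a = rng.normal(loc=-1.0, scale=1.0, size=(200, 2))
b = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))
X = np.vstack([a, b])
y = np.array([0] * 200 + [1] * 200)

# A strict "no counterexamples allowed" identifier must reject any rule
# that is ever contradicted - and with noisy sensors, every rule is.
contradictions = int(((X[:, 0] > 0).astype(int) != y).sum())
print(contradictions)  # > 0, so the strict identifier outputs nothing

# A soft-margin SVM tolerates violations (slack) and still yields a
# usable classifier from the very same contradictory data.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.score(X, y))  # imperfect but useful, roughly 0.85 here
```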

And according to Gödel's theorem and related results, any sufficiently powerful system of reasoning must be either incomplete or inconsistent - and evidence of human inconsistency abounds in the cognitive science literature. But errors in reasoning itself can be handled by Pollock's notion of "defeasible" reasoning or Minsky's notion of "commonsense" reasoning; and as for Objectivism itself being something that Rand got infallibly right ... well, we just showed how well that worked out.
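A cartoon of defeasibility, in the spirit of Pollock but nothing like his actual OSCAR system: default conclusions hold until a defeater arrives.

```python
# Cartoon of defeasible reasoning (a toy, not Pollock's OSCAR):
# defaults hold until a defeater arrives.
def conclusions(facts):
    derived = set(facts)
    if "penguin" in derived:
        derived.add("bird")        # strict rule: penguins are birds
    if "bird" in derived and "penguin" not in derived:
        derived.add("flies")       # default conclusion, defeated by "penguin"
    return derived

print(conclusions({"bird"}))             # {'bird', 'flies'}
print(conclusions({"bird", "penguin"}))  # defeated: no 'flies'
```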

Accepting the limits of rationality that we have discovered in reality is not an attack on rationality itself, for we have found ways to work around those limits to produce methods for reaching reliable conclusions. And that's what's so frustrating reading Rand and Peikoff - their attacks on strawmen weaken their arguments, rather than strengthening them, by both denying reality and denying themselves access to the tools we have developed over the centuries to help us cope with reality.

-the Centaur

[twenty twenty-four day thirty-four]: chromodivergent and chromotypical

centaur 0

I sure do love color, but I suck at recognizing it - at least in the same way that your average person does. I'm partially colorblind - and I have to be quick to specify "partial," because otherwise people immediately ask if I can't tell red from green (I can, just not as well as you) or can't see colors at all.

In fact, sometimes I prefer to say "my color perception is deficient" or, even more specifically, "I have a reduced ability to discriminate colors." The actual reality is a little more nuanced: while there are colors I can't distinguish well, my primary deficit is not being able to NOTICE certain color distinctions - certain things just look the same to me - but once the distinctions are pointed out, I can often reliably see them.

This is a whole nother topic on its own, but, the gist is, I have three color detectors in my eyes, just like a person with typical color vision. Just, one of those detectors - I go back and forth between guessing it's the red one or the green one - is a little bit off compared to a typical person's. As one colleague at Google put it, "you have a color space just like anyone else, just your axes are tilted compared to the norm."
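If you want to play with the "tilted axes" idea, here's a cartoon in Python - the Gaussian "cone" sensitivities and peak wavelengths below are illustrative stand-ins I made up, not real cone fundamentals.

```python
import numpy as np

# Cartoon of "tilted color axes": three cone responses to a spectrum,
# with one cone's peak shifted. The Gaussian sensitivities and peak
# wavelengths are illustrative stand-ins, not real cone fundamentals.
wavelengths = np.linspace(400, 700, 301)

def cone(peak, width=50):
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width**2))

typical   = np.stack([cone(560), cone(530), cone(420)])  # L, M, S peaks
divergent = np.stack([cone(545), cone(530), cone(420)])  # L shifted toward M

light = cone(570, 30)     # some yellowish spectral power distribution
print(typical @ light)    # the "chromotypical" coordinates of this light
print(divergent @ light)  # same light, slightly rotated coordinates
```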

The way this plays out is that some color concepts are hard for me to name - I don't want to apply a label to them, perhaps because I'm not consistently seeing people use the same name for those colors. There's one particular nameless color, a particularly blah blend of green and red, that makes me think if there were more people like me, we'd call it "gred" or "reen" the way typical people have a name for "purple".

Another example: there's a particular shade of grey - right around 50% grey - that I see as a kind of army green, again, because one of my detectors is resonating more with the green in the grey. If the world were filled with people like me, we'd have to develop a different set of reference colors.

SO, this made me think that, in parallel to the concepts of "neurotypical and neurodivergent", we could use concepts like "chromotypical and chromodivergent". Apparently I'm not the only one who thinks this: here's an artist who argues that "colorblind" can be discouraging to artists, and other people think we should drop the typical in neurotypical as it too can be privileging to certain neurotypes.

I'm not so certain I'd go the second route. Speaking as someone who's been formally diagnosed "chromodivergent" (partially red-green colorblind) and is probably carrying around undiagnosed "neurodivergence" (social anxiety disorder with possibly a touch of "adult autism"), I think there's some value to recognizing some degree of "typicality" and "norms" to help us understand conditions.

If you had a society populated with people with color axes like mine and another society populated with "chromotypical" receptors, both societies would get on fine, both with each other and with the world; you'd just have to be careful to use the right set of color swatches when decorating a room. But a person with a larger chromodivergence - say, someone who was wholly red-green colorblind - might be less adaptive than a chromotypical person - say, because they couldn't tell when fruit was ripe.

Nevertheless, even if some chromodivergences or neurodivergences might be maladaptive in a non-civilized environment, prioritizing the "typical" can still lead to discrimination and ableism. For those who don't understand "ableism", it's a discriminatory behavior where "typical" people de-personalize people with "disabilities" and decide to make exclusionary decisions for them without consulting them.

There are great artists who are colorblind - for example, Howard Chaykin. There's no need to discourage people who are colorblind from becoming artists, or to prevent them from trying: they can figure out how to handle that on their own, hiring a colorist or specializing in black-and-white art if they need to.

All you need to do is to decide whether you like their art.

-the Centaur

Pictured: some colorful stuff from my evening research / writing / art run.

[twenty twenty-four day thirty-three]: roll the bones

centaur 0

As both Ayn Rand and Noam Chomsky have said in slightly different ways, concepts and language are primarily tools of thought, not communication. But cognitive science has demonstrated that our access to the contents of our thought is actually relatively poor - we often have an image of what is in our head which is markedly different from the reality, as in the case where we're convinced we remember a friend's phone number but actually have it wrong, or have forgotten it completely.

One of the great things about writing is that it forces you to turn abstract ideas about your ideas into concrete realizations - that is, you may think you know what you think, but even if you think about it a lot, you don't really know the difference between your internal mental judgments about your thoughts and their actual reality. The perfect example is a mathematical proof: you may think you've proved a theorem, but until you write it down and check your work, there's no guarantee that you actually HAVE a proof.

So my recent article on problems with Ayn Rand's philosophy is a good example. I stand by it completely, but I think that many of my points could be refined considerably. I view Ayn Rand's work in philosophy the way that I do Euclid's in mathematics or Newton's in physics: it's not an accurate model of the world, but it is a stage in our understanding of the world which we need to go through, and which remains profitable even once we go on to more advanced models like non-Euclidean geometry or general relativity. Entire books are written on Newtonian approximations to relativity, and one useful mathematical tool is the "Lie algebra," which enables us to examine even esoteric mathematical objects by looking locally at the Euclidean tangent space generated around a particular point.
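For the curious, here's a tiny sketch of that tangent-space idea for the simplest rotation group - my choice of SO(2) and of scipy's matrix exponential are purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Tiny sketch: the rotation group SO(2) is curved, but its Lie algebra -
# the tangent space at the identity - is a flat line of skew-symmetric
# matrices. Exponentiating a tangent vector recovers an actual rotation.
theta = 0.3
generator = np.array([[0.0, -1.0],
                      [1.0,  0.0]])  # basis element of so(2)
R = expm(theta * generator)          # a genuine rotation matrix
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, expected))      # True
```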

So it's important not to throw the baby out with the bathwater with regards to Ayn Rand, and to be carefully specific about where her ideas work and where they fail. For example, there are many, many problems with her approach to the law of identity - the conceptual idea that things are what they are, or A is A - but the basic idea is sound. One would say it almost approaches the tautological, except for the fact that many people seem to ignore it. You cannot fake reality in any way whatever - but neither can you make physical extrapolations about reality through philosophical analysis of a conceptual entity like identity.

Narrowing in on a super specific example, Rand tries to derive the law of causality from the law of identity - and it works well, right up until the point where she tries to draw conclusions from it. Her argument goes like this: every existent has a unique nature due to the law of identity: A is A, or things are what they are, or a given existent has a specific nature. What happens to an existent over time - the action of that entity - is THE action of THAT entity, and is therefore determined by the nature of that entity. So far, so good.

But then Rand and Peikoff go off the rails: "In any given set of circumstances, therefore, there is only one action possible to an entity, the action expressive of its identity." It is difficult to grasp the level of evasion which might produce such a confusion of ideas: to make such a statement, one must throw out not just the tools of physics, mathematics and philosophy, but also personal experience with objects as simple as dice.

First, the evasion of personal experience, and how it plays out through mathematics and physics. Our world is filled with entities which may produce one action out of many - not just entities like dice, but even one of Rand and Peikoff's own examples: a rattle makes a different sound every time you shake it. We have developed an entire mathematical formalism to help understand the behavior of such entities: we call them stochastic and treat them with the tools of probability. As our understanding has grown, physicists have found that this stochastic nature is fundamental to reality: the rules of quantum mechanics essentially say that EVERY action of an entity is drawn from a probability distribution, though for most macroscopic actions this probabilistic nature gets washed out.
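A quick simulation makes the point: individual outcomes vary, but the distribution they're drawn from is perfectly lawlike.

```python
import numpy as np

# One entity, one lawlike nature, many possible actions: roll a die
# 100,000 times and watch the distribution - not any single outcome -
# converge to the entity's nature.
rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=100_000)
frequencies = np.bincount(rolls, minlength=7)[1:] / len(rolls)
print(frequencies)  # all six values near 1/6 ~ 0.1667
```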

Next, the evasion of validated philosophical methods. Now, one might imagine Rand and Peikoff saying, "well, the roll of the die is only apparently stochastic: in actuality, the die, when you throw it, is in a given state, which determines the single action that it will take." But this is a projective hypothesis about reality: it takes a set of concepts, determines their implications, and then states how we expect those implications to play out in reality. Reality, however, is not required to oblige us. This form of philosophical thinking goes back to the Greeks: the notion that if you begin with true premises and proceed through true inference rules, you will end up with a true conclusion. But this kind of philosophical thinking is invalid - it does not work in reality - because any one of its elements - your concepts, your inference rules, or your mapping between conclusions and states of the world - may be specious: appearing to be true without actually reflecting the nuance of reality. To fix this problem, the major achievement of the scientific method is to replace "if you reach a contradiction, check your premises" with "if you reach a conclusion, check your work" - or, in the words of Richard Feynman, "The sole test of any idea is experiment."

Let's get really concrete about this. Rand and Peikoff argue "If, under the same circumstances, several actions were possible - e.g., a balloon could rise or fall (or start to emit music like a radio, or turn into a pumpkin), everything else remaining the same - such incompatible outcomes would have to derive from incompatible (contradictory) aspects of the entity's nature." This statement is wrong on at least two levels, physical and philosophical - and much of the load-bearing work is in the suspicious final dash.

First, physical: we actually do live in a world where several actions are possible for an entity - this is one of the basic premises of quantum mechanics, which is one of the most well-tested scientific theories in history. For each entity in a given state, a set of actions is possible, governed by a probability amplitude over the resulting states: when the entity interacts with another entity in a destructive way, the probability amplitude collapses into a probability distribution over the actions, one of which is "observed." In Rand's example, the balloon's probability amplitude for rising is high, falling is small, emitting radio sounds is smaller still, and turning into a pumpkin is near zero (due to the vast violation of conservation of mass).
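As a cartoon of how the balloon example plays out under the Born rule - the outcome labels and amplitude values below are invented for illustration, and real amplitudes are complex:

```python
import numpy as np

# Cartoon of the Born rule for the balloon (outcome labels and amplitude
# values invented for illustration).
outcomes = ["rise", "fall", "emit radio sounds", "turn into a pumpkin"]
amplitudes = np.array([0.99, 0.14, 1e-6, 1e-30])
probs = np.abs(amplitudes) ** 2   # probability = |amplitude|^2
probs /= probs.sum()              # normalize

rng = np.random.default_rng(0)
print(rng.choice(outcomes, p=probs))  # almost always "rise"
```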

If one accepts this basic physical fact about our world - that entities that are not observed exist in a superposition of states governed by probability amplitudes, and that observations involve probabilistically selecting a next state from the resulting distribution - one can create amazing technological instruments and extraordinary scientific predictions - lasers and integrated circuits and quantum tunneling and prediction of physical variables with a precision of twelve orders of magnitude - a little bit like measuring the distance between New York and Los Angeles with an error less than a thousandth of an inch.
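To sanity-check that analogy (using an assumed New York-to-Los Angeles distance of roughly 3,944 km, purely for illustration):

```python
# Sanity check on the analogy, assuming a NY-to-LA distance of ~3,944 km.
distance_m = 3.944e6                # meters (assumed figure)
error_m = distance_m * 1e-12        # one part in 10^12
thousandth_inch_m = 2.54e-5         # 0.001 inch in meters
print(error_m / thousandth_inch_m)  # ~0.16: well under a thousandth of an inch
```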

But Rand's statement is also philosophically wrong, and it gets clearer if we take out that distracting example: "If, under the same circumstances, several actions were possible, such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." What's wrong with this? There's no warrant to this argument. A warrant is the thing that connects the links in a reasoning chain - an inference rule in a formal system, or a more detailed explanation of the reasoning step in question.

But there is no warrant possible in this case, only a false lurking premise. The erroneous statement is that "such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." Why? Why can't an entity's nature be to emit one of a set of possible actions, as in a tossed coin or a die? Answer: Blank out. There is no good answer to this question, because there are ready counterexamples from human experience - counterexamples we have processed through mathematics and, ultimately, through the tools of science, which determined that, yes, it is the nature of every entity to produce one of a set of possible outcomes, drawn from a probability distribution which is itself completely lawlike and based entirely on the entity's nature.

You cannot fake reality in any way whatever: it IS the nature of entities to produce one of a set of actions. This is not a statement that they are "contradictory" in any way: this is how they behave. This is not a statement that they are "uncaused" in any way: the probability amplitude must be non-zero in a region in order for an action to be observed there, and the amplitude is a real physical entity with energy content, not merely a mathematical convenience, that leads to the observation. And it is very likely not sweeping some hidden mechanism under the rug: while the jury is still out on whether quantum mechanics is a final view of reality, we do know, due to Bell's theorem, that there are no local "hidden variables" behind the curtain (a theorem that had been experimentally validated as of the time of Peikoff's book).
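If you want to see the teeth of Bell's theorem, here's the standard CHSH calculation in miniature: quantum correlations for the singlet state reach 2*sqrt(2), above the bound of 2 that any local hidden-variable theory must obey.

```python
import numpy as np

# The standard CHSH calculation in miniature: for the quantum singlet
# state, the correlation of spin measurements at angles a and b is
# E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83, above the local hidden-variable bound of 2
```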

So reality is stochastic. What's wrong with that? Imagine a corrected version of Ayn Rand's earlier statement: "In any given set of circumstances, therefore, there is only one type of behavior possible to an entity, the behavior expressive of its identity. This behavior may result in one of several outcomes, as in the rolling of a die, but the probability distribution over that set of outcomes is the distribution that is caused and necessitated by the entity's nature." Why didn't Peikoff and Rand write something like that?

We have a hint in the next few paragraphs: "Cause and effect, therefore, is a universal law of reality. Every action has a cause (the cause is the nature of the entity that acts); and the same cause leads to the same effect (the same entity, under the same circumstances, will perform the same action). The above is not to be taken as a proof of the law of cause and effect. I have merely made explicit what is known implicitly in the perceptual grasp of reality." That sounds great ... but let's run the chain backwards, shall we?

"We know implicitly in the perceptual grasp of reality a law which we might explicitly call cause and effect. We cannot prove this law, but we can state that the same entity in the same circumstances will perform the same action - that is, the same cause leads to the same effect. Causes are the nature of the entities that act, and every action has a cause. Therefore, cause and effect is a universal law of reality."

I hope you can see what's wrong with this, but if you don't, I'm agonna tell you, because I don't believe in the Socratic method as a teaching tool. First and foremost, our perceptual grasp of reality is very shaky: massive amounts of research in cognitive science reveal a nearly endless list of biases and errors, and the history of physics has been one of replacing erroneous perceptions with better laws of reality. One CANNOT go directly from the implicit knowledge of perceptual reality to any actual laws, much less universal ones: we need experiment and the tools of physics and cognitive science to do that.

But even from a Randian perspective this is wrong, because it is an argument from the primacy of consciousness. One of the fundamental principles of Objectivist philosophy is the primacy of existence over consciousness: the notion that thinking a thing does not make it so. Now, this is worth a takedown of its own - it is attempting to draw an empirically verifiable physical conclusion from a conceptual philosophical argument, which is invalid - but, more or less, I think Rand is basically right that existence is primary over consciousness. Yet above, Rand and Peikoff purport to derive a universal law from perceptual intuition. They may try to call it "implicit knowledge" but perception literally doesn't work that way.

If they admit physics into their understanding of the law of causality, they have to admit that you cannot go directly from a conceptual analysis of the axioms to universally valid laws, but must subject all their so-called philosophical arguments to empirical validation. And that is precisely what you have to do if you are working in ontology or epistemology: you MUST learn the relevant physics and cognitive science before you attempt to philosophize, or you end up pretending to invent universal laws that are directly contradicted by human experience.

Put another way, whether you're building a bridge or a philosophy, you can't fake reality in any way whatsoever, or, sooner or later, the whole thing will come falling down.

-the Centaur

[twenty twenty-four day thirty-two]: if you do what you’ve always done

centaur 0

"If you do what you've always done, you'll get what you've always gotten," or so the saying goes.

That isn't always true - ask my wife what it's like for a paint company to silently change the formula of a product right when she's in the middle of a complicated faux finish that depended on the old formula's chemical properties - but there's a lot of wisdom to it.

It's also true that deciding takes work. When a buddy of mine and I finished 24 Hour Comics Day one year and were heading to breakfast, he said, "I don't want to go anyplace new or try anything new, because I have no brains left. I want to go to a Denny's and order something that I know will be good, so I don't have to think about it."

But as we age, we increasingly rely on past decisions - so-called crystallized intelligence, an increasingly vast but increasingly rigid collection of wisdom. If we don't want to get frozen, we need to continue exercising the muscle of trying things that are new.

At one of my favorite restaurants, I round-robin through the same set of menu items. But this time, I idly flipped the menu over to the back page I never visit and saw a burrito plate whose fillings were simmered in beer. I mean, what! And the server claimed it was one of the best things on the menu, a fact I can now confirm.

It can be scary to step outside our circle. But if you do what you've always done, you'll miss out on opportunities to find your new favorite.

-the Centaur

[twenty twenty-four day thirty-one]: to be or not to be in degree

centaur 0

I've recently been having fun with a new set of "bone conduction" headphones, walking around the nearby forest while listening to books on tape [er, CD, er, MP3, er, streaming via Audible]. Today's selection was from Leonard Peikoff's Objectivism: The Philosophy of Ayn Rand. Listening to the precision with which they define concepts is wonderful - it's no secret that I think Ayn Rand is one of the most important philosophers that ever lived - but at the same time they have some really disturbing blind spots.

And I don't mean in the political sense, in which many people find strawman versions of Rand's conclusions personally repellent and therefore reject her whole philosophy without understanding the good parts. No, I mean that, unfortunately, Ayn Rand and Leonard Peikoff frequently make specious arguments - arguments that on the surface appear logical, but which actually lack warrants for their conclusions. Many of these seem tied to an emotional desire to appear objective by demanding an indefensibly precise base for their arguments, rather than standing on the more solid ground of accurate, if fuzzier, concepts - concepts which actually exist within a broader set of structures that are more objective than their naive pseudo-objective counterparts.

Take the notion that "existence exists." Peikoff explains the foundation of Ayn Rand's philosophy to be the Randian axioms: existence, identity, and consciousness - that is, there is a world, things are what they are, and we're aware of them. I think Rand's take on these axioms is so important that I use her words to label two of them in my transaxiomatic catalog of axioms: EE, "existence exists"; AA, "A is A"; and CC, for which Rand doesn't have a catchy phrase, but let's say "creatures are conscious." Whether these are "true," in their view, is less important than that they are validated as soon as you reach the level of having a debate: if someone disagrees with you about the validity of the axioms, there's no meaningful doubt that you and they exist, that you're both aware of the axioms, and that they have a nature which is being disputed.

Except ... hang on a bit. To make that very argument, Peikoff presents a condensed dialog between the defender of the axioms, A, and a denier of the axioms, B, quickly coming to the conclusion that someone who exists, is aware of your opinions, and is disagreeing with their nature specifically by denying that things exist, that people are aware of anything, and that things have a specific nature is ... probably someone you shouldn't spend your time arguing with. At the very best, they're trapped in a logical error; at the worst, they're either literally delusional or arguing in bad faith. That all sounds good. But A and B don't exist.

More properly, the arguing parties A and B only exist as hypothetical characters in Peikoff's made-up dialog. And here's where the entire edifice of language-based philosophy starts to break down: what is existence, really? Peikoff argues you cannot define existence in terms of other things, but can only do so ostensively, by pointing to examples - but this is not how language works, either in day-to-day life or in philosophy, which is why science has abandoned language in favor of mathematical modeling. If you're intellectually honest, you should agree that Ayn Rand and Leonard Peikoff exist in a way that A and B in Peikoff's argument do not.

Think about me in relationship to Sherlock Holmes. I exist in a way that Sherlock Holmes does not. I also exist in a way that Arthur Conan Doyle does not. Sherlock Holmes himself exists in a way that an alternate version of Holmes from a hypothetical unproduced TV show does not, and I, the real concrete typing these words, exist in a way that the generic idea of me does not. One could imagine an entire hierarchy of degrees of existence: from the absolute nothingness of the absence of a thing or concept, to contradictions in terms that can be named but cannot exist, to hypothetical versions of Sherlock Holmes that were never written, to Sherlock Holmes, who exists only as a character, to Arthur Conan Doyle, who once existed, to me, who existed as of this writing, to the concrete me writing this now, to existence itself, which exists whether I do or not.

Existence is what Marvin Minsky calls a "suitcase word": it's a stand-in for a wide variety of distinct but usefully similar concepts, from conceptual entities to physical existents to co-occurring physical objects in the same interacting region of space-time. And it's no good attempting to fall back on the idea that Ayn Rand was actually trying to define "existence" as the sum total of "existents," because pinning down "existence" or "existent" outside of an ostensive "I can point at it" definition is precisely what Rand and Peikoff don't want to do - first, because they really do mean it to be "everything," in almost precisely the same way that Carl Sagan uses the word "Cosmos" to refer to everything that ever is, was, or will be, and second, because if it loses its function as a suitcase word, it is no longer useful in their arguments.

In reality, if you say "existence exists", and someone attempts to contradict you, it does you no good to say "well, you're contradicting yourself, because you had to exist to even say that". You do need to actually put your money where your mouth is and say what concrete propositions you intend to draw from the terms "existence" and "exists" and the floating abstraction "existence exists" - and so do they. If you can't do this, you're not actually arguing with them; you're talking past them; if they can't do this, they're at best not arguing coherently, and at worst not arguing in good faith. If you both DO this, however, you may come to profitable conclusions, such as, "yes, we agree that SOMETHING exists, at least to the level where we had this debate; but we can also agree that the word existence should not extend to this unwanted implication."

This approach - reinforcing your axioms with sets of elaborations, models, and even propositions that are examples of the axioms, along with similar sets that should be considered counterexamples - is what I call the "transaxiomatic" approach. Rather than simply assuming the axioms are unassailable and attempting to pseudo-define their terms by literally waving one's hand around and saying "this is what I mean by existence" - and simply hoping people will "get it" - we need to reinforce the ostensive concretes we use to define the axioms with more carefully refined abstractions that tell us what we mean when we use the terms in the axioms, and what propositions we hope people will derive from them.
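If it helps to see the shape of that idea, here's a toy data structure for it - the field names and sample entries are mine, purely illustrative.

```python
from dataclasses import dataclass, field

# Toy data structure for the transaxiomatic approach: an axiom travels
# with the examples and counterexamples that pin down its terms.
@dataclass
class Axiom:
    slogan: str
    examples: list = field(default_factory=list)
    counterexamples: list = field(default_factory=list)

EE = Axiom(
    slogan="existence exists",
    examples=["the author typing these words", "the debate we are having"],
    counterexamples=["Sherlock Holmes, who exists only as a character"],
)
print(EE.slogan, "-", len(EE.examples), "examples,",
      len(EE.counterexamples), "counterexamples")
```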

This is part of an overall move from the philosophical way of tackling problems towards a more scientific one. And it's why I think Ayn Rand was, in a sense, too early, and too late. She's too early in the sense that many of the things she studied philosophically - ontology and epistemology - are no longer properly the domain of philosophy, but have been supplanted - firmly supplanted - by findings from science: ontology is largely subsumed into physics and cosmology, and epistemology is largely subsumed into cognitive science and artificial intelligence. That's not to say that philosophy is done with those areas, but rather that philosophy has definitively lost its primary position within them: one must first learn the science of what is known in those areas before trying to philosophize about it. One cannot meaningfully say anything at all about epistemology without understanding computational learning theory.

And she's too late in that she was trying to DO philosophy at a point in time when her subject matter was already starting to become science. Introduction to Objectivist Epistemology is an interesting book, but it was written a decade after "The Magical Number Seven, Plus or Minus Two" and two decades before the "Probably Approximately Correct" theory of learning, and you will learn much more about epistemology by looking up the "No Free Lunch" learning theorems and pulling on that thread than from anything Ayn Rand ever wrote (or try reading "Probability Theory: The Logic of Science" for a good one-volume starting point). Which is not to say that Ayn Rand's philosophizing is not valuable - it is almost transcendently valuable - but if she were writing today, many of the more conceptually problematic structures of her philosophy could simply be dropped in favor of references to the rich conceptual resources of cognitive science and probability theory, and then she could have gotten on with convincing people that you can indeed derive "ought" from "is".
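If you want to see a No Free Lunch result with your own eyes, here's a tiny brute-force check: over all boolean target functions on three bits, any fixed learner averages exactly 50% on the inputs it didn't see. (The "majority" learner below is an arbitrary stand-in; swap in any rule you like and the average stays 0.5.)

```python
import itertools
import numpy as np

# Brute-force No Free Lunch check: over ALL 256 boolean target functions
# on 3 bits, a fixed learner's average accuracy on unseen inputs is 1/2.
inputs = list(itertools.product([0, 1], repeat=3))
train, test = inputs[:4], inputs[4:]

def learner(train_labels, x):
    # An arbitrary fixed rule: majority vote over the training labels.
    return int(sum(train_labels) * 2 >= len(train_labels))

accuracies = []
for labels in itertools.product([0, 1], repeat=8):  # every target function
    f = dict(zip(inputs, labels))
    train_labels = [f[x] for x in train]
    correct = [learner(train_labels, x) == f[x] for x in test]
    accuracies.append(np.mean(correct))
print(np.mean(accuracies))  # exactly 0.5
```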

Or, maybe, just maybe, she might have done science in addition to philosophy, and perhaps even had something scientific to contribute to the great thread rolling forward from Bayes and Boole.

Existence does exist. But before you agree, ask, "What do you really mean by that?"

-the Centaur

Pictured: Loki, existing in a fuzzy state.

RIP Jeff Bezos (and/or Richard Branson)

centaur 0

You know, Jeff Bezos isn’t likely to die when he flies July 20th. And Richard Branson isn’t likely to die when he takes off at 9am July 11th (tomorrow morning, as I write this). But the irresponsible race these fools have placed themselves in will eventually get somebody killed, as surely as Elon Musk’s attempt to build self-driving cars with cameras rather than lidar was doomed to (a) kill someone and (b) fail. It’s just, this time, I want to be caught on record saying I think this is hugely dangerous, rather than grumbling about it to my machine learning brethren.

Whether or not a spacecraft is ready to launch is not a matter of will; it’s a matter of natural fact. This is actually the same as many other business ventures: whether we’re deciding to create a multibillion-dollar battery factory or simply open a Starbucks, our determination to make it succeed has far less to do with its success than the realities of the market - and its physical situation. Either the market is there to support it and the machinery will work, or it won’t.

But with normal business ventures, we’ve got a lot of intuition, and a lot of cushion. Even if you aren’t Elon Musk, you kind of instinctively know that you can’t build a battery factory before your engineering team has decided what kind of battery you need to build, and even if your factory goes bust, you can re-sell the land or the building. Even if you aren't Howard Schultz, you instinctively know it's smarter to build a Starbucks on a busy corner rather than the middle of nowhere, and even if your Starbucks goes under, it won't explode and take you out with it.

But if your rocket explodes, you can't re-sell the broken parts, and it might very well take you out with it. Our intuitions do not serve us well when building rockets or airships, because they're not simple things operating in human-scaled regions of physics, and we don't have a lot of cushion with rockets or self-driving cars, because they're machinery that can kill you, even if you've convinced yourself otherwise.

The reasons behind the likelihood of failure are manifold here, and worth digging into in greater depth; but briefly, they include:

  • The Paradox of the Director's Foot, where a leader's authority over safety personnel - and their personal willingness to take on risk - ends up short-circuiting safety protocols and causing accidents. This actually happened to me personally when two directors in a row had a robot run over their foot at a demonstration, and my eagle-eyed manager recognized that both of them had stepped into the safety enclosure to question the demonstrating engineer, forcing the safety engineer to take over audience questions - and all three took their eyes off the robot. Shoe leather degradation then ensued, for both directors. (And for me too, as I recall).
  • The Inexpensive Magnesium Coffin, where a leader's aesthetic desire to have a feature - like Steve Jobs' desire for a magnesium case on the NeXT machines - led them to ignore feedback from engineers that the case would be much more expensive. Steve overrode his engineers ... and made the NeXT more expensive, just like they said it would be, because wanting the case didn't make it cheaper. That extra cost led to the product's demise - that's why I call it a coffin. Elon Musk's insistence on using cameras rather than lidar on his self-driving cars is another Magnesium Coffin - an instance of ego and aesthetics overcoming engineering and common sense, which has already led to real deaths. I work in this precise area - teaching robots to navigate with lidar and vision - and vision-only navigation is just not going to work in the near term. (Deploy lidar and vision, and you can drop lidar within the decade with the ground-truth data you gather; try going vision alone, and you're adding another decade.)
  • Egotistical Idiot's Relay Race (AKA Lord Thomson's Suicide by Airship). Finally, the biggest reason for failure is the egotistical idiot's relay race. I wanted to come up with some nice, catchy parable name to describe why the Challenger astronauts died, or why the USS Macon crashed, but the best example is a slightly older one, the R101 disaster, which is notable because the man who started the R101 airship program - Lord Thomson - also rushed the program so he could make a PR trip to India, with the consequence that the airship was certified for flight without completing its endurance and speed trials. As a result, on that trip to India - its first long distance flight - the R101 crashed, killing 48 of the 54 passengers - Lord Thomson included. Just to be crystal clear here, it's Richard Branson who moved up his schedule to beat Jeff Bezos' announced flight, so it's Sir Richard Branson who is most likely up for a Lord Thomson's Suicide Award.

I don't know if Richard Branson is going to die on his planned spaceflight tomorrow, and I don't know that Jeff Bezos is going to die on his planned flight on the 20th. I do know that both are in an Egotistical Idiot's Relay Race for even trying, and the fact that they're willing to go up themselves, rather than sending test pilots, safety engineers or paying customers, makes the problem worse, as they're vulnerable to the Paradox of the Director's Foot; and with all due respect to my entire dot-com tech-bro industry, I'd be willing to bet the way they're trying to go to space is an oversized Inexpensive Magnesium Coffin.

-the Centaur

P.S. On the other hand, when SpaceX opens for consumer flights, I'll happily step into one, as Musk and his team seem to be doing everything more or less right there, as opposed to Branson and Bezos.

P.P.S. Pictured: Allegedly, Jeff Bezos, quick Sharpie sketch with a little Photoshop post-processing.

It’s been a long time since I’ve thrown a book …

taidoka 0
Yeah, so that happened on my attempt to get some rest on my Sabbath day. I'm not going to cite the book - I'm going to do the author the courtesy of re-reading the relevant passages to make sure I'm not misconstruing them, but I'm not going to wait to blog my reaction - but what caused me to throw this book, an analysis of the flaws of the scientific method, was this bit: imagine an experiment with two possible outcomes, one favoring the new theory (cough EINSTEIN) and one favoring the old (cough NEWTON). Three instruments are set up. Two report numbers consistent with the new theory; the third - missing parts, possibly configured improperly, and producing noisy data - matches the old.

Wow! News flash: any responsible working scientist would say these results favored the new theory. In fact, if they were really experienced, they might have thrown out the third instrument entirely - I've learned, after chasing red herrings from bad readings, that it's better not to look too closely at bad data. What did the author say, however? Words to the effect of: "The scientists ignored the results from the third instrument which disproved their theory and supported the original, and instead, pushing their agenda, wrote a paper claiming that the results of the experiment supported their idea."

Pushing an agenda? Wait, let me get this straight, Chester Chucklewhaite: we should throw out two results from well-functioning instruments that support theory A in favor of one result from an obviously messed-up instrument that supports theory B - oh, hell, you're a relativity doubter, aren't you?

Chuck-toss. I'll go back to this later, after I've read a few more sections of E. T. Jaynes's Probability Theory: The Logic of Science as an antidote.

-the Centaur

P.S. I am not saying relativity is right or wrong, friend. I'm saying the responsible interpretation of those experimental results as described would be precisely the interpretation those scientists put forward - though, in all fairness to the author of this book, the scientist involved appears to have been a super jerk.

Day 051

centaur 0
Mount Tabor, sketched to commemorate the Transfiguration of Jesus: that moment when Jesus is transformed on a mountaintop as he communes with Moses and Elijah, and Peter somehow loses a screw and decides it's a great time to start building houses. As Reverend Karen of St. Stephen's in-the-Field and St. John the Divine memorably said in today's sermon, this was the moment the disciples went from knowing Jesus only as a human teacher they admired to seeing him as touched with divinity.

(And speaking as a religious person from a scientific perspective, this is a great example of why there will always be a gap between science and religion: even if the event actually happened exactly as described, we're unlikely ever to prove so scientifically, since it is a one-time event that cannot be probed with replicable experiments; the events of the day, even if true, really do have to be taken purely on faith. This is, of course, assuming that tomorrow someone doesn't invent a device for reviewing remote time.)

Roughed on Strathmore, then rendered on tracing paper, based on a shot taken in 2011 by Eli Zehavi of Kfar Thabor (CC BY 2.5, via Wikimedia Commons). I mean, look at that. That mountain is just begging for God to do something amazing there. And if God doesn't want it, the Close Encounters mothership and H.P. Lovecraft are top of the waitlist.

It really is proving useful to ink my own rough sketches by hand, then to trace my own art. It is interesting to me, though, how I vertically exaggerated the mountain when I drew it, which probably explains why a few things kept not lining up the way that I wanted them to. Still ... drawing every day.

-the Centaur

P.S. And yes, I accidentally drew the Ascension rather than the Transfiguration, which I guess is fine, because the Mount of Olives looks harder to draw. Check out that 2,000-year-old tree, though.

What is “Understanding”?

taidoka 1
When I was growing up - or at least when I was a young graduate student in a Schankian research lab - we were all focused on understanding: what did it mean, scientifically speaking, for a person to understand something, and could that be recreated on a computer? We all sort of knew it was what we'd nowadays call an ill-posed problem, but we had a good operational definition, or at least an operational counterexample: if a computer read a story and could not answer the questions that a typical human being could answer about that story, it didn't understand it at all.

But there are at least two ways to define a word. What I'll call a practical definition is what a semanticist might call the denotation of a word: a narrow definition, one which you might find in a dictionary, which clearly specifies the meaning of the concept, like a bachelor being an unmarried man. What I'll call a philosophical definition - the connotations of a word - is the vast web of meanings around the core concept: the source of the fine sense of unrightness that one gets from describing Pope Francis as a bachelor, the nuances of meaning embedded in words that Socrates spent his time pulling out of people, before they went and killed him for being annoying. It's those connotations of "understanding" that made all us Schankians very leery of saying our computer programs fully "understood" anything, even as we were pursuing computer understanding as our primary research goal.

I care a lot about understanding - deep understanding - because, frankly, I cannot effectively do my job of teaching robots to learn if I do not deeply understand robots, learning, computers, the machinery surrounding them, and the problem I want to solve; when I do not understand all of these things, I stumble in the dark, I make mistakes, and I end up sad. And it's in pursuing a deeper understanding of deep learning that I got a deeper insight into deep understanding.

I was "deep reading" the Deep Learning book (a practice in which I read, or re-read, a book I've read, working out all the equations in advance before reading the derivations), in particular section 5.8.1 on Principal Components Analysis, and the authors made the same comment I'd just seen in the Hands-On Machine Learning book: "the mean of the samples must be zero prior to applying PCA." Wait, what? Why? I mean, thank you for telling me, I'll be sure to do that, but, like ... why?

I didn't follow up on that question right away, because the authors also tossed off an offhand comment to the effect that "(1/(m-1)) XᵀX is the unbiased sample covariance matrix associated with a sample x," and I'm like, what the hell, where did that come from? I had recently read the section on variance and covariance but had no idea why covariance would be associated with the transpose of the design matrix X multiplied by X itself. (In case you're new to machine learning: if x stands for an example input to a problem, say a list of the pixels of an image represented as a column of numbers, then the design matrix X is all the examples you have, but with each example listed as a row. Perfectly not confusing? Great!)

So, since I didn't understand why Var[x] = (1/(m-1)) XᵀX, I set out to prove it myself. (Carpenters say measure twice, cut once, but they'd better have a heck of a lot of measuring and cutting under their belts - more so, they'd better know when to cut and measure before they start working on your back porch, or you and they will have a bad time. Same with trying to teach robots to learn: it's more than just practice; if you don't know why something works, it will come back to bite you, sooner or later, so dig in until you get it.) And I quickly found that the "covariance matrix of a variable x" was a thing, and quickly started to intuit that the matrix multiplication would produce it. This is what I'd call surface-level understanding: going forward from the definitions to obvious conclusions. I knew the definition of matrix multiplication, and I'd just re-read the definition of covariance matrices, so I could see that these would fit together.

But as I dug into the problem, it struck me: true understanding is more than just going forward from what you know. "The brain does much more than just recollect; it inter-compares, it synthesizes, it analyzes, it generates abstractions" - thank you, Carl Sagan. But this kind of understanding is a vast, ill-posed problem - meaning, a problem without a unique and unambiguous solution.

As I was continuing to dig through the problem, reading through the sections I'd just read on sample estimators, I had a revelation. (Another aside: "sample estimators" use the data you have to predict data you don't, like estimating the height of males in North America from a random sample of guys across the country; "unbiased estimators" may be wrong, but their errors are grouped around the true value.) The formula for the unbiased sample estimator of the variance doesn't look quite like the matrix formula - but it depends on the unbiased estimator of the sample mean. Suddenly, I felt that I understood why PCA data had to have a mean of 0: not by driving forward from known facts and connecting their inevitable conclusions, but by driving backwards from known facts to hypothesize a connection which I could then explore and see. I even briefly wrote a draft of the ideas behind this essay - then set out to prove what I thought I'd seen. Setting the mean of the samples to zero made the sample mean drop out of the sample variance - and then the matrix multiplication formula dropped out. Then I knew I understood why PCA data had to have a mean of 0 - or how to rework PCA to deal with data which had a nonzero mean.
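Here's a minimal check of that fact in Python (numpy only; the synthetic data is mine):

```python
import numpy as np

# Minimal check, with synthetic data: (1/(m-1)) X^T X only matches the
# sample covariance once the sample mean has been subtracted out.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))  # design matrix, mean != 0
m = X.shape[0]

# Uncentered, the matrix formula does NOT give the sample covariance...
print(np.allclose(X.T @ X / (m - 1), np.cov(X, rowvar=False)))   # False

# ...but after centering, the sample-mean term drops out of the variance
# formula, and the two agree - which is why PCA wants zero-mean data.
Xc = X - X.mean(axis=0)
print(np.allclose(Xc.T @ Xc / (m - 1), np.cov(X, rowvar=False))) # True
```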
This I'd call deep understanding: reasoning backwards from what we know to provide reasons for why things are the way they are. A recent book on science I read said that some regularities, like the length of the day, may be predictive, but other regularities, like the tides, cry out for explanation. And once you understand Newton's laws of motion and gravitation, the mystery of the tides is readily solved - the answer falls out of inertia, angular momentum, and gravitational gradients. With apologies to Larry Niven: of course a species that understands gravity will be able to predict tides.

The brain does more than just remember and predict to guide our next actions: it builds structures that help us understand the world on a deeper level, teasing out rules and regularities that help us not just plan, but strategize. Detective Benoit Blanc from the movie Knives Out claimed to "anticipate the terminus of gravity's rainbow" to help him solve crimes: realizing how gravity makes projectiles arc, using that to understand why the trajectory must be the observed parabola, and strolling to the target. So I'd argue that true understanding is not just forward-deriving inferences from known rules, but also backward-deriving causes that can explain behavior. And this means computing the inverse of whatever forward prediction matrix you have, which is a more difficult and challenging problem, because that matrix may not have a well-defined inverse.

So true understanding is indeed a deep and interesting problem! But even if we teach our computers to understand this way ... I suspect that this won't exhaust what we need to understand about understanding. For example: the dictionary definitions I've looked up don't mention it, but the idea of seeking a root cause seems embedded in the word "under-standing" itself ... which makes me suspect that the other half of the word, "standing," might hint at the stability, the reliability, of the inferences we need to be able to make to truly understand anything. I don't think we've reached that level of understanding of understanding yet.

-the Centaur

Pictured: Me working on a problem in a bookstore. Probably not this one.

Work, Finish, Publish!

taidoka 0
So I think a lot about how to be a better scientist, and during my reading I found a sparkly little gem by one of the greatest experimentalists of all time, Michael Faraday. It's quoted in Analysis and Presentation of Experimental Results as above, but from Wikiquote we get the whole story:
"The secret is comprised in three words — Work, finish, publish." His well-known advice to the young William Crookes, who had asked him the secret of his success as a scientific investigator, as quoted in Michael Faraday (1874) by John Hall Gladstone, p. 123
Well said. The middle part often seems the hardest for many people, in my experience: it's all too easy to work on something without finishing it, or to rush to publish something before it's really ready. The hard part is pushing through all three, in the right order, with the appropriate level of effort.

-the Centaur

Pictured: Michael Faraday, photograph by Maull & Polyblank. Credit: Wellcome Collection. CC BY.

The Sole Test of Any Idea

taidoka 0
Inspirational physicist Richard Feynman once said "the sole test of any idea is experiment." I prefer the formulation "the sole test of any idea open to observation is experiment," because opening our ideas to observation - rather than relying on just belief, instrumentation, or arguments - is often the hardest challenge in making progress on otherwise seemingly unresolvable problems.

-the Centaur