So, I'm proud to announce my next venture: Logical Robotics, a robot intelligence firm focused on making learning robots work better for people. My research agenda is to combine the latest advances in deep learning with the rich history of classical artificial intelligence, using human-robot interaction research and my years of experience working on products and benchmarking to help robots make a positive impact.
Recent advances in large language model planning, combined with deep learning of robotic skills, have enabled almost magical developments in explainable artificial intelligence: it is now possible to ask robots to do things in plain language, and for the robots to write their own programs to accomplish those goals, building on deep-learned skills while reporting results back in plain language. But applying these technologies to real problems will require a deep understanding of both robot performance benchmarks to refine those skills and human psychological studies to evaluate how these systems benefit human users, particularly in the areas of social robotics where robots work in crowds of people.
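To make that pattern concrete, here's a minimal sketch - every function name here is hypothetical, a stand-in for a deep-learned skill, not any particular system's API - of what "robots writing their own programs" looks like: a language model generates a small program over skill primitives, and the result comes back in plain language.

```python
# A minimal sketch of the plan-as-code pattern described above; all names
# are hypothetical stand-ins for deep-learned skill primitives.

def find(desc):
    """Stand-in for a learned perception skill."""
    print(f"locating the {desc}")
    return desc

def pick(obj):
    """Stand-in for a learned grasping skill."""
    print(f"picking up the {obj}")

def place(obj, dest):
    """Stand-in for a learned placement skill."""
    print(f"placing the {obj} in the {dest}")

# In a real system, an LLM would generate this snippet from the request
# "Put the apple in the bowl" plus the skill signatures above.
generated_plan = 'pick(find("apple")); place("apple", find("bowl"))'
exec(generated_plan)
print("Done: the apple is in the bowl.")  # plain-language report back
```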
Logical Robotics will begin accepting new clients in May, after my obligations to my previous employer have come to a close (and I have taken a break after 17 years of work at the Search Engine That Starts With a G). In the meantime, I am available to answer general questions about what we'll be doing; if you're interested, please feel free to drop me a line via centaur at logicalrobotics.com or take a look at our website.
It's been a difficult few weeks, partly due to "the Kerfluffle," which I hope to blog about shortly (those on my LinkedIn have seen it already), but equally due to a Stanford extension class I was taking on Deep Reinforcement Learning (XCS234). Speaking as an expert in this area seeking to keep my skills sharp, I can highly recommend it: I definitely learned some things, and according to the graphs, so did my programs.
Finally, that's over, and I have a moment to breathe.
And maybe start blogging again.
-the Centaur
Pictured: A mocha from Red Rock Cafe, excellent as always, and a learning curve from one of my programs from class (details suppressed since we're not supposed to share the assignments).
... WordPress's block editor seems to be making my old non-block-editor posts turn into solid walls of text. See the post "Pascal's Wager and Purchasing Parsley":
Yeah, it's not supposed to be looking like that. Gotta track those down and fix them.
In other news, my Half-Cheetah policy is successfully training to "expected" levels of performance. Yay! I guess that means my code for the assignment is ... sorta correct? Time to clean it up and submit it.
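For the curious, here's a generic sketch of the kind of check I mean - not the assignment code, just the standard recipe of rolling a policy out and averaging its returns, assuming the classic Gym API where step() returns (obs, reward, done, info):

```python
# A generic sketch (not the class assignment) of checking whether a trained
# policy reaches "expected" performance: roll it out and average the returns.
# Assumes the classic Gym API: step() returns (obs, reward, done, info).
import gym

def average_return(env_name, policy, episodes=10):
    env = gym.make(env_name)
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
    return total / episodes

# e.g. average_return("HalfCheetah-v2", my_policy) - the passing threshold
# depends on the task and the course rubric.
```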
What happens when deep learning hits the real world? Find out at the Embodied AI Workshop this Sunday, June 20th! We’ll have 8 speakers, 3 live Q&A sessions with questions on Slack, and 10 embodied AI challenges. Our speakers will include:
Motivation for Embodied AI Research
Hyowon Gweon, Stanford
Embodied Navigation
Peter Anderson, Google
Aleksandra Faust, Google
Robotics
Anca Dragan, UC Berkeley
Chelsea Finn, Stanford / Google
Akshara Rai, Facebook AI Research
Sim-2-Real Transfer
Sanja Fidler, University of Toronto / NVIDIA
Konstantinos Bousmalis, Google
... came up as my wife and I were discussing the "creative hangers-on form" of Stigler's Law. The original Stigler's Law, discovered by Robert Merton and popularized by Stephen Stigler, is the idea that in science, no discovery is named after its original discoverer.
In creative circles, it comes up when someone who had little or nothing to do with a creative process takes credit for it. A few of my wife's friends were like this, dropping by to visit her while she was in the middle of a creative project, describing out loud what she was doing, then claiming, "I told her to do that."
In the words of Finn from The Rise of Skywalker: "You did not!"
In computing circles, the old joke referred to the Java programming language. I've heard several variants, but the distilled version is "He thinks he invented Java because he was in the room when someone made coffee." Apparently this is a good description of how Java itself was named, down to at least one person claiming they came up with the name Java and others disputing it - some even suggesting they opposed it and crediting someone else in the room instead, while that person in turn rejected the idea, noting only that there was some coffee in the room from Peet's.
Hail, fellow adventurers: to prove I do something more than just draw and write, I'd like to send out a reminder of the Second Embodied AI Workshop at the CVPR 2021 computer vision conference. In the last ten years, artificial intelligence has made great advances in recognizing objects, understanding the basics of speech and language, and recommending things to people. But interacting with the real world presents harder problems: noisy sensors, unreliable actuators, incomplete models of our robots, building good simulators, learning over sequences of decisions, transferring what we've learned in simulation to real robots, or learning on the robots themselves.
The Embodied AI Workshop brings together many researchers and organizations interested in these problems, and also hosts nine challenges which test point, object, interactive and social navigation, as well as object manipulation, vision, language, auditory perception, mapping, and more. These challenges enable researchers to test their approaches on standardized benchmarks, so the community can more easily compare what we're doing. I'm most involved as an advisor to the Stanford / Google iGibson Interactive / Social Navigation Challenge, which forces robots to maneuver around people and clutter to solve navigation problems. You can read more about the iGibson Challenge at their website or on the Google AI Blog.
Most importantly, the Embodied AI Workshop has a call for papers, with a deadline of TODAY.
Call for Papers
We invite high-quality 2-page extended abstracts in relevant areas, such as:
Simulation Environments
Visual Navigation
Rearrangement
Embodied Question Answering
Simulation-to-Real Transfer
Embodied Vision & Language
Accepted papers will be presented as posters. These papers will be made publicly available in a non-archival format, allowing future submission to archival journals or conferences.
Submission
The submission deadline is May 14th (Anywhere on Earth). Papers should be no longer than 2 pages (excluding references) and styled in the CVPR format. Paper submissions are now open.
I assume anyone submitting to this already has their paper well underway, but this is your reminder to git'r done.
Christianity is a tall ask for many skeptically-minded people, especially if you come from the South, where a lot of folks express Christianity in terms of having a close personal relationship with a person claimed to be invisible, intangible and yet omnipresent, despite having been dead for 2000 years.
On the other hand, I grew up with a fair number of Christians who seem to have no skeptical bones at all, even about the slightest and most explainable of miracles - like my relative who went on a pilgrimage to the Virgin Mary apparitions at Conyers and came back "with their silver rosary having turned to gold."
Or, perhaps - not to be a Doubting Thomas - it was always of a yellowish hue.
Being a Christian isn't just a belief, it's a commitment. Being a Christian is hard, and we're not supposed to throw up stumbling blocks for other believers. So, when I encounter stories like these, which don't sound credible to me and which I don't need to support my faith, I often find myself biting my tongue.
But despite these stories not sounding credible, I do nevertheless admit that they're technically possible. In the words of one comedian, "The Virgin Mary has got the budget for it," and in a world where every observed particle event contains irreducible randomness, God has left Himself the room He needs.
But there's a long tradition in skeptical thought to discount rare events like alleged miracles, rooted in Enlightenment philosopher David Hume's essay "Of Miracles". I almost wrote "scientific thought", but this idea is not at all scientific - it's actually an injection of one of philosophy's worst sins into science.
Philosophy! Who needs it? Well, as Ayn Rand once said: everyone. Philosophy asks the basic questions What is there? (ontology), How do we know it? (epistemology), and What should we do? (ethics). The best philosophy illuminates possibilities for thought and persuasively argues for action.
But philosophy, carving its way through the space of possible ideas, must necessarily operate through arguments, principally verbal arguments which can never conclusively convince. To get traction, we must move beyond argument to repeatable reasoning - mathematics - backed up by real-world evidence.
And that's precisely what was happening right as Hume was working on his essay "Of Miracles" in the 1740s: the laws of probability and chance were being worked out by Hume's contemporaries, some of whom he corresponded with, but he couldn't wait - or couldn't be bothered to learn - their real findings.
I'm not trying to be rude to Hume here, but making a specific point: Hume wrote about evidence, and people claim his arguments are based in rationality - but Hume's arguments are only qualitative, and the quantitative mathematics of probability then being developed didn't support his idea.
But they can reproduce his idea, and the ideas of the credible believer, in a much sounder framework.
In all fairness, it's best not to be too harsh with Hume, who wrote "Of Miracles" almost twenty years before Reverend Thomas Bayes' "An Essay towards solving a Problem in the Doctrine of Chances," the work which gave us Bayes' Theorem, which became the foundation of modern probability theory.
If the ground is wet, how likely is it that it rained? Intuitively, this depends on how likely it is that the rain would wet the ground, and how likely it is to rain in the first place, discounted by the chance the ground would be wet on its own, say from a sprinkler system.
In Greenville, South Carolina, it rains a lot, wetting the ground, which stays wet because it's humid, and sprinklers don't run all the time, so a wet lawn is a good sign of rain. Ask that question in Death Valley, with rare rain, dry air - and you're watering a lawn? Seriously? - and that calculus changes considerably.
Bayes' Theorem formalizes this intuition. It tells us the probability of an event given the evidence is determined by the likelihood of the evidence given the event, times the probability of the event, divided by the probability of the evidence happening all by its lonesome.
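In symbols, with the wet-ground example as a key:

```latex
P(\mathrm{rain} \mid \mathrm{wet}) =
  \frac{P(\mathrm{wet} \mid \mathrm{rain}) \, P(\mathrm{rain})}{P(\mathrm{wet})}
```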
Since Bayes's time, probabilistic reasoning has been considerably refined. In the book Probability Theory: The Logic of Science, E. T. Jaynes, a twentieth-century physicist, shows probabilistic reasoning can explain cognitive "errors," political controversies, skeptical disbelief and credulous belief.
Jaynes's key idea is that for things like commonsense reasoning, political beliefs, and even interpreting miracles, we aren't combining evidence we've collected ourselves in a neat Bayesian framework: we're combining claims provided to us by others - and must now rate the trustworthiness of the claimer.
In our rosary case, the claimer drove down to Georgia to hear a woman speak at a farmhouse. I don't mean to throw up a stumbling block to something that's building up someone else's faith, but when the Bible speaks of a sign not being given to this generation, I feel like it's speaking to us today.
But, whether you see the witness as credible or not, Jaynes points out we also weigh alternative explanations. This doesn't affect judging whether a wet lawn means we should bring an umbrella, but when judging a silver rosary turning to gold, there are so many alternatives: lies, delusions, mistakes.
Jaynes shows, with simple math, that when we're judging a claim of a rare event with many alternative explanations, it is our trust in the claimer that dominates the change in our probabilistic beliefs. If we trust the claimer, we're likely to believe the claim; if we distrust the claimer, we're likely to mistrust the claim.
What's worse, there's a feedback loop between the trust and belief: if we trust someone, and they claim something we come to believe is likely, our trust in them is reinforced; if we distrust someone, and they claim something we come to believe is not likely, our distrust of them is reinforced too.
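A toy calculation shows the effect - my numbers and simplifications here, not Jaynes's exact development: for a sufficiently rare event, the posterior probability tracks our trust in the claimer, not the event itself.

```python
# A toy sketch (my numbers, not Jaynes's exact development) of why trust in
# the claimer dominates belief in a rare-event claim: given that someone
# claims the event happened, how probable is the event?

def posterior_given_claim(p_event, p_trust):
    """P(event | claim) by Bayes' rule.

    p_event: prior probability of the rare event.
    p_trust: probability the claimer reports truthfully; with probability
             (1 - p_trust) the claim arises anyway (lie, delusion, mistake).
    """
    p_claim = p_trust * p_event + (1 - p_trust) * (1 - p_event)
    return p_trust * p_event / p_claim

# With a one-in-a-million prior, even near-total trust leaves the claim
# unlikely - and any distrust at all crushes it.
for trust in (0.5, 0.9, 0.99, 0.999):
    print(f"trust={trust}: P(event|claim)={posterior_given_claim(1e-6, trust):.2e}")
```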
It shouldn't take a scientist or a mathematician to realize that this pattern is a pathology. Regardless of what we choose to believe, the actual true state of the world is a matter of natural fact. It did or did not rain, regardless of whether the ground is wet; the rosary did or did not change, whether it looks gold.
Ideally, whether you believe in the claimer - your opinions about people - shouldn't affect what you believe about reality - the facts about the world. But of course, it does. This is the real problem with rare events, much less miracles: they're resistant to experiment, which is our normal way out of this dilemma.
Many skeptics argue we should completely exclude the possibility of the supernatural. That's not science, it's just atheism in a trench coat trying to sell you a bad idea. What is scientific, in the words of Newton, is excluding from our scientific hypotheses any causes not necessary or sufficient to explain phenomena.
A one-time event, such as my alleged phone call to my insurance agent today to talk about a policy for my new car, is strictly speaking not a subject for scientific explanation. To analyze the event, it must be in a class of phenomena open to experiments, such as cell phone calls made by me, or some such.
Otherwise, it's just a data point. An anecdote, an outlier. If you disbelieve me - if you check my cell phone records and argue it didn't happen - scientifically, that means nothing. Maybe I used someone else's phone because mine was out of charge. Maybe I misremembered a report of a very real event.
Your beliefs don't matter. I'll still get my insurance card in a couple of weeks.
So-called "supernatural" events, such as the alleged rosary transmutation, fall into this category. You can't experiment on them to resolve your personal bias, so you have to fall back on your trust for the claimer. But that trust is, in a sense, a personal judgment, not a scientific one.
Don't get me wrong: it's perfectly legitimate to exclude "supernatural" events from your scientific theories - I do, for example. We have to: following Newton, for science to work, we must first provide as few causes as possible, with as many far-reaching effects as possible, until experiment says otherwise.
But excluding rare events from our scientific view of the world forecloses the ability of observation to revise our theories. And excluding supernatural events from our broader view of the world is not a requirement of science, but a personal choice - a deliberate choice not to believe.
That may be right. That may be wrong. What happens, happens, and doesn't happen any other way. Whether that includes the possibility of rare events is a matter of natural fact, not personal choice; whether that includes the possibility of miracles is something you have to take on faith.
-the Centaur
Pictured: Allegedly, Thomas Bayes, though many have little faith in the claimants who say this is him.
If you've ever gone to a funeral, watched a televangelist, or been buttonholed by a street preacher, you've probably heard Christianity is all about saving one's immortal soul - by believing in Jesus, accepting the Bible's true teaching on a social taboo, or going to the preacher's church of choice.
(Only the first of these actually works, by the way).
But what the heck is a soul? Most religious people seem convinced that we've got one, some ineffable spiritual thing that isn't destroyed when you die but lives on in the afterlife. Many scientifically minded people have trouble believing in spirits and want to wash their hands of this whole soul idea.
Strangely enough, modern Christian theology doesn't rely too much on the idea of the soul. God exists, of course, and Jesus died for our sins, sending the Holy Spirit to aid us; as for what to do with that information, theology focuses less on what we are and more on what we should believe and do.
If you really dig into it, Christian theology gets almost existential, focusing on us as living beings, present here on the Earth, making decisions and taking consequences. Surprisingly, when we die, our souls don't go to heaven: instead, we're just dead, waiting for the Resurrection and the Final Judgment.
(About that, be not afraid: Jesus, Prince of Peace, is the Judge at the Final Judgment).
This model of Christianity doesn't exclude the idea of the soul, but it isn't really needed: When we die, our decision making stops, defining our relationship to God, which is why it's important to get it right in this life; when it's time for the Resurrection, God has the knowledge and budget to put us back together.
That's right: according to the standard interpretation of the Bible as recorded in the Nicene creed, we're waiting in joyful hope for a bodily resurrection, not souls transported to a purely spiritual Heaven. So if there's no need for a soul in this picture, is there any room for it? What is the idea of the soul good for?
Well, quite a lot, as it turns out.
The theology I'm describing should be familiar to many Episcopalians, but it's more properly Catholic, and more specifically "Thomistic": teachings based on the writings of Saint Thomas Aquinas, a thirteenth-century friar who was recognized - both then and now - as one of the greatest Christian philosophers.
Aquinas was a brilliant man who attempted to reconcile Aristotle's philosophy with Church doctrine. The synthesis he produced was penetrating, surprisingly deep, and, at least in part, documented in books which are packed in boxes in my garage. So, at best, I'm going to riff on Thomas here.
Ultimately, that's for the best. Aquinas's writings predate the scientific revolution: they use a scholastic style of argument which by its nature cannot be conclusive, and they're built on a foundation of claims about the world and human will which have been superseded by scientific findings on physics and psychology.
But the early date of Aquinas's writings affects his theology as well. For example (riffing as best I can without the reference book I want), Aquinas was convinced that the rational human soul necessarily had to be immaterial because it could represent abstract ideas, which are not physical objects.
But now we're good at representing abstract ideas in physical objects. In fact, the history of the past century and a half of mathematics, logic, computation and AI can be viewed as abstracting human thought processes and making them reliable enough to implement in physical machines.
Look, guys - I am not, for one minute, going to get cocky about how much we've actually cracked of the human intellect, much less the soul. Some areas, like cognitive skills acquisition, we've done quite well at; others, like consciousness, are yielding to insights; others, like emotion, are dauntingly intractable.
But it's no longer a logical necessity to posit an intangible basis for the soul, even if practically it turns out to be true. But digging even deeper into Aquinas's notion of a rational soul helps us understand what it is - and why the decisions we make in this life are so important, and even the importance of grace.
The idea of a "form" in Thomistic philosophy doesn't mean shape: riffing again, it means function. The form of a hammer is not its head and handle, but that it can hammer. This is very similar to the modern notion of functionalism in artificial intelligence - the idea that minds are defined by their computations.
Aquinas believed human beings were distinguished from animals by their rational souls, which were a combination of intellect and will. "Intellect" in this context might be described in artificial intelligence terms as supporting a generative knowledge level: the ability to represent essentially arbitrary concepts.
Will, in contrast, involves selecting an ideal model of yourself and attempting to guide your actions to follow it. This is a more sophisticated form of decision making than typically used in artificial intelligence; one might describe it as a reinforcement learning agent guided by a self-generated normative model.
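In AI terms, a cartoon of that idea - entirely my own framing, not Aquinas's and not any standard algorithm - might look like this: an agent that generates its own reward from an ideal model of itself, and learns to act accordingly.

```python
# A toy sketch - my own framing, not Aquinas's nor a standard algorithm - of
# "will" as a learner guided by a self-generated normative model: the agent
# rewards itself for matching what its ideal self would do.
import random

IDEAL_SELF = {"see_need": "help", "be_wronged": "forgive"}  # hypothetical norms
ACTIONS = ["help", "ignore", "forgive", "retaliate"]
q = {}  # (situation, action) -> estimated value

def step(situation, lr=0.5, epsilon=0.2):
    # epsilon-greedy choice over current value estimates
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q.get((situation, a), 0.0))
    # self-generated reward: did we do what our ideal self would do?
    reward = 1.0 if action == IDEAL_SELF.get(situation) else 0.0
    old = q.get((situation, action), 0.0)
    q[(situation, action)] = old + lr * (reward - old)
    return action

for _ in range(500):
    step(random.choice(list(IDEAL_SELF)))
print(max(ACTIONS, key=lambda a: q.get(("see_need", a), 0.0)))  # usually "help"
```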
What this means, in practice, is that the idea of believing in Jesus and choosing to follow Him isn't simply a good idea: it corresponds directly to the basic functions of the rational soul - intellect, forming an idea of Jesus as a (divinely) good role model, and attempting to follow in His footsteps in our choice of actions.
But the idea of the rational soul being the form of the body isn't just its instantaneous function at one point in time. God exists out of time - and all our thoughts and choices throughout our lives are visible to Him. Our souls are the sum of all of these - making the soul the form of the body over our entire lives.
This means the history of our choices lives in God's memory, whether it's helping someone across the street, failing to forgive an irritating relative, going to confession, or taking communion. Even sacraments like baptism that supposedly "leave an indelible spiritual character on the soul" fit in this model.
This model puts following Jesus - trying to do good and avoid evil, and partaking in the sacraments - in perspective. God knows what we sincerely believe in our hearts, whether we live up to it or not, and is willing to cut us slack through the mechanisms of worship and grace that add to our permanent record.
Whether souls have a spiritual nature or not - whether they come from the Guf, are joined to our bodies in life, and hang out in Hades after death awaiting reunion at the Resurrection, or whether they simply don't - their character is affected by what we believe, what we do, and how we worship here and now.
And that's why it's important to follow Jesus on this Earth, no matter what happens in the afterlife.
Alan Turing, rendered over my own roughs using several layers of tracing paper. I started with the below rough, in which I tried to pay careful attention to the layout of the face - note the use of the 'third eye' for spacing and curved contour lines - and the relationship of the body, the shoulders and so on.
I then refined that into the following drawing, trying to correct the position and angles of the eyes and mouth - since I knew from previous drawings that I tended to straighten things that were angled, I looked for those flaws and attempted to fix them. (Still screwed up the hair and some proportions).
This was close enough for me to get started on the rendering. In the end, I like how it came out, even though I flattened the curves of the hair and slightly squeezed the face and pointed the eyes slightly wrong, as you can see if you compare it to the following image from this New Yorker article:
-the Centaur