
Posts published in “Research”

Posts and pages about my scientific research and the scientific topics I’m interested in.

Announcing the Embodied AI Workshop #4 at CVPR 2023


Hey folks, I am proud to announce the 4th annual Embodied AI Workshop, held once again at CVPR 2023! EAI is a multidisciplinary workshop bringing together computer vision researchers, machine learning researchers, and roboticists to study the problem of creating intelligent systems that interact with their worlds.

For a highlight of previous workshops, see our Retrospectives paper. This year, EAI #4 will feature dozens of researchers, over twenty participating institutions, and ten distinct embodied AI challenges. Our three main themes for this year's workshop are:

  • Foundation Models: large, pretrained models that can solve many tasks few-shot or zero-shot
  • Generalist Agents: agents capable of solving a wide variety of problems
  • Sim-to-Real Transfer: learning in simulation but deploying in reality

We will have presentations from all the challenges discussing their tasks, progress in the community, and winning approaches. We will also have six speakers on a variety of topics, and at the end of the workshop I'll be moderating a panel discussion among them.

I hope you can join us, in person or virtually, at EAI #4 at CVPR 2023 in Vancouver!

-the Centaur

[forty-seven] minus twenty-one: i hear there’s a new ai hotness


So, automatic image generation is a controversial thing I think about a lot. Perhaps I should comment on it sometime. Regardless, I thought I'd show off the challenges that come from using this technology with a simple example. If you recall, I did a recent post with a warped bookstore picture, and attempted to regenerate it using generative AI with Midjourney. Unfortunately, the prompt

a magical three-dimensional impossible bookstore in the style of M.C. Escher

me

failed to pick up the image for some reason. After a few iterations with the Midjourney Discord interface, I got the very nice, but nonsensical and generic, AI generated image you see up top. After playing around with the API, I realized that I likely had formulated my prompt wrong, and tried again to include this image:

On the second pass, I got another, more on-point, yet still nonsensical image as you see below:

These systems do LOOK impressive. But they work like ... amateurs who've learned to render well. They can produce things that are cool, but it's very hard to make them produce something on point.

And this is above and beyond the massive copyright issues that arise from a system that regurgitates other people's copyrighted art, much less the impact on jobs, much less the impact on the human soul.

-the Centaur

[thirty-nine] minus twenty-one: what a team effort


Wow. We're done with the paper. And what a team effort! So many people came together on this one - research, infra, operations, human-robot interaction folks, the whole nine yards. It's amazing to me how interdisciplinary robotics is becoming. A few years ago 7 authors on a paper was unusual. But out of the last 5 papers I helped submit, the two shortest papers had 8 authors, and all the others were 15 or more.

And it's not citation inflation. True, this most recent paper had a smaller set of authors actively working on the draft, collating contributions from a larger group running the experiments ... but the previous paper had more than 25 authors, all of whom materially contributed content directly to the draft.

What a wonderful time to be alive.

And to recover from food poisoning.

-the Centaur

Pictured: this afternoon's draft of the paper, just prior to a video conference to hammer out some details.

[twenty-eight] minus twenty: re-ju-ven-ate!


Oh, look, it's a Dalek acting as a security guard! Nothing can go wrong with this trend. :-/

Though, as a roboticist seeing this gap between terminals, I can't help but wonder whether it just undocked from its charger, whether it is about to dock with its charger, whether it needs help from a human to dock with its charger, or whether it has failed to dock with its charger and is about to run out of power in the dark and the cold where all the wolves are.

-the Centaur

Announcing Logical Robotics


So, I'm proud to announce my next venture: Logical Robotics, a robot intelligence firm focused on making learning robots work better for people. My research agenda is to combine the latest advances of deep learning with the rich history of classical artificial intelligence, using human-robot interaction research and my years of experience working on products and benchmarking to help robots make a positive impact.

Recent advances in large language model planning, combined with deep learning of robotic skills, have enabled almost magical developments in explainable artificial intelligence, where it is now possible to ask robots to do things in plain language and for the robots to write their own programs to accomplish those goals, building on deep learned skills but reporting results back in plain language. But applying these technologies to real problems will require a deep understanding of both robot performance benchmarks to refine those skills and human psychological studies to evaluate how these systems benefit human users, particularly in the areas of social robotics where robots work in crowds of people.
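The "plain language in, skill program out" loop described above can be caricatured in a few lines. This is purely an illustrative sketch with made-up skill names, not Logical Robotics code: a real system would call a large language model to write the program, where this toy planner just pattern-matches one request.

```python
# Toy sketch of LLM-style robot planning: a request in plain language
# is turned into a program over deep-learned skill primitives, and the
# results are reported back in plain language. The skill names and the
# hard-coded planner are hypothetical stand-ins.

SKILLS = {
    "pick": lambda obj: f"picked {obj}",
    "place": lambda obj: f"placed {obj}",
}

def plan(request):
    """Stand-in for an LLM planner: map a request to a skill sequence."""
    if request == "put the cup on the table":
        return [("pick", "cup"), ("place", "cup")]
    return []  # a real planner would generalize; this toy one cannot

def execute(program):
    """Run each skill and collect its plain-language report."""
    return [SKILLS[name](arg) for name, arg in program]

log = execute(plan("put the cup on the table"))
```

The point of the pattern is the division of labor: the language model handles task decomposition and reporting, while the learned skills handle perception and control.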

Logical Robotics will begin accepting new clients in May, after my obligations to my previous employer have come to a close (and I have taken a break after 17 years of work at the Search Engine That Starts With a G). In the meantime, I am available to answer general questions about what we'll be doing; if you're interested, please feel free to drop me a line via centaur at logicalrobotics.com or take a look at our website.

-the Centaur

do, or do not. there is no blog


One reason blogging suffers for me is that I always prioritize doing over blogging. That sounds cool and all, but it's actually just another excuse. There's always something more important than doing your laundry ... until you run out of underwear. Blogging has no such hard failure mode, so it's even easier to fall out of the habit. But the reality is, just like laundry, if you set aside a little time for it, you can stay ahead - and you'll feel much healthier and more comfortable if you do.

-the Centaur

Pictured: "Now That's A Steak Burger", a 1-pound monster from Willard Hicks, where I took a break from my million other tasks to catch up on Plans and the Structure of Behavior, the book that introduced the idea of the test-operate-test-exit (TOTE) loop as a means for organizing behavior, a device I'm finding useful as I delve into the new field of large language model planning.
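For readers who haven't run into it, the TOTE loop is simple enough to caricature in code. Here's a minimal, hypothetical sketch (the function names are mine, not from the book): test whether the goal holds, operate to reduce the mismatch, test again, and exit when the goal is achieved.

```python
# Sketch of Miller, Galanter & Pribram's test-operate-test-exit (TOTE)
# loop: keep acting until a test of the goal condition passes.

def tote(test, operate, max_steps=100):
    """Run `operate` until `test` passes or the step budget runs out."""
    for _ in range(max_steps):
        if test():          # Test: does the world match the goal?
            return True     # Exit: goal achieved
        operate()           # Operate: act to reduce the mismatch
    return test()

# Toy example: "hammer until the nail is flush", the classic TOTE case.
nail = {"height": 3}
flush = lambda: nail["height"] <= 0
hammer = lambda: nail.update(height=nail["height"] - 1)
done = tote(flush, hammer)
```

The appeal for behavior organization is that TOTE units nest: the "operate" step can itself be another TOTE loop.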

What is “Understanding”?

When I was growing up - or at least when I was a young graduate student in a Schankian research lab - we were all focused on understanding: what did it mean, scientifically speaking, for a person to understand something, and could that be recreated on a computer? We all sort of knew it was what we'd call nowadays an ill-posed problem, but we had a good operational definition, or at least an operational counterexample: if a computer read a story and could not answer the questions that a typical human being could answer about that story, it didn't understand it at all.

But there are at least two ways to define a word. What I'll call a practical definition is what a semanticist might call the denotation of a word: a narrow definition, one which you might find in a dictionary, which clearly specifies the meaning of the concept, like a bachelor being an unmarried man. What I'll call a philosophical definition, the connotations of a word, are the vast web of meanings around the core concept, the source of the fine sense of unrightness that one gets from describing Pope Francis as a bachelor, the nuances of meaning embedded in words that Socrates spent his time pulling out of people, before they went and killed him for being annoying. It's those connotations of "understanding" that made all us Schankians very leery of saying our computer programs fully "understood" anything, even as we were pursuing computer understanding as our primary research goal.

I care a lot about understanding, deep understanding, because, frankly, I cannot effectively do my job of teaching robots to learn if I do not deeply understand robots, learning, computers, the machinery surrounding them, and the problem I want to solve; when I do not understand all of these things, I stumble in the dark, I make mistakes, and end up sad. And it's pursuing a deeper understanding about deep learning where I got a deeper insight into deep understanding.
I was "deep reading" the Deep Learning book (a practice in which I read, or re-read, a book I've read, working out all the equations in advance before reading the derivations), in particular section 5.8.1 on Principal Components Analysis, and the authors made the same comment I'd just seen in the Hands-On Machine Learning book: "the mean of the samples must be zero prior to applying PCA." Wait, what? Why? I mean, thank you for telling me, I'll be sure to do that, but, like ... why?

I didn't follow up on that question right away, because the authors also tossed off an offhand comment like, "XᵀX is the unbiased sample covariance matrix associated with a sample x" and I'm like, what the hell, where did that come from? I had recently read the section on variance and covariance but had no idea why this would be associated with the transpose of the design matrix X multiplied by X itself. (In case you're new to machine learning, if x stands for an example input to a problem, say a list of the pixels of an image represented as a column of numbers, then the design matrix X is all the examples you have, but each example listed as a row. Perfectly not confusing? Great!)

So, since I didn't understand why Var[x] = XᵀX, I set out to prove it myself. (Carpenters say, measure twice, cut once, but they'd better have a heck of a lot of measuring and cutting under their belts - moreso, they'd better know when to cut and measure before they start working on your back porch, or you and they will have a bad time. Same with trying to teach robots to learn: it's more than just practice; if you don't know why something works, it will come back to bite you, sooner or later, so, dig in until you get it). And I quickly found that the "covariance matrix of a variable x" was a thing, and quickly started to intuit that the matrix multiplication would produce it. This is what I'd call surface level understanding: going forward from the definitions to obvious conclusions.
I knew the definition of matrix multiplication, and I'd just re-read the definition of covariance matrices, so I could see these would fit together. But as I dug into the problem, it struck me: true understanding is more than just going forward from what you know: "The brain does much more than just recollect; it inter-compares, it synthesizes, it analyzes, it generates abstractions" - thank you, Carl Sagan. But this kind of understanding is a vast, ill-posed problem - meaning, a problem without a unique and unambiguous solution.

But as I was continuing to dig through the problem, reading through the sections I'd just read on "sample estimators," I had a revelation. (Another aside: "sample estimators" use the data you have to predict data you don't, like estimating the height of males in North America from a random sample of guys across the country; "unbiased estimators" may be wrong but their errors are grouped around the true value.) The formula for the unbiased sample estimator for the variance actually doesn't look quite like the matrix transpose formula - but it depends on the unbiased estimator of the sample mean.

Suddenly, I felt that I understood why PCA data had to have a mean of 0. Not driving forward from known facts and connecting their inevitable conclusions, but driving backwards from known facts to hypothesize a connection which I could explore and see. I even briefly wrote a draft of the ideas behind this essay - then set out to prove what I thought I'd seen. Setting the mean of the samples to zero made the sample mean drop out of the sample variance - and then the matrix multiplication formula dropped out. Then I knew I understood why PCA data had to have a mean of 0 - or how to rework PCA to deal with data which had a nonzero mean. This is what I'd call deep understanding: reasoning backwards from what we know to provide reasons for why things are the way they are.
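The centering argument above can be checked numerically. Here's a toy one-dimensional sketch (with made-up data): the unbiased sample variance subtracts the sample mean, and once the data is centered, that formula collapses to a plain sum of squares - the scalar version of the XᵀX form.

```python
# Why PCA assumes zero-mean data: for centered samples, the unbiased
# sample variance reduces to a sum of squares, which in matrix form
# is XᵀX / (m - 1). Toy 1-D example with made-up numbers.

def sample_variance(xs):
    """Unbiased sample variance: sum((x - mean)^2) / (m - 1)."""
    m = len(xs)
    mean = sum(xs) / m
    return sum((x - mean) ** 2 for x in xs) / (m - 1)

def sum_of_squares_estimate(xs):
    """The 'XᵀX' form, valid only when the sample mean is zero."""
    m = len(xs)
    return sum(x * x for x in xs) / (m - 1)

data = [2.0, -1.0, 4.0, -5.0, 1.0]          # sample mean is 0.2, not 0
mean = sum(data) / len(data)
centered = [x - mean for x in data]          # now the mean is exactly 0

# After centering, the two formulas agree; on the raw data they don't.
```

The same cancellation happens per-entry in the full covariance matrix, which is why the books can write the estimator as XᵀX/(m-1) once the columns of X have been centered.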
A recent book on science I read said that some regularities, like the length of the day, may be predictive, but other regularities, like the tides, cry out for explanation. And once you understand Newton's laws of motion and gravitation, the mystery of the tides is readily solved - the answer falls out of inertia, angular momentum, and gravitational gradients. With apologies to Larry Niven, of course a species that understands gravity will be able to predict tides.

The brain does do more than just remember and predict to guide our next actions: it builds structures that help us understand the world on a deeper level, teasing out rules and regularities that help us not just plan, but strategize. Detective Benoit Blanc from the movie Knives Out claimed to "anticipate the terminus of gravity's rainbow" to help him solve crimes: realizing how gravity makes projectiles arc, using that to understand why the trajectory must be the observed parabola, and strolling to the target.

So I'd argue that true understanding is not just forward-deriving inferences from known rules, but also backward-deriving causes that can explain behavior. And this means computing the inverse of whatever forward prediction matrix you have, which is a more difficult and challenging problem, because that matrix may not have a well-defined inverse. So true understanding is indeed a deep and interesting problem! But, even if we teach our computers to understand this way ... I suspect that this won't exhaust what we need to understand about understanding. For example: the dictionary definitions I've looked up don't mention it, but the idea of seeking a root cause seems embedded in the word "under-standing" itself ... which makes me suspect that the other half of the word, standing, itself might hint at the stability, the reliability of the inferences we need to be able to make to truly understand anything. I don't think we've reached that level of understanding of understanding yet.
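That "may not have a well-defined inverse" point is easy to see concretely in the smallest possible case. Here's a toy 2x2 sketch (made-up matrices): a forward map is invertible only when its determinant is nonzero, so when two different causes produce the same observation, reasoning backward from effect to cause is genuinely ill-posed.

```python
# Toy illustration of ill-posed backward inference: a 2x2 forward map
# with dependent rows (determinant 0) collapses distinct inputs onto
# the same output, so no inverse map can recover the cause.

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def invert2(m):
    """Inverse of a 2x2 matrix, or None if it is singular."""
    d = det2(m)
    if d == 0:
        return None
    (a, b), (c, d2) = m
    return [[d2 / d, -b / d], [-c / d, a / d]]

forward = [[1.0, 2.0], [2.0, 4.0]]   # rank 1: second row = 2 * first
well_posed = [[2.0, 1.0], [1.0, 1.0]]  # determinant 1: invertible
inv = invert2(well_posed)
```

In practice this is why backward reasoning leans on extra structure (regularization, priors, pseudoinverses) rather than a literal matrix inverse.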
-the Centaur

Pictured: Me working on a problem in a bookstore. Probably not this one.

Robots in Montreal

A cool hotel in old Montreal.

"Robots in Montreal," eh? Sounds like the title of a Steven Moffat Doctor Who episode. But it's really ICRA 2019 - the IEEE Conference on Robotics and Automation, and, yes, there are quite a few robots!

Boston Dynamics quadruped robot with arm and another quadruped.

My team presented our work on evolutionary learning of rewards for deep reinforcement learning, AutoRL, on Monday. In an hour or so, I'll be giving a keynote on "Systematizing Robot Navigation with AutoRL":

Keynote: Dr. Anthony Francis
Systematizing Robot Navigation with AutoRL: Evolving Better Policies with Better Evaluation

Abstract: Rigorous scientific evaluation of robot control methods helps the field progress towards better solutions, but deploying methods on robots requires its own kind of rigor. A systematic approach to deployment can do more than just make robots safer, more reliable, and more debuggable; with appropriate machine learning support, it can also improve robot control algorithms themselves. In this talk, we describe our evolutionary reward learning framework AutoRL and our evaluation framework for navigation tasks, and show how improving evaluation of navigation systems can measurably improve the performance of both our evolutionary learner and the navigation policies that it produces. We hope that this starts a conversation about how robotic deployment and scientific advancement can become better mutually reinforcing partners.

Bio: Dr. Anthony G. Francis, Jr. is a Senior Software Engineer at Google Brain Robotics specializing in reinforcement learning for robot navigation. Previously, he worked on emotional long-term memory for robot pets at Georgia Tech's PEPE robot pet project, on models of human memory for information retrieval at Enkia Corporation, and on large-scale metadata search and 3D object visualization at Google. He earned his B.S. (1991), M.S. (1996) and Ph.D. (2000) in Computer Science from Georgia Tech, along with a Certificate in Cognitive Science (1999). He and his colleagues won the ICRA 2018 Best Paper Award for Service Robotics for their paper "PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning". He's the author of over a dozen peer-reviewed publications and is an inventor on over a half-dozen patents. He's published over a dozen short stories and four novels, including the EPIC eBook Award-winning Frost Moon; his popular writing on robotics includes articles in the books Star Trek Psychology and Westworld Psychology, as well as a Google AI blog article titled Maybe your computer just needs a hug. He lives in San Jose with his wife and cats, but his heart will always belong in Atlanta. You can find out more about his writing at his website.
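The evolutionary reward-tuning idea in the abstract can be caricatured in a few lines. This is a toy sketch, not AutoRL itself: the fitness function is made up, and where this loop scores a candidate directly, AutoRL trains and evaluates a full RL policy for each candidate reward parameterization.

```python
import random

# Toy evolutionary search over reward weights: propose perturbed
# candidates each generation, keep the fittest (elitism), repeat.
# The 'goal' weighting and the fitness function are hypothetical.

def evaluate(reward_weights):
    """Stand-in for training + evaluating a policy under these weights."""
    goal = [0.6, 0.3, 0.1]  # made-up 'good' reward weighting
    return -sum((w - g) ** 2 for w, g in zip(reward_weights, goal))

def evolve(generations=50, population=20, seed=0):
    rng = random.Random(seed)
    best = [rng.random() for _ in range(3)]  # random initial candidate
    for _ in range(generations):
        candidates = [[w + rng.gauss(0, 0.05) for w in best]
                      for _ in range(population)]
        # Keeping `best` in the pool means fitness never regresses.
        best = max(candidates + [best], key=evaluate)
    return best

weights = evolve()
```

The talk's larger point maps onto the `evaluate` stand-in: the better your evaluation of navigation policies, the better signal the evolutionary loop has to work with.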

Looks like I'm on in 15 minutes! Wish me luck.

-the Centaur