HTTPS should now work, so you should not get nastygrams from Chrome anymore. Enjoy the rest of your day.
Upgrading security. May be a no-op from an end-user perspective, but if the site burps, you heard it here first.
tl;dr: to get good at something, you've got to put in a lot of practice
Hail, fellow adventurers! You may have been wondering what's up with the "Drawing Every Day" on this website. Or, hey, maybe you just got here. But I've gotten far enough into it that I feel comfortable taking a short break to tell you about this habit I'm trying to develop.
I've loved comic books since I was a child, and I've been drawing since I was young. I even started working on comics in graduate school, consciously refining my art until I was able to launch a webcomic, f@nu fiku, partially inspired by anime and manga, particularly FLCL.
Then I broke my arm. And while I was recovering, someone stole my laptop. I took the opportunity to switch from Windows to Mac, and, as luck would have it, got my first book contract for FROST MOON. By the time I got enough free time from editing and book launches to go back to the webcomic and pick up where I left off, I found out my hand-crafted webcomic software wouldn't work on the Mac.
The real blow, however, was hidden: my confidence in my artwork had collapsed.
I went from fearlessly putting together two-page spreads way beyond my ability, doing bodies and perspective, and changing my layout theory at the drop of a hat, eventually producing pages that appeared in an art show - to being unable, or more precisely, unwilling to draw at all.
I had become intimidated by - embarrassed by - my art. My wife is also an artist, and is familiar with the phenomenon. She and I talked about the reasons behind this at length, and like the writer's block that keeps writers from writing, one of the things that really affects artists is simply getting started.
If you've only done a handful of drawings, well, then, every one is super important, and there's pressure to make it perfect. But if you've done lots of drawings, then each one is an experiment, and if it doesn't turn out good, well, then, you can always draw another one.
We moved recently, and I made it a priority to set up an art studio. But things by themselves don't create good habits - believe me, I know: purchasing a keyboard and bass guitar all those years ago didn't turn me into a musician, because I didn't build the proper habits around them.
But how do you build a habit if you're too intimidated to get started? At the Write to the End writing group, we tackle it by sitting down to write for 20 minutes, no excuses. At Taos Toolbox, Walter Jon Williams pointed out that this seemingly small amount of writing per day could produce a novel.
So I started to come around to the idea: what if I drew every day?
There's this theory in cognitive science that quantity begets quality. A famous example from the book Art and Fear alleges a ceramics professor graded half of a class on quality, the other half on quantity - but the students who produced more pieces also produced the better work.
There are no secrets: if you want to get good, you've got to put in the work. (Well, there are secrets, but the secret is, you have to put in a hell of a lot of work to take advantage of them). This is such a common thing in webcomics that it has its own TV Tropes page on Art Evolution.
I really want to draw again. I want to make science fiction webcomics like the ones I grew up loving in the 80s and 90s. But to do that, I've got to draw. So, once I finally got settled here and the holidays were in the taillights, once I finally got the Cintiq working ... I started drawing every day.
Fourteen days running so far (counting complex drawings that took 2-3 sessions as one drawing per session). How long does it take to cement a habit? 2-3 months, it sounds like from the online research; so, a good ways to go. If I keep at it, I'll have 70 more drawings, five times as many as I have so far.
I bet I'll see some changes.
I bet if you have something you want to change, start working on it every day, and keep it up for 2-3 months, you may see some changes too.
Best of luck with that! Wish me luck too.
This week has been so bad I feel like I'm under spiritual attack. It was supposed to be a vacation, but both my cats got sick, I got sick myself, and I had to work in the middle of it. I feel like the protagonist of a Neil Gaiman story I read in M is For Magic where a black cat is protecting a home from supernatural assault.
But now both of my cats are coming home. Gabby, the gold guy above, comes home tonight after a serious asthma attack, and Loki is already home after a serious urinary tract blockage.
Here's hoping two cats and prayers put things back on track.
When I was growing up - or at least when I was a young graduate student in a Schankian research lab - we were all focused on understanding: what did it mean, scientifically speaking, for a person to understand something, and could that be recreated on a computer? We all sort of knew it was what we'd call nowadays an ill-posed problem, but we had a good operational definition, or at least an operational counterexample: if a computer read a story and could not answer the questions that a typical human being could answer about that story, it didn't understand it at all.
But there are at least two ways to define a word. What I'll call a practical definition is what a semanticist might call the denotation of a word: a narrow definition, one which you might find in a dictionary, which clearly specifies the meaning of the concept, like a bachelor being an unmarried man. What I'll call a philosophical definition, the connotations of a word, are the vast web of meanings around the core concept, the source of the fine sense of unrightness that one gets from describing Pope Francis as a bachelor, the nuances of meaning embedded in words that Socrates spent his time pulling out of people, before they went and killed him for being annoying.
It's those connotations of "understanding" that made all us Schankians very leery of saying our computer programs fully "understood" anything, even as we were pursuing computer understanding as our primary research goal. I care a lot about understanding, deep understanding, because, frankly, I cannot effectively do my job of teaching robots to learn if I do not deeply understand robots, learning, computers, the machinery surrounding them, and the problem I want to solve; when I do not understand all of these things, I stumble in the dark, I make mistakes, and end up sad.
And it's pursuing a deeper understanding about deep learning where I got a deeper insight into deep understanding. I was "deep reading" the Deep Learning book (a practice in which I read, or re-read, a book I've read, working out all the equations in advance before reading the derivations), in particular section 5.8.1 on Principal Components Analysis, and the authors made the same comment I'd just seen in the Hands-On Machine Learning book: "the mean of the samples must be zero prior to applying PCA."
Wait, what? Why? I mean, thank you for telling me, I'll be sure to do that, but, like ... why? I didn't follow up on that question right away, because the authors also tossed off an offhand comment like, "X⊤X is the unbiased sample covariance matrix associated with a sample x" and I'm like, what the hell, where did that come from? I had recently read the section on variance and covariance but had no idea why this would be associated with the transpose ⊤ of the design matrix X multiplied by X itself. (In case you're new to machine learning, if x stands for an example input to a problem, say a list of the pixels of an image represented as a column of numbers, then the design matrix X is all the examples you have, but each example listed as a row. Perfectly not confusing? Great!)
So, since I didn't understand why Var[x] = X⊤X, I set out to prove it myself. (Carpenters say, measure twice, cut once, but they'd better have a heck of a lot of measuring and cutting under their belts - more so, they'd better know when to cut and measure before they start working on your back porch, or you and they will have a bad time. Same with trying to teach robots to learn: it's more than just practice; if you don't know why something works, it will come back to bite you, sooner or later, so, dig in until you get it). And I quickly found that the "covariance matrix of a variable x" was a thing, and quickly started to intuit that the matrix multiplication would produce it.
This is what I'd call surface level understanding: going forward from the definitions to obvious conclusions. I knew the definition of matrix multiplication, and I'd just re-read the definition of covariance matrices, so I could see these would fit together. But as I dug into the problem, it struck me: true understanding is more than just going forward from what you know: "The brain does much more than just recollect; it inter-compares, it synthesizes, it analyzes, it generates abstractions" - thank you, Carl Sagan. But this kind of understanding is a vast, ill-posed problem - meaning, a problem without a unique and unambiguous solution.
But as I was continuing to dig through the problem, reading through the sections I'd just read on "sample estimators," I had a revelation. (Another aside: "sample estimators" use the data you have to predict data you don't, like estimating the height of males in North America from a random sample of guys across the country; "unbiased estimators" may be wrong but their errors are grouped around the true value). The formula for the unbiased sample estimator of the variance doesn't actually look quite like the matrix transpose formula - but it depends on the unbiased estimator of the sample mean.
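For concreteness, here are the two estimators in question, written out in standard notation (my own transcription, with each example x_i a column vector and the rows of the design matrix X being the examples):

```latex
\hat{\mu} = \frac{1}{m}\sum_{i=1}^{m} x_i,
\qquad
\widehat{\mathrm{Var}}[x] = \frac{1}{m-1}\sum_{i=1}^{m} \left(x_i-\hat{\mu}\right)\left(x_i-\hat{\mu}\right)^{\top}.
```

If the sample mean is zero, the deviations (x_i - μ̂) are just the samples themselves, and the sum of outer products Σ x_i x_i⊤ is exactly X⊤X - which is how the matrix multiplication formula falls out.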
Suddenly, I felt that I understood why PCA data had to have a mean of 0. Not driving forward from known facts and connecting their inevitable conclusions, but driving backwards from known facts to hypothesize a connection which I could explore and see. I even briefly wrote a draft of the ideas behind this essay - then set out to prove what I thought I'd seen. Setting the mean of the samples to zero made the sample mean drop out of sample variance - and then the matrix multiplication formula dropped out. Then I knew I understood why PCA data had to have a mean of 0 - or how to rework PCA to deal with data which had a nonzero mean.
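If you want to see the algebra happen with actual numbers, here's a quick check I can sketch in plain Python (my own toy verification, not code from either book): compute the unbiased sample covariance from its definition, and compare it to X⊤X/(m-1) computed on the mean-centered data.

```python
# Toy check: with the sample mean subtracted out, the unbiased sample
# covariance  Var[x] = 1/(m-1) * sum_i (x_i - mean)(x_i - mean)^T
# reduces to  X^T X / (m-1), where the rows of X are the examples.

def mean(rows):
    m, n = len(rows), len(rows[0])
    return [sum(r[j] for r in rows) / m for j in range(n)]

def center(rows):
    """Subtract the sample mean from every example."""
    mu = mean(rows)
    return [[x - mu[j] for j, x in enumerate(r)] for r in rows]

def sample_cov(rows):
    """Unbiased sample covariance, straight from the definition."""
    m, n = len(rows), len(rows[0])
    mu = mean(rows)
    cov = [[0.0] * n for _ in range(n)]
    for r in rows:
        d = [x - mu[j] for j, x in enumerate(r)]
        for a in range(n):
            for b in range(n):
                cov[a][b] += d[a] * d[b] / (m - 1)
    return cov

def xtx_over_m_minus_1(rows):
    """X^T X / (m-1) for an (already centered) design matrix X."""
    m, n = len(rows), len(rows[0])
    return [[sum(r[a] * r[b] for r in rows) / (m - 1)
             for b in range(n)] for a in range(n)]

X = [[2.0, 0.0], [0.0, 1.0], [3.0, 7.0], [1.0, 4.0]]
Xc = center(X)
# The two formulas agree once the data has zero mean:
assert all(abs(sample_cov(X)[a][b] - xtx_over_m_minus_1(Xc)[a][b]) < 1e-9
           for a in range(2) for b in range(2))
```

Run it on uncentered data without the `center` step and the equality breaks - which is exactly why the books tell you to zero the mean before applying PCA.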
This I'd call deep understanding: reasoning backwards from what we know to provide reasons for why things are the way they are. A recent book on science I read said that some regularities, like the length of the day, may be predictive, but other regularities, like the tides, cry out for explanation. And once you understand Newton's laws of motion and gravitation, the mystery of the tides is readily solved - the answer falls out of inertia, angular momentum, and gravitational gradients. With apologies to Larry Niven, of course a species that understands gravity will be able to predict tides.
The brain does do more than just remember and predict to guide our next actions: it builds structures that help us understand the world on a deeper level, teasing out rules and regularities that help us not just plan, but strategize. Detective Benoit Blanc from the movie Knives Out claimed to "anticipate the terminus of gravity's rainbow" to help him solve crimes; realizing how gravity makes projectiles arc, using that to understand why the trajectory must be the observed parabola, and strolling to the target.
So I'd argue that true understanding is not just forward-deriving inferences from known rules, but also backward-deriving causes that can explain behavior. And that means computing the inverse of whatever forward prediction matrix you have - a much more challenging problem, because that matrix may not have a well-defined inverse. So true understanding is indeed a deep and interesting problem!
But, even if we teach our computers to understand this way ... I suspect that this won't exhaust what we need to understand about understanding. For example: the dictionary definitions I've looked up don't mention it, but the idea of seeking a root cause seems embedded in the word "under-standing" itself ... which makes me suspect that the other half of the word, standing, might itself hint at the stability, the reliability of the inferences we need to be able to make to truly understand anything.
I don't think we've reached that level of understanding of understanding yet.
Pictured: Me working on a problem in a bookstore. Probably not this one.
Marriott, bottom floor, International Hall South. Follow the signs for the author signing, you can't miss it!
So I will be appearing at "Social Media for Authors" at 4pm at Hyatt Embassy CD. Perhaps they're including me as the counterexample. Here's my advice to you on social media for authors: if you get into it, consistently engage it, and don't let anyone bait you into being a jerk. Imagine anything you say could end up on the front page of the New York Times, and you'll be fine.
This has been a great team effort between David the writer, Sandi the artist, and the team at Thinking Ink - Betsy, Liza and Keiko. I was the editor for this project - making SHATTERED SKY the first novel that I edited. Neat!
Personally, I'd describe the series as THE HUNGER GAMES meets GRAVITY for the LGBTQ set, but from our announcement: "The second book in the Lunar Cycle trilogy, SHATTERED SKY is the sequel to DEBRIS DREAMS. In DEBRIS DREAMS, lunar separatists attack the space elevator above the Earth, forcing offworlder Drusilla Zhao into wartime military service.
In SHATTERED SKY, Dru is honored as a hero and joins her girlfriend Sara on Earth. As Dru begins her new life, she struggles to adapt to a different culture while suffering from PTSD. When Sara’s home is threatened, and the military demand that Dru return to service, she must fight to defend the Alliance while battling enemies inside her own head.
Author David Colby combines hard science details with page-turning action and a diverse cast of characters for a unique science fiction experience that you won’t soon forget."
Good Friday Vigil at Saint Stephen's in-the-Field. We dress down the church and set up a bare wood cross and labyrinth, and encourage people to sign up to stay and pray so we have coverage all night.
I am a night owl, so I signed up for 1 a.m. through 2 a.m. So why am I here with a cough at 2:45 a.m. when I have an early-for-me meeting tomorrow? Someone changed my slot without telling me, to 2 a.m. through 4 a.m.
So I had the double pleasure of waiting fifteen minutes in the cold for the shift change (while I confirmed, via Google Docs history, that I was not misremembering my time), finding out that the person inside was still only partially through their two hour shift, going home to crash, and coming back to wait in the colder cold again while the previous person ran over. (The irony of the sleeping apostles is not lost on me).
This has been my least effective Lent in recent memory. I went to Ash Wednesday service to get ashes, only to get quizzed about it by my favorite server at one of my favorite restaurants, who then to my dismay turned into an insulting, manipulative proselytizer. I have had a surprising share of similar bad reactions with people leaving me more rattled about how I treat and react to people (even though I was never the aggressor) than focused on God or reading the Bible. Visiting the sick has not worked as my friend who is hurt the most is too touch and go for visitors. And giving up alcohol for Lent proved more of an inconvenience than a prompt for reflection.
And yet, like going to church on Sunday, or volunteering for the church Vestry, or reading the Bible, the Vigil is serving its function: to draw my attention back to God.
May God's peace, which passes all understanding, be with you always.
I often say "I teach robots to learn," but what does that mean, exactly? Well, now that one of the projects that I've worked on has been announced - and I mean, not just on arXiv, the public access scientific repository where all the hottest reinforcement learning papers are shared, but actually, accepted into the ICRA 2018 conference - I can tell you all about it!
When I'm not roaming the corridors hammering infrastructure bugs, I'm trying to teach robots to roam those corridors - a problem we call robot navigation. Our team's latest idea combines "traditional planning," where the robot tries to navigate based on an explicit model of its surroundings, with "reinforcement learning," where the robot learns from feedback on its performance.
For those not in the know, "traditional" robotic planners use structures like graphs to plan routes, much in the same way that a GPS uses a roadmap. One of the more popular methods for long-range planning are probabilistic roadmaps, which build a long-range graph by picking random points and attempting to connect them by a simpler "local planner" that knows how to navigate shorter distances. It's a little like how you learn to drive in your neighborhood - starting from landmarks you know, you navigate to nearby points, gradually building up a map in your head of what connects to what.
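To make that concrete, here's a toy roadmap builder - my own illustration on an empty 10x10 world, not our actual system. It samples random points, then asks a stand-in "local planner" whether each pair can be connected; here the local planner simply accepts straight hops shorter than some radius.

```python
import math
import random

def local_planner_ok(p, q, max_range=3.0):
    # Stand-in for a real local planner: succeed only on short hops.
    # (A real one would also check for obstacles along the way.)
    return math.dist(p, q) <= max_range

def build_prm(n_samples=50, seed=0):
    """Build a toy probabilistic roadmap over an empty 10x10 world."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0, 10), rng.uniform(0, 10))
             for _ in range(n_samples)]
    edges = {i: [] for i in range(n_samples)}
    for i in range(n_samples):
        for j in range(i + 1, n_samples):
            if local_planner_ok(nodes[i], nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

nodes, edges = build_prm()
# Every edge in the roadmap is one the local planner claims it can drive.
assert all(local_planner_ok(nodes[i], nodes[j])
           for i in edges for j in edges[i])
```

Long-range navigation then becomes graph search over `edges` - the hard part, as the next paragraphs explain, is making the local planner's claims hold up on a real robot.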
But for that to work, you have to know how to drive, and that's where the local planner comes in. Building a local planner is simple in theory - you can write one for a toy world in a few dozen lines of code - but difficult in practice, and making one that works on a real robot is quite the challenge. These software systems are called "navigation stacks" and can contain dozens of components - and in my experience they're hard to get working and even when you do, they're often brittle, requiring many engineer-months to transfer to new domains or even just to new buildings.
People are much more flexible, learning from their mistakes, and the science of making robots learn from their mistakes is reinforcement learning, in which an agent learns a policy for choosing actions by simply trying them, favoring actions that lead to success and suppressing ones that lead to failure. Our team built a deep reinforcement learning approach to local planning, using a state-of-the-art algorithm called DDPG (Deep Deterministic Policy Gradients) pioneered by DeepMind to learn a navigation system that could successfully travel several meters in office-like environments.
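DDPG itself is far too big for a blog snippet, but the core reinforcement-learning loop - try actions, reinforce the ones that lead to success - fits in a toy. Here's a simplified tabular Q-learning agent (my own illustration, much simpler than DDPG and definitely not our robot's code) that learns to walk down a one-dimensional corridor to a goal:

```python
import random

# Toy reinforcement learning: an agent in a 1-D corridor of 6 cells
# learns, purely from reward feedback, that stepping right reaches the
# goal. Tabular Q-learning stands in for the (much fancier) DDPG.

N_CELLS, GOAL = 6, 5
ACTIONS = (-1, +1)  # step left, step right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
rng = random.Random(0)

for episode in range(500):
    s = 0
    for _ in range(50):
        # Mostly exploit the best-known action; occasionally explore.
        if rng.random() < 0.1:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_CELLS - 1)
        r = 1.0 if s2 == GOAL else -0.01   # success rewarded, dawdling penalized
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])  # TD update
        s = s2
        if s == GOAL:
            break

# The learned policy prefers stepping right in every non-goal cell.
assert all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL))
```

The same favor-success, suppress-failure loop, scaled up with deep networks over continuous states and actions, is what lets a DDPG agent learn local navigation.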
But there's a further wrinkle: the so-called "reality gap". By necessity, the local planner used by a probabilistic roadmap is simulated - attempting to connect points on a map. That simulated local planner isn't identical to the real-world navigation stack running on the robot, so sometimes the robot thinks it can go somewhere on a map which it can't navigate safely in the real world. This can have disastrous consequences - causing robots to tumble down stairs, or, worse, when people follow their GPSes too closely without looking where they're going, causing cars to drive off the end of a bridge.
Our approach, PRM-RL, directly combats the reality gap by combining probabilistic roadmaps with deep reinforcement learning. By necessity, reinforcement learning navigation systems are trained in simulation and tested in the real world. PRM-RL uses a deep reinforcement learning system as both the probabilistic roadmap's local planner and the robot's navigation system. Because links are added to the roadmap only if the reinforcement learning local controller can traverse them, the agent has a better chance of attempting to execute its plans in the real world.
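The key move is in the edge test. In the toy roadmap above a geometric check decided which edges exist; in the PRM-RL spirit, you instead roll the learned controller out in simulation and only add the edge if the controller actually reaches the goal. Here's a sketch of that idea (mine, not the paper's code), with a noisy go-toward-the-goal policy standing in for the trained RL agent:

```python
import math
import random

def noisy_controller(p, goal, rng, step=0.5, noise=0.1):
    """Stand-in for a learned RL policy: step toward the goal with
    actuation noise, the way a real controller imperfectly executes."""
    dx, dy = goal[0] - p[0], goal[1] - p[1]
    d = math.hypot(dx, dy) or 1.0
    return (p[0] + step * dx / d + rng.gauss(0, noise),
            p[1] + step * dy / d + rng.gauss(0, noise))

def controller_can_traverse(start, goal, rng, max_steps=100, tol=0.5):
    """PRM-RL-style edge test: add the edge only if rolling out the
    controller actually gets from start to (near) goal."""
    p = start
    for _ in range(max_steps):
        if math.dist(p, goal) <= tol:
            return True
        p = noisy_controller(p, goal, rng)
    return False

rng = random.Random(0)
# A short hop the controller can execute earns its place in the roadmap.
assert controller_can_traverse((0.0, 0.0), (3.0, 0.0), rng)
```

Because the same controller both certifies the edges and drives the robot, plans on the roadmap are plans the robot has, in effect, already rehearsed.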
In simulation, our agent could traverse hundreds of meters using the PRM-RL approach, doing much better than a "straight-line" local planner which was our default alternative. While I didn't happen to have in my back pocket a hundred-meter-wide building instrumented with a mocap rig for our experiments, we were able to test a real robot on a smaller rig and showed that it worked well (no pictures, but you can see the map and the actual trajectories below; while the robot's behavior wasn't as good as we hoped, we debugged that to a networking issue that was adding a delay to commands sent to the robot, and not in our code itself; we'll fix this in a subsequent round).
This work includes both our group working on office robot navigation - including Alexandra Faust, Oscar Ramirez, Marek Fiser, Kenneth Oslund, me, and James Davidson - and Alexandra's collaborator Lydia Tapia, with whom she worked on the aerial navigation also reported in the paper. Until the ICRA version comes out, you can find the preliminary version on arXiv:
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning
We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents. The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot from which collision-free trajectories feasible for the RL agents can be identified. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. These evaluations included both simulated environments and on-robot tests. Our results show improvement in navigation task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes trajectories up to 215 meters long under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 meters without violating the task constraints in an environment 63 million times larger than used in training.
So, when I say "I teach robots to learn" ... that's what I do.