Marriott, bottom floor, International Hall South. Follow the signs for the author signing, you can’t miss it!
So I will be appearing at “Social Media for Authors” at 4pm at Hyatt Embassy CD. Perhaps they’re including me as the counterexample. Here’s my advice to you on social media for authors: if you get into it, engage with it consistently, and don’t let anyone bait you into being a jerk. Imagine anything you say could end up on the front page of the New York Times, and you’ll be fine.
This has been a great team effort between David the writer, Sandi the artist, and the team at Thinking Ink – Betsy, Liza and Keiko. I was the editor for this project – making SHATTERED SKY the first novel that I edited. Neat!
Personally, I’d describe the series as THE HUNGER GAMES meets GRAVITY for the LGBTQ set, but from our announcement: “The second book in the Lunar Cycle trilogy, SHATTERED SKY is the sequel to DEBRIS DREAMS. In DEBRIS DREAMS, lunar separatists attack the space elevator above the Earth, forcing offworlder Drusilla Zhao into wartime military service.
In SHATTERED SKY, Dru is honored as a hero and joins her girlfriend Sara on Earth. As Dru begins her new life, she struggles to adapt to a different culture while suffering from PTSD. When Sara’s home is threatened, and the military demand that Dru return to service, she must fight to defend the Alliance while battling enemies inside her own head.
Author David Colby combines hard science details with page-turning action and a diverse cast of characters for a unique science fiction experience that you won’t soon forget.”
Good Friday Vigil at Saint Stephen’s in-the-Field. We dress down the church and set up a bare wood cross and labyrinth, and encourage people to sign up to stay and pray so we have coverage all night.
I am a night owl, so I signed up for 1 a.m. through 2 a.m. So why am I here with a cough at 2:45 a.m. when I have an early-for-me meeting tomorrow? Someone changed my slot to 2 a.m. through 4 a.m. without telling me.
So I had the double pleasure of waiting fifteen minutes in the cold for the shift change (while confirming, via Google Docs history, that I was not misremembering my time), finding out that the person inside was still only partway through their two-hour shift, going home to crash, and coming back to wait in the even colder cold while the previous person ran over. (The irony of the sleeping apostles is not lost on me.)
This has been my least effective Lent in recent memory. I went to Ash Wednesday service to get ashes, only to get quizzed about it by my favorite server at one of my favorite restaurants, who then, to my dismay, turned into an insulting, manipulative proselytizer. I have had a surprising number of similarly bad interactions, leaving me more rattled about how I treat and react to people (even though I was never the aggressor) than focused on God or reading the Bible. Visiting the sick has not worked out, as the friend who is hurting the most is too touch-and-go for visitors. And giving up alcohol for Lent proved more of an inconvenience than a prompt for reflection.
And yet, like going to church on Sunday, or volunteering for the church Vestry, or reading the Bible, the Vigil is serving its function: to draw my attention back to God.
May God’s peace, which passes all understanding, be with you always.
I often say “I teach robots to learn,” but what does that mean, exactly? Well, now that one of the projects that I’ve worked on has been announced – and I mean, not just on arXiv, the public access scientific repository where all the hottest reinforcement learning papers are shared, but actually, accepted into the ICRA 2018 conference – I can tell you all about it!
When I’m not roaming the corridors hammering infrastructure bugs, I’m trying to teach robots to roam those corridors – a problem we call robot navigation. Our team’s latest idea combines “traditional planning,” where the robot tries to navigate based on an explicit model of its surroundings, with “reinforcement learning,” where the robot learns from feedback on its performance.
For those not in the know, “traditional” robotic planners use structures like graphs to plan routes, much the same way a GPS uses a roadmap. One of the more popular methods for long-range planning is the probabilistic roadmap, which builds a long-range graph by picking random points and attempting to connect them with a simpler “local planner” that knows how to navigate shorter distances. It’s a little like how you learn to drive in your neighborhood – starting from landmarks you know, you navigate to nearby points, gradually building up a map in your head of what connects to what.
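To make that concrete, here’s a hedged sketch of probabilistic roadmap construction in Python – a toy version for illustration only. The point and obstacle representations, the straight-line connectivity check, and all the function names are my own inventions for the example, not anything from our actual system:

```python
import math
import random

def collision_free(p, q, obstacles, steps=20):
    """Toy 'local planner': check a straight line between p and q
    against circular obstacles given as (cx, cy, radius) tuples."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - cx, y - cy) <= r for cx, cy, r in obstacles):
            return False
    return True

def build_prm(n_samples, obstacles, connect_radius, world=10.0, seed=0):
    """Sample random collision-free points, then connect nearby pairs
    that the local planner can traverse, yielding a roadmap graph."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        p = (rng.uniform(0, world), rng.uniform(0, world))
        if not any(math.hypot(p[0] - cx, p[1] - cy) <= r
                   for cx, cy, r in obstacles):
            nodes.append(p)
    edges = set()
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (math.dist(nodes[i], nodes[j]) <= connect_radius
                    and collision_free(nodes[i], nodes[j], obstacles)):
                edges.add((i, j))
    return nodes, edges
```

Once the roadmap exists, long-range planning reduces to ordinary graph search over those edges – that’s the GPS-over-a-roadmap part.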
But for that to work, you have to know how to drive, and that’s where the local planner comes in. Building a local planner is simple in theory – you can write one for a toy world in a few dozen lines of code – but difficult in practice, and making one that works on a real robot is quite the challenge. These software systems are called “navigation stacks” and can contain dozens of components – and in my experience they’re hard to get working, and even when you do, they’re often brittle, requiring many engineer-months to transfer to new domains or even just to new buildings.
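A toy-world local planner really can fit in a few dozen lines. Here’s a hedged illustration (everything here is made up for the example): a greedy grid controller that steps toward the goal and gives up when it stops making progress – which conveniently also demonstrates the brittleness problem, since it gets stuck behind any obstacle it can’t walk straight around:

```python
def toy_local_planner(start, goal, blocked, max_steps=100):
    """Greedy grid controller: at each step, move to the free
    4-neighbor closest to the goal. Works in open toy worlds, but
    gets stuck in local minima behind walls -- 'simple in theory,
    brittle in practice'."""
    def dist(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        x, y = pos
        free = [m for m in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                if m not in blocked]
        if not free:
            return None
        nxt = min(free, key=dist)
        if dist(nxt) >= dist(pos):
            return None  # stuck in a local minimum behind an obstacle
        pos = nxt
        path.append(pos)
    return None
```

In an empty world this reaches the goal every time; put a short wall in its way and it fails immediately – the real engineering work is in everything this sketch leaves out.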
People are much more flexible: we learn from our mistakes. The science of making robots learn from their mistakes is reinforcement learning, in which an agent learns a policy for choosing actions by simply trying them, favoring actions that lead to success and suppressing ones that lead to failure. Our team built a deep reinforcement learning approach to local planning, using a state-of-the-art algorithm called DDPG (Deep Deterministic Policy Gradients), pioneered by DeepMind, to learn a navigation system that could successfully travel several meters in office-like environments.
But there’s a further wrinkle: the so-called “reality gap”. By necessity, the local planner used by a probabilistic roadmap is simulated – it attempts to connect points on a map. That simulated local planner isn’t identical to the real-world navigation stack running on the robot, so sometimes the robot thinks it can go somewhere on the map that it can’t safely navigate in the real world. This can have disastrous consequences – causing robots to tumble down stairs, or, in the human equivalent, causing cars to drive off the end of a bridge when people follow their GPSes too closely without looking where they’re going.
Our approach, PRM-RL, directly combats the reality gap by combining probabilistic roadmaps with deep reinforcement learning. By necessity, reinforcement learning navigation systems are trained in simulation and tested in the real world. PRM-RL uses a deep reinforcement learning system as both the probabilistic roadmap’s local planner and the robot’s navigation system. Because links are added to the roadmap only if the reinforcement learning local controller can traverse them, the agent has a better chance of attempting to execute its plans in the real world.
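The heart of that idea is the edge-addition rule: a roadmap link exists only if the learned controller itself can traverse it in simulation. A hedged sketch, with `policy` standing in for a rollout of the trained RL agent (the names, rollout counts, and distance thresholds are all illustrative, not values from the paper):

```python
import math

def policy_reaches(policy, start, goal, n_rollouts=3, max_steps=100):
    """Roll out the learned controller in simulation a few times;
    the link counts as traversable only if every rollout arrives.
    `policy` maps (position, goal) -> next position."""
    for _ in range(n_rollouts):
        pos = start
        for _ in range(max_steps):
            if math.dist(pos, goal) < 0.1:
                break
            pos = policy(pos, goal)
        else:
            return False  # ran out of steps without arriving
    return True

def add_prm_rl_edges(nodes, policy, connect_radius):
    """PRM-RL edge rule: connect two roadmap nodes only when the same
    controller that will drive the real robot can make the trip."""
    edges = set()
    for i in range(len(nodes)):
        for j in range(len(nodes)):
            if i != j and math.dist(nodes[i], nodes[j]) <= connect_radius:
                if policy_reaches(policy, nodes[i], nodes[j]):
                    edges.add((i, j))
    return edges
```

Because the very same policy executes the plan on the robot, every edge in the roadmap is a promise the controller has already kept in simulation – which is what narrows the reality gap.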
In simulation, our agent could traverse hundreds of meters using the PRM-RL approach, doing much better than the “straight-line” local planner that was our default alternative. While I didn’t happen to have in my back pocket a hundred-meter-wide building instrumented with a mocap rig for our experiments, we were able to test a real robot on a smaller rig and showed that it worked well (no pictures, but you can see the map and the actual trajectories below; while the robot’s behavior wasn’t as good as we hoped, we traced that to a networking issue that was adding a delay to commands sent to the robot, not to our code itself; we’ll fix this in a subsequent round).
This work includes both our group working on office robot navigation – including Alexandra Faust, Oscar Ramirez, Marek Fiser, Kenneth Oslund, me, and James Davidson – and Alexandra’s collaborator Lydia Tapia, with whom she worked on the aerial navigation also reported in the paper. Until the ICRA version comes out, you can find the preliminary version on arXiv:
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning
We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents. The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot from which collision-free trajectories feasible for the RL agents can be identified. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use the Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. These evaluations included both simulated environments and on-robot tests. Our results show improvement in navigation task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 meters long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 meters without violating the task constraints in an environment 63 million times larger than used in training.
So, when I say “I teach robots to learn” … that’s what I do.
Well, one more orbit around the Sun. Here’s to a better 2018! Onward, friends!
How much can you draw before they bring your food? Fountain at Front Page News L5P, again with no pencils.
Got inspired by all the art at Dragon Con, particularly Comfort and Adam’s Guide to Self Publishing Comics which reminded me of my interview questions about my long stalled comic f@nu fiku … Time to break out the sketchbook again …
Hail, fellow adventurers! I’ll be back at Dragon Con again this year, with a great set of panels! Sometimes that includes dropping in on the Writing Track, but the ones we have officially scheduled so far are:
Also, I was scheduled to do a SAM Talk, but it was inadvertently booked over my author reading, and I pretty much have to prioritize my own author reading over a SAM Talk even if there might be more people in the other room. So if you attend my author reading, you may also get to hear what was intended to be my SAM Talk, “Risk Getting Worse”.
Hope to see you all there – from my end of the table, it kind of looks like this:
Here’s crossing fingers that we get the double booking all worked out!
Taking on a challenge like writing a novel can seem daunting. A good novel can range from 60,000 words for a young adult novel or a romance up to 360,000 words for a fantasy novel, with a typical length closer to 90,000 to 120,000 words. For perspective, a paragraph in a five-paragraph essay can be 100 words, so a 100,000 word novel like my first novel, FROST MOON, is like a thousand-paragraph essay. To someone who once had trouble getting those 500 words down, that’s incredibly daunting.
Challenges like National Novel Writing Month can, paradoxically, make it easier. 50,000 words in a month seems daunting, but that’s only half a full-length novel, and even more so, it’s not 50,000 words of a finished novel: it’s 50,000 words of unpolished first draft. You can let yourself write drek you’re not proud of if it gets words on the page. If you’re the kind of person daunted by the thought of writing a whole novel, or paralyzed by perfectionism, National Novel Writing Month offers an easier path up the hill.
Still, it’s a long hill. And it can be daunting, no doubt. Especially if you tend to get behind, like I do, or if you tend to get trapped polishing your words, as I often do. You sometimes need tips and techniques to help yourself get past the stumbling blocks.
Here are a few that have worked for me in the past. Your mileage may vary, of course, but these tips helped me.
Writing 50,000 words of rough draft is not writing a novel. You’ve got a lot more to go – between 10,000 and 310,000 words depending on whether you’re aiming at Goosebumps or George R. R. Martin. But if you can get 50,000 words under your belt, you’ll have the pleasure of looking back and realizing you’ve accomplished quite a climb.