Hail, fellow adventurers: to prove I do something more than just draw and write, I’d like to send out a reminder of the Second Embodied AI Workshop at the CVPR 2021 computer vision conference. In the last ten years, artificial intelligence has made great advances in recognizing objects, understanding the basics of speech and language, and recommending things to people. But interacting with the real world presents harder problems: noisy sensors, unreliable actuators, incomplete models of our robots, the difficulty of building good simulators, learning over sequences of decisions, transferring what we’ve learned in simulation to real robots, and learning on the robots themselves.
The Embodied AI Workshop brings together many researchers and organizations interested in these problems, and also hosts nine challenges that test point, object, interactive, and social navigation, as well as object manipulation, vision, language, auditory perception, mapping, and more. These challenges enable researchers to test their approaches on standardized benchmarks, so the community can more easily compare what we’re doing. I’m most involved as an advisor to the Stanford / Google iGibson Interactive / Social Navigation Challenge, which forces robots to maneuver around people and clutter to solve navigation problems. You can read more about the iGibson Challenge at their website or on the Google AI Blog.
Most importantly, the Embodied AI Workshop has a call for papers, with a deadline of TODAY.
Call for Papers
We invite high-quality 2-page extended abstracts in relevant areas, such as:
- Simulation Environments
- Visual Navigation
- Embodied Question Answering
- Simulation-to-Real Transfer
- Embodied Vision & Language
Accepted papers will be presented as posters. These papers will be made publicly available in a non-archival format, allowing future submission to archival journals or conferences.
Submission
I assume anyone submitting to this already has their paper well underway, but this is your reminder to git’r done.