Informed Multi-Heuristic Multi-Representation A-Star

Guide(s): Dr. Maxim Likhachev
Team Size: 4
Team Member(s): Aditya Agarwal, Shivam Vats, Jay Patrikar
Course: Planning for Robotics (16-872)
Time Period: Aug'19 - Dec'19
Technologies/Concepts Used: Multi-Heuristic Multi-Representation A-Star, Learning Methods for Planning, Conditional Variational Autoencoders

Generating motion plans for robots with many degrees of freedom, such as humanoids, is a challenging problem because of the high dimensionality of the resulting search space. To circumvent this, many researchers have observed that large parts of the solution plan are often much lower-dimensional in nature. Some recent algorithms exploit this by either planning on a graph with adaptive dimensionality or leveraging a decoupling in the robot's action space. Often, it is possible to gain more fine-grained information about the local dimensionality of the plan from any robot state to the goal, which can then be used to inform the search. In this work, we present a heuristic-search-based planning algorithm that admits such information as a prior in the form of lower-dimensional manifolds (called representations) and a probabilistic mapping (conditioned on the world and the goal) from robot states to these representations. We train a Conditional Variational Autoencoder (CVAE) for every representation and use them to compute the required probabilistic mapping. Using this additional domain knowledge, our motion planner is able to generate high-quality bounded-suboptimal plans. Experimentally, we validate the practicality and efficiency of our approach on the challenging 10-degree-of-freedom mobile manipulation domain.
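The core idea of the search can be sketched as a multi-heuristic weighted A* in which a learned prior biases which heuristic queue gets expanded. The sketch below is a minimal, simplified illustration on a 2D grid, not the actual planner: `representation_prob` is a hypothetical stand-in for the CVAE's probabilistic mapping, and the queue-selection rule, grid domain, and all function names are assumptions made for this example.

```python
import heapq

# Toy 2D grid world standing in for the high-dimensional planning domain.
GRID_W, GRID_H = 10, 10
OBSTACLES = {(5, y) for y in range(0, 8)}  # vertical wall with a gap at the top

def neighbors(s):
    """4-connected grid successors, skipping obstacles."""
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = (x + dx, y + dy)
        if 0 <= n[0] < GRID_W and 0 <= n[1] < GRID_H and n not in OBSTACLES:
            yield n

# Two heuristics, one per "representation" in this toy setting.
def h_manhattan(s, goal):
    return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

def h_euclid(s, goal):
    return ((s[0] - goal[0]) ** 2 + (s[1] - goal[1]) ** 2) ** 0.5

def representation_prob(s, goal):
    # Hypothetical stand-in for the CVAE: the probability that the first
    # representation (heuristic 0) is informative from state s to the goal.
    return 0.8 if s[1] == goal[1] else 0.3

def multi_heuristic_astar(start, goal, heuristics, w=1.5):
    """Weighted A* with one open list per heuristic; the prior biases
    which list is expanded next."""
    g = {start: 0.0}
    parent = {start: None}
    queues = [[(w * h(start, goal), start)] for h in heuristics]
    closed = set()
    while any(queues):
        # Pick the queue whose top state the learned prior favours most
        # (lower effective priority = f-value discounted by the prior).
        open_ids = [i for i, q in enumerate(queues) if q]
        i = min(open_ids, key=lambda i: queues[i][0][0]
                / max(representation_prob(queues[i][0][1], goal), 1e-6))
        _, s = heapq.heappop(queues[i])
        if s in closed:
            continue
        closed.add(s)
        if s == goal:  # reconstruct the path
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in neighbors(s):
            ng = g[s] + 1
            if ng < g.get(n, float("inf")):
                g[n] = ng
                parent[n] = s
                # Each successor is pushed into every heuristic's queue.
                for j, h in enumerate(heuristics):
                    heapq.heappush(queues[j], (ng + w * h(n, goal), n))
    return None

path = multi_heuristic_astar((0, 0), (9, 0), [h_manhattan, h_euclid])
```

In the full planner, the CVAE trained per representation replaces `representation_prob`, and the search operates over the robot's configuration space rather than a grid; the anchor/inadmissible queue machinery that gives the bounded-suboptimality guarantee is omitted here for brevity.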