Neural Motion Planning (MPNet)
Motion planning is a fundamental problem in robotics: finding a collision-free path that takes a robot from its initial configuration to a goal configuration while avoiding obstacles and other agents in the environment. Because of its central role, it has been of tremendous importance to the robotics community, and the challenge of building computationally efficient planning algorithms has persisted since the late 1980s. Despite long-standing efforts to design fast, efficient classical planners, current state-of-the-art methods struggle to scale to the high-dimensional settings common in real-world applications such as self-driving cars, robotic surgery, and space missions.
We focus on a new class of planning algorithms, called Neural Motion Planners, that take past experience into account and learn to embed a classical planner. Upon seeing a new planning problem, the learned planner outputs collision-free paths without performing an exhaustive search of the given environment. To this end, we have proposed a framework called Motion Planning Networks (MPNet). MPNet consists of an encoder network that encodes the robot's surroundings into a latent space, and a planning network that takes the environment encoding together with the start and goal robot configurations and outputs a collision-free, feasible path connecting them in the fastest time possible. The proposed method
plans motions irrespective of the obstacles' geometry,
generates adaptive samples for sampling-based planning algorithms,
demonstrates execution times that scale better than state-of-the-art planners,
generalizes to new, unseen obstacle locations,
has completeness guarantees, and
is a lifelong learner.
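To make the architecture above concrete, the planning loop can be sketched as follows. This is a minimal, illustrative sketch only: the layer sizes, the 2-D configuration space, and the random (untrained) NumPy weights are all hypothetical stand-ins for MPNet's trained encoder and planning networks, and the collision checker is a stub; the full framework additionally performs bidirectional planning, lazy-state contraction, and replanning with a classical fallback.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Build random-weight MLP layers (trained in the real system)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Forward pass with ReLU on hidden layers, linear output."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

# Encoder: flattened obstacle point cloud -> latent code z
enc = mlp([2 * 20, 64, 28])          # 20 2-D obstacle points, 28-D latent
# Planner: (z, current config, goal config) -> proposed next config
plan = mlp([28 + 2 + 2, 64, 64, 2])  # 2-D configuration space

def collision_free(a, b):
    """Stub: a real checker tests the segment between configs a and b."""
    return True

def neural_plan(cloud, start, goal, step=0.5, tol=0.3, max_iters=50):
    """Iteratively query the planning network to extend a path to the goal."""
    z = forward(enc, cloud.ravel())
    path, q = [start], start
    for _ in range(max_iters):
        proposal = forward(plan, np.concatenate([z, q, goal]))
        # Take a bounded step toward the network's proposal (steering).
        d = proposal - q
        q_new = q + step * d / max(np.linalg.norm(d), 1e-8)
        if collision_free(q, q_new):
            path.append(q_new)
            q = q_new
        if np.linalg.norm(q - goal) < tol:
            path.append(goal)
            return path
    return None  # the full framework falls back to a classical planner
```

Because the networks predict the next configuration directly from the latent environment encoding, each planning query costs only a handful of forward passes rather than an exhaustive search of the workspace.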
Our future objectives are twofold. First, solve the perception problem for learning-based planning methods, i.e., learn plannable state-space representations. Second, extend MPNet to kinodynamic planning problems by learning lower-dimensional manifolds.
Students and Collaborators
A.H. Qureshi, J. Dong, A. Choe, M.C. Yip
IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 6089–6096.
A.H. Qureshi, Y.L. Miao, A. Simeonov, M.C. Yip
IEEE Transactions on Robotics, Early Access, 2020. [pdf]
Motion Planning Networks
A.H. Qureshi, A. Simeonov, M.J. Bency, M.C. Yip
Active Continual Learning for Planning and Navigation
A.H. Qureshi, Y.L. Miao, M.C. Yip
ICML 2020 Workshop on Real World Experiment Design and Active Learning. [pdf]
Deeply Informed Neural Sampling for Robot Motion Planning
A.H. Qureshi, M.C. Yip
Neural Path Planning: Fixed Time, Near-Optimal Path Generation via Oracle Imitation
M.J. Bency, A.H. Qureshi, M.C. Yip