Robot Learning

We investigate learning-based control strategies for robots that operate in human environments. This requires a robot to be contextually aware, have fast reflexes, and account for uncertainty. We develop machine learning and motion planning algorithms that push the limits of control and autonomous behavior on three fronts:

  1. dexterity

  2. agility

  3. precision

This approach allows a robot's artificial intelligence and control strategy to learn to behave appropriately in realistic environments, whether on a human-robot factory floor, in a home, or in an operating room.


Active Projects:

  1. Machine Learning for Collision Detection (FASTRON)

  2. Neural Motion Planning (MPNet)

  3. Learning to Automate Surgery

  4. Scalable Reinforcement Learning

  5. Robotic Planetary Explorer (EELS)
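To give a flavor of the first project area, learning-based collision detection replaces expensive geometric collision checks with a fast learned proxy model. The sketch below is a minimal, hypothetical illustration of that idea, not the FASTRON implementation itself: a Gaussian-kernel perceptron is trained on labeled robot configurations (here, a point robot in a 2D workspace with one circular obstacle) and then used as a cheap collision predictor.

```python
import math
import random

# Hypothetical toy workspace (assumption for illustration only):
# a point robot in the unit square with one circular obstacle.
OBSTACLE = (0.5, 0.5)  # obstacle center
RADIUS = 0.25          # obstacle radius

def in_collision(q):
    """Ground-truth geometric check the learned proxy approximates."""
    return math.dist(q, OBSTACLE) < RADIUS

def kernel(a, b, gamma=30.0):
    """Gaussian kernel over configurations."""
    return math.exp(-gamma * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2))

def train(samples, labels, epochs=20):
    """Kernel perceptron: bump a sample's weight when it is misclassified."""
    alphas = [0.0] * len(samples)
    for _ in range(epochs):
        for i, q in enumerate(samples):
            score = sum(a * kernel(q, s) for a, s in zip(alphas, samples))
            if labels[i] * score <= 0:  # misclassified -> update weight
                alphas[i] += labels[i]
    return alphas

def predict(q, samples, alphas):
    """Fast proxy query: +1 = predicted collision, -1 = predicted free."""
    score = sum(a * kernel(q, s) for a, s in zip(alphas, samples))
    return 1 if score > 0 else -1

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(400)]
labels = [1 if in_collision(q) else -1 for q in samples]
alphas = train(samples, labels)

# Evaluate the proxy on held-out configurations.
held_out = [(random.random(), random.random()) for _ in range(200)]
correct = sum(predict(q, samples, alphas) == (1 if in_collision(q) else -1)
              for q in held_out)
accuracy = correct / len(held_out)
print(f"proxy accuracy: {accuracy:.2f}")
```

Once trained, each query costs only kernel evaluations against the support set, which is why a learned proxy can be much faster than exact geometric checking inside a motion planner's inner loop.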