How can we interact with physical systems, in particular robots, at a high level? How can we formalize high-level goals in a user-friendly way? How can we automatically create controllers for such systems, and how can we guarantee that these systems will behave correctly?
To address these challenges, our research draws on ideas and techniques from several disciplines, including control, hybrid systems, logic, verification, model checking, planning, and computational linguistics.
- NRI: Modeling and Verification of Language-based Interaction (NSF, 2014-2017)
- CPS: High-Level Perception and Control for Autonomous Reconfigurable Modular Robots (NSF, 2013-2016)
- Expeditions in Computer Augmented Program Engineering (EXCAPE): Harnessing Synthesis for Software Design (NSF, 2012-2017)
- CAREER: Formal Methods for Robotics and Automation (NSF, 2010-2015)
- DARPA Robotics Challenge (Part of Team ViGIR) (DARPA, 2014-2015)
- DARPA Young Faculty Award: Autonomous robots: Explaining failures and boosting success of high-level tasks (DARPA, 2012-2014)
- Situation Understanding Bot Through Language and Environment (ARO MURI, 2010-2013)
- Simulator and sensors for the iRobot Create: Enhancing MAE and CS courses that span freshmen to graduate students (The MathWorks, 2010)
- Tightly Integrated Perception and Planning in Intelligent Robotics (NSF, 2009-2013)