Control

AA 203

Optimal and Learning-based Control

Optimal control solution techniques for systems with known and unknown dynamics. Dynamic programming, Hamilton-Jacobi reachability, and direct and indirect methods for trajectory optimization. Introduction to model predictive control. Model-based reinforcement learning, and connections between modern reinforcement learning in continuous spaces and fundamental optimal control ideas.

AA 222

Engineering Design Optimization (CS 361)

Design of engineering systems within a formal optimization framework. This course covers the mathematical and algorithmic fundamentals of optimization, including derivative and derivative-free approaches for both linear and nonlinear problems, with an emphasis on multidisciplinary design optimization. Topics also include quantitative methodologies for addressing various challenges, such as accommodating multiple objectives, automating differentiation, handling uncertainty in evaluations, selecting design points for experimentation, and optimizing in a principled way when evaluations are expensive. Applications range from the design of aircraft to automated vehicles.

AA 242A

Classical Dynamics

Accelerating and rotating reference frames. Kinematics of rigid body motion; Euler angles, direction cosines. D'Alembert's principle, equations of motion. Inertia properties of rigid bodies. Dynamics of coupled rigid bodies. Lagrange's equations and their use. Dynamic behavior, stability, and small departures from equilibrium.

AA 289

Robotics and Autonomous Systems Seminar (CS 529)

Seminar talks by researchers and industry professionals on topics related to modern robotics and autonomous systems. Broadly, talks will cover robotic design, perception and navigation, planning and control, and learning for complex robotic systems. May be repeated for credit.

AA 274B

Principles of Robot Autonomy II (AA 174B, CS 237B, EE 260B)

This course teaches advanced principles for endowing mobile autonomous robots with capabilities to autonomously learn new skills and to physically interact with the environment and with humans. It also provides an overview of different robot system architectures. Concepts covered in the course include: reinforcement learning and its relationship to optimal control, contact and dynamics models for prehensile and non-prehensile robot manipulation, imitation learning and human intent inference, as well as different system architectures and their verification. Students will learn the theoretical foundations for these concepts and implement them on mobile manipulation platforms. In homework assignments, the Robot Operating System (ROS) will be used extensively for demonstrations and hands-on activities.