Contributing Laboratories

Students working with robotic cars

Autonomous Systems Laboratory (ASL)

ASL develops methodologies for the analysis, design, and control of autonomous systems, in particular self-driving cars, aerospace vehicles, and future mobility systems. Drawing on expertise in control theory, robotics, optimization, and machine learning, the lab develops practical, computationally efficient, and provably correct algorithms for operation in uncertain, rapidly changing, and potentially adversarial environments, with an eye toward field deployment.

Biomechatronics Lab 

The Biomechatronics Lab develops wearable robotic devices to improve the efficiency, speed, and balance of walking and running, especially for people with disabilities. Focusing on speed and systematization of the design process, we build versatile prosthesis and exoskeleton emulators for human-in-the-loop optimization. By researching topics such as the role of ankle push-off in maintaining balance and of arm swing in the energy economy of human gait, we demonstrate efficient exemplar autonomous systems.

Robot picks up small toys

Intelligence through Robotic Interaction at Scale Lab (IRIS)

IRIS pursues scalable robotic intelligence by establishing algorithms for general intelligent behavior grounded in learning and interaction. Combining deep reinforcement learning over raw sensory inputs, meta-learning that accelerates new skill acquisition from prior experience, and self-supervised learning from interaction without human supervision, we aim for robots that can perform a wide variety of learned tasks across diverse environments.
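As a toy illustration of the learning-from-interaction loop that deep reinforcement learning scales up to raw sensory inputs, here is a minimal tabular Q-learning sketch on a hypothetical five-state chain world. The environment, rewards, and hyperparameters are all assumptions for illustration, not IRIS code:

```python
# Illustrative sketch (assumed toy environment, not IRIS code):
# tabular Q-learning on a 5-state chain with a reward at the right end.
import random

n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
alpha, gamma = 0.5, 0.95             # assumed learning rate and discount
Q = [[0.0] * n_actions for _ in range(n_states)]

random.seed(0)
for _ in range(1000):                # episodes of interaction
    s = 0
    for _ in range(20):
        a = random.randrange(n_actions)  # random behavior; Q-learning is off-policy
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap off the best value at the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if r > 0:                    # reaching the goal ends the episode
            break

# Greedy policy recovered purely from interaction, no human supervision
policy = [max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states)]
print(policy[:4])  # → [1, 1, 1, 1]: move right toward the reward
```

The same update rule, with the table replaced by a deep network over raw observations, is the core of the deep reinforcement learning methods the lab scales up.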

Robotic arm holds a coffee cup

Biomimetics and Dexterous Manipulation Lab (BDML)

BDML draws insights from nature to develop bio-inspired technologies, including gecko-like adhesives, soft muscle actuators, under-actuated hands, and flexible tactile sensors, that enable robots to climb, fly, perch, handle delicate objects, and interact responsively with humans. We work with biologists on the design principles behind animal performance and with roboticists to control our solutions in environments ranging from under the sea to inside the human body and in space.

Multi-Robot Systems Lab (MSL) 

MSL develops algorithms for collaboration, coordination, and competition among machine/human teams in unstructured natural environments: operating in complex traffic while understanding and signalling intent, racing in 3D against human pilots at the edge of the dynamic envelope, and conducting large-scale aerial drone surveys. Building on optimization, control and game theory, and machine learning, we develop the essentials for robots entering the real world.

Hand wearing a robotic glove

Collaborative Haptics and Robotics in Medicine Lab (CHARM)

The CHARM Lab develops safe and intuitive haptic interfaces for enhanced physical connection in remote and virtual interaction. We couple users with teleoperation tasks, increase perceptual realism in virtual environments, and deliver intuitive robotic manipulation via soft, safe, deformable mechanisms. Our solutions assist doctors in robot-assisted surgery, students in extended-reality simulations, disaster-recovery specialists in assessing and handling situations, and people with disabilities in leading richer lives.

Robotic arm picks up small toys

Intelligent and Interactive Autonomous Systems Group (ILIAD)

ILIAD develops methods that enable groups of autonomous systems and groups of humans to interact safely and reliably with each other. Employing methods from AI, control theory, machine learning, and optimization, we are establishing theory and algorithms for interaction in uncertain and safety-critical environments. Through learning and observation on-site and alongside humans, we are moving robot teams out of factories and safely into people's lives.

Researchers talk behind a robotic arm

Interactive Perception and Robot Learning Lab (IPRL)

IPRL studies autonomous robotic agents for planning and executing complex manipulation tasks in dynamic, uncertain, and unstructured environments. We develop algorithms for autonomous learning that exploit different sensory modalities for robustness, structural priors for scalability, and observed robot interactions for learning. Our solutions will allow manipulation robots to escape the factory floor and move into warehouses, homes, and disaster zones.

Robotics in the ocean

Robotics Lab

The Robotics Lab develops mathematical control algorithms, hardware capabilities, and programming interfaces and, by acquiring human-level skill through learning, advances rich, dexterous physical interaction with the environment. Our hierarchical feedback architecture, coupling perception and action, enables accommodation to scene dynamics, including people. Ocean One, the lab's collaborative humanoid underwater robot, demonstrates how avatars can succeed at challenging tasks in inhospitable spaces.

Robot with people looking on

Stanford Vision and Learning Lab (SVL)

SVL develops methods for establishing rich geometric and semantic understanding of the environment. Aimed at enhancing a robot’s perception and action capabilities within the variability and uncertainty of the real world, we address tasks such as handheld-tool use, cooking and cleaning, and navigating crowded public spaces. We develop robust models of intelligent behavior, build these into general-purpose autonomy, and couple them with robots for complex operation.

A group of people sits around a robot

Stanford Intelligent Systems Laboratory (SISL)

SISL studies robust decision-making in complex, dynamic environments where safety and efficiency must be balanced. We apply our work to challenges including autonomous driving, route planning, deep reinforcement learning, and safety validation, developing algorithms that efficiently derive optimal decision strategies from high-dimensional probabilistic representations and establishing confidence in their safe and correct application in the real world.
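As a minimal illustration of what "deriving an optimal decision strategy from a probabilistic representation" means, the following sketch runs value iteration on a hypothetical three-state Markov decision process. The transition model, rewards, and discount are assumed toy values, not SISL's methods:

```python
# Illustrative sketch (assumed toy MDP, not SISL code): value iteration,
# the textbook way to turn a probabilistic model into an optimal policy.
import numpy as np

n_states, gamma = 3, 0.9
# T[a, s, s'] = probability of moving from s to s' under action a (toy values)
T = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],  # action 0: mostly stay
    [[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],  # action 1: mostly advance
])
R = np.array([0.0, 0.0, 1.0])  # reward for landing in each state

V = np.zeros(n_states)
for _ in range(200):               # iterate the Bellman optimality update
    Q = T @ (R + gamma * V)        # Q[a, s] = expected return of a in s
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy w.r.t. the converged values
print(policy)  # → [1 1 1]: always advance toward the rewarding state
```

SISL's research tackles the hard part this sketch sidesteps: making such derivations tractable when the state space is high-dimensional, and validating that the resulting strategy is safe before deployment.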