Human Factors, Part 2

Katherine Driggs-Campbell
University of Illinois at Urbana-Champaign

Intelligent vehicles are becoming tangible technologies that will soon impact the human experience. However, the desired benefits of such systems are only achievable if the underlying algorithms can handle the unique challenges humans present: People tend to defy expected behaviors and do not conform to many of the standard assumptions made in robotics. To design safe, trustworthy autonomy, we must transform how intelligent systems interact with, influence, and predict human agents.

In this tutorial series, we will discuss the different ways humans are integrated into intelligent vehicles. In the first lecture, we will review the levels of autonomy and the concepts behind advanced driver assistance systems (often called ADAS), along with relevant topics in human factors, such as how to measure abstract concepts like trust. In the second lecture, we will discuss how humans are integrated into autonomous vehicles through various modeling methods (e.g., intent estimation, trajectory prediction). I will present a general framework to formalize and unify models of human agents, then give an overview of different modeling concepts for predicting driver and pedestrian motions.
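As a concrete illustration of the trajectory-prediction methods mentioned above, the sketch below implements a constant-velocity baseline, a standard reference point in pedestrian and driver motion prediction. This is a minimal, self-contained example written for this summary; the function name and parameters are illustrative and are not taken from the tutorial materials.

    import numpy as np

    def predict_constant_velocity(history, horizon, dt=0.1):
        """Extrapolate future 2D positions from the most recent observed velocity.

        history: array of shape (T, 2) with observed (x, y) positions.
        horizon: number of future time steps to predict.
        dt:      sampling period in seconds (assumed uniform).
        """
        history = np.asarray(history, dtype=float)
        # Estimate velocity from the last two observations.
        velocity = (history[-1] - history[-2]) / dt
        # Future positions: last observation plus velocity times elapsed time.
        elapsed = dt * np.arange(1, horizon + 1)[:, None]
        return history[-1] + elapsed * velocity

    # Example: a pedestrian walking along +x at roughly 1.5 m/s.
    observed = [[0.00, 0.0], [0.15, 0.0], [0.30, 0.0]]
    print(predict_constant_velocity(observed, horizon=5))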

Presentation (PDF File)
View on YouTube