Hazards of Partially Automated Driving

Automation is most dangerous when it is consistent and reliable most of the time. This creates a false sense of trust, security and comfort in the driver’s mind that may backfire in the most unexpected situations. In addition, human error is unavoidable, which makes accidents all the more likely.

Student blog by Trung Dang*

The Society of Automotive Engineers (SAE) taxonomy of driving automation defines the industry standard for describing how driving tasks are divided between the driver and the automated system. The taxonomy ranges from level 0 to level 5: level 0 is a driver-only system, while level 5 is full automation (expected to be ready around 2030). Thus far, manufacturers such as Volvo, Tesla and Mercedes have reached level 2 in various forms. These systems use a combined-function approach that automates both the longitudinal and lateral aspects of driving simultaneously. This allows the driver to be more hands-free, but not mind-free (Banks et al., 2018). Partially automated systems in use today still require the driver to monitor the system and take control when needed, because the driver is expected to be ready to act when the vehicle approaches the limits of its Operational Design Domain (ODD). This introduces risks, from both human and machine error, because the arrangement is not conventional driving and its hazards must not be treated as if it were.
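As a rough illustration of this taxonomy, the short Python sketch below paraphrases the six levels and marks where the monitoring duty sits. The level descriptions and the helper name are my own illustrative summary for this blog post, not official SAE wording.

# Illustrative sketch of the SAE levels of driving automation (0-5).
# The short labels are paraphrased for this blog post, not the official SAE text.

SAE_LEVELS = {
    0: "No automation - the driver performs all driving tasks",
    1: "Driver assistance - either steering or speed is assisted",
    2: "Partial automation - steering and speed are automated together, "
       "but the driver must monitor and be ready to take over",
    3: "Conditional automation - the system drives within its ODD, "
       "the driver must respond to a request to intervene",
    4: "High automation - no driver response needed within the ODD",
    5: "Full automation - the vehicle can drive itself everywhere",
}

def driver_must_monitor(level: int) -> bool:
    """At level 2 and below the human driver remains responsible for
    monitoring the environment at all times (hands-free, not mind-free)."""
    return level <= 2

if __name__ == "__main__":
    for level, description in SAE_LEVELS.items():
        duty = "driver monitors" if driver_must_monitor(level) else "system monitors"
        print(f"Level {level} ({duty}): {description}")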

Studies have shown that monitoring these subsystems carries risks of its own. The driver’s task shifts from directly controlling the vehicle to supervising the system that controls it. Prolonged monitoring can cause fatigue, and drivers may also place too much trust in the system and therefore stop concentrating on the monitoring task. Both reduce the driver’s ability to detect, evaluate and respond to critical situations. In addition, freeing the driver’s hands invites engagement in non-driving-related activities, which further distracts from monitoring (Banks et al., 2018). These are the main risks of partially automated driving.

The study by Banks, Plant and Stanton (2018) examined the circumstances surrounding the Tesla accident of 7 May 2016. The fatal crash involving a Tesla Model S was the first major accident of a partially automated driving system. The Model S collided with a tractor-trailer crossing an intersection on a highway, killing the Model S driver. The vehicle’s automated system made no attempt to warn the driver or initiate braking. The driver, who was found to be at fault (no defects were found in the automated system, and the driver was instead determined to have been distracted from the driving and monitoring task), did not attempt to override the automation either. The study concluded that the driver, hands- and feet-free and nominally in the monitoring role, had in fact been distracted from that responsibility for an extended period, which led to the accident.

Automation is most dangerous when it is consistent and reliable most of the time. This creates a false sense of trust, security and comfort in the driver’s mind that may backfire in the most unexpected situations. In addition, human error is unavoidable, which makes accidents all the more likely. Each switch between SAE levels brings different responsibilities for the driver, so different hazards are associated with each level of system. It is suggested that driver behaviour should be investigated further before assistive technology is rolled out in everyday traffic. Another suggestion is that automation should focus on helping drivers with manual control, for example by warning of obstacles, rather than taking control of the vehicle itself. The driver should remain engaged with the driving task at all times so that they can intervene when needed. The study by Louw, Madigan, Carsten and Merat (2016) suggested that the automated system needs to direct the driver’s attention back to the road at least six seconds before a potential adverse outcome for the driver to be capable of making safe decisions.
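To make that timing requirement concrete, here is a minimal sketch of how such a check could look in Python. The function and variable names are assumptions made for illustration; only the six-second figure comes from Louw et al. (2016).

# Minimal sketch of a takeover-warning check based on the six-second lead
# time discussed above. Names are illustrative assumptions; only the 6 s
# threshold is taken from Louw et al. (2016).

MIN_WARNING_LEAD_TIME_S = 6.0  # seconds before a predicted adverse outcome

def should_warn_driver(time_to_adverse_event_s: float) -> bool:
    """Return True if the system should already be directing the driver's
    attention back to the road, i.e. the predicted adverse event is closer
    than the minimum lead time needed for a safe, considered takeover."""
    return time_to_adverse_event_s <= MIN_WARNING_LEAD_TIME_S

# Example: an event predicted 4.5 s ahead leaves too little time, so the
# warning should already have been issued; at 10 s there is still margin.
print(should_warn_driver(4.5))   # True
print(should_warn_driver(10.0))  # False

The hard part, of course, is estimating that time reliably; in the 2016 Tesla crash described above, the system issued no warning at all.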

 

*This student blog post was written on the basis of project work carried out during the course SAFM03 Current Trends in Security and Safety Management in the Master’s Degree Programme in Security and Safety Management (SAFER) in fall 2019.

 

References

Banks, V.A., Eriksson, A., O’Donoghue, J., Stanton, N.A. 2018. Is partially automated driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68, 138-145.

Banks, V.A., Plant, K.L., Stanton, N.A. 2018. Driver error or designer error: Using the Perceptual Cycle Model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016. Safety Science, 108, 278-285.

Louw, T., Madigan, R., Carsten, O., Merat, N. 2016. Were they in the loop during automated driving? Links between visual attention and crash potential. Injury Prevention, 23(4), 281-286.