Fatalities with semi-automated vehicles typically occur when users are engaged in non-driving related tasks (NDRTs) that compromise their situational awareness (SA). This work developed a tactile display for on-body notifications to support situational awareness, enabling users to recognize vehicle automation failures and intervene if necessary. We investigated whether such tactile notifications support "event detection" (SA-L1) or "anticipation" (SA-L3). Using a simulated automated driving scenario, a between-groups study contrasted SA-L1 and SA-L3 tactile notifications, which respectively displayed the spatial positions of surrounding traffic or the future projection of the automated vehicle's position. Participants were engaged in an NDRT, namely an Operation Span Task that engaged visual working memory (WM) resources. They were instructed to intervene if the tactile display contradicted the driving scenario, thus indicating vehicle sensing failures. On a single critical trial, we introduced a failure that could have resulted in a vehicle collision. SA-L1 tactile displays of potential collision targets resulted in lower subjective workload on the NDRT than SA-L3 displays, which indicated the vehicle's future actions. These findings and the qualitative questionnaire responses suggest that the simplicity of the SA-L1 display required fewer mental resources, allowing participants to better interpret sensing failures in vehicle automation.
We make available data on intervention performance (distance, maximum intensity, time to collision), WM performance (attention and WM interference), questionnaire responses (NASA-TLX and SART), subjective questions from the semi-structured interview, and the Unity VR environment.
Unity version: 2019.1
Unity environment available at: https://github.com/FrancescoChiossi/Supporting-SA-in-AV
NASA-TLX questionnaire retrieved from: https://humansystems.arc.nasa.gov/groups/tlx/downloads/TLXScale.pdf
SART questionnaire retrieved from: https://ext.eurocontrol.int/ehp/?q=node/1608