Data for: "Scanpath Prediction on Information Visualizations"

We propose the Unified Model of Saliency and Scanpaths (UMSS), a model that learns to predict multi-duration saliency and scanpaths (i.e. sequences of eye fixations) on information visualisations. Although scanpaths provide rich information about the importance of different visualisation elements during the visual exploration process, prior work has been limited to predicting aggregated attention statistics, such as visual saliency. We present in-depth analyses of gaze behaviour for different information visualisation elements (e.g. Title, Label, Data) on the popular MASSVIS dataset. We show that while, overall, gaze patterns are surprisingly consistent across visualisations and viewers, there are also structural differences in gaze dynamics for different elements. Informed by our analyses, UMSS first predicts multi-duration element-level saliency maps and then probabilistically samples scanpaths from them. Extensive experiments on MASSVIS show that our method consistently outperforms state-of-the-art methods on several widely used scanpath and saliency evaluation metrics. Our method achieves a relative improvement in sequence score of 11.5% for scanpath prediction and a relative improvement in Pearson correlation coefficient of up to 23.6% for saliency prediction. These results are auspicious and point towards richer user models and simulations of visual attention on visualisations without the need for any eye tracking equipment. This dataset contains saliency maps and scanpaths for UMSS and the baseline methods; its structure is described in the README file.
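
As context for the "probabilistically samples scanpaths" step described above, here is a minimal sketch of how a fixation sequence can be drawn from a 2D saliency map treated as a probability distribution. This is not the authors' UMSS implementation; the function name, the Gaussian inhibition-of-return step, and all parameter values are illustrative assumptions.

import numpy as np

def sample_scanpath(saliency, n_fixations=10, sigma=25.0, rng=None):
    # Treat the (normalised) saliency map as a 2D probability
    # distribution and draw fixation positions from it.
    rng = np.random.default_rng() if rng is None else rng
    h, w = saliency.shape
    prob = saliency.astype(np.float64).copy()
    ys, xs = np.mgrid[0:h, 0:w]
    fixations = []
    for _ in range(n_fixations):
        p = prob.ravel() / prob.sum()
        idx = rng.choice(h * w, p=p)
        y, x = divmod(int(idx), w)
        fixations.append((x, y))
        # Illustrative inhibition of return (an assumption, not taken
        # from the dataset): suppress saliency around the sampled
        # fixation so later draws favour unexplored regions.
        prob *= 1.0 - np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2.0 * sigma ** 2))
    return fixations

# Example: a 10-fixation scanpath from a random stand-in saliency map.
scanpath = sample_scanpath(np.random.rand(480, 640))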

Identifier
DOI https://doi.org/10.18419/darus-3361
Related Identifier IsCitedBy https://doi.org/10.1109/TVCG.2023.3242293
Metadata Access https://darus.uni-stuttgart.de/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.18419/darus-3361
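The DataCite record behind this page can be retrieved programmatically via the OAI-PMH endpoint above. The following is a short sketch using only the Python standard library; it assumes network access, and the namespace comment reflects the standard OAI-PMH 2.0 schema.

import urllib.request
import xml.etree.ElementTree as ET

# URL copied verbatim from the "Metadata Access" field above.
OAI_URL = ("https://darus.uni-stuttgart.de/oai"
           "?verb=GetRecord&metadataPrefix=oai_datacite"
           "&identifier=doi:10.18419/darus-3361")

with urllib.request.urlopen(OAI_URL) as resp:
    root = ET.fromstring(resp.read())

# OAI-PMH responses live in the http://www.openarchives.org/OAI/2.0/
# namespace; list the top-level elements of the GetRecord response.
for child in root:
    print(child.tag)
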
Provenance
Creator Wang, Yao
Publisher DaRUS
Contributor Bulling, Andreas; Wang, Yao; Bâce, Mihai
Publication Year 2023
Funding Reference DFG 251654672
Rights info:eu-repo/semantics/openAccess
OpenAccess true
Contact Bulling, Andreas (Universität Stuttgart)
Representation
Resource Type Deep learning code; Dataset
Format text/x-python, application/octet-stream, application/zip, text/tab-separated-values, image/png, application/x-ipynb+json, text/plain, text/plain; charset=US-ASCII, text/markdown, text/csv
Size (bytes, per file) 1705; 2677; 1306; 2571; 1056; 1924; 2959; 5064; 30071; 2575; 2208; 2464; 3766; 761; 2657; 2541; 949; 22820; 21635; 9565; 294195907; 1190; 112264; 8872; 8865; 112092; 481244317; 1941; 1899; 30204; 26134; 2308; 97368; 34199; 1978; 4296; 9502; 321138199; 11276; 165240995; 1751; 19640; 0; 780; 9880; 2265; 174091004; 8816; 8863; 86382; 337833; 5328; 374257; 2511; 2416; 2365; 2361; 26311; 4409; 2364; 2863; 3387; 3033; 2627; 2775; 3700; 2823; 25212; 44954; 1010; 11; 9300; 947; 19488002; 1473; 1551; 2111; 1717; 7004; 967; 1438; 1629; 4556; 2725; 2639; 3375; 169586; 35206; 1014; 6755; 213369338; 2948; 832; 914; 667; 2604; 2726; 2292; 2310; 999; 377047; 336878; 336926; 11770; 14162; 189451123; 2535; 5180; 8804; 1686; 14292
Version 2.0
Discipline Other