

Research on visual saliency (the property of attracting visual attention) is crucial for applications such as video summarization and object detection. While visual saliency has been studied for web-based and desktop user interfaces (UIs), mobile UIs remain largely unexplored.

Recently, researchers released the first dataset for visual saliency of mobile UIs.

Image credit: John Jackson via pexels.com, free licence

An eye-tracker was used to collect the gaze data for screenshots of different mobile apps from Android and iOS devices. The data were further modeled using classic stimulus-driven and data-driven deep learning models.
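For a sense of what a classic, stimulus-driven model looks like in practice, the sketch below applies OpenCV's spectral-residual saliency detector to a UI screenshot. This is one well-known parameter-free model, not necessarily among the exact models evaluated in the study, and the file names are placeholders:

```python
import cv2

# Spectral-residual saliency (Hou & Zhang): a classic, parameter-free,
# stimulus-driven model. Requires the opencv-contrib-python package.
image = cv2.imread("app_screenshot.png")  # placeholder path to a UI screenshot

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(image)  # float map in [0, 1]

if ok:
    # Scale to 8-bit for saving or visual inspection.
    cv2.imwrite("saliency_map.png", (saliency_map * 255).astype("uint8"))
```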

The results show that saliency is dominated by location and by text, image, and face biases, while size and color matter less. Such knowledge helps designers organize content to achieve the desired effect. The study also shows that data-driven deep learning models achieve better results than classic ones.

For graphical user interface (UI) design, it is important to understand what attracts visual attention. While previous work on saliency has focused on desktop and web-based UIs, mobile app UIs differ from these in several respects. We present findings from a controlled study with 30 participants and 193 mobile UIs. The results speak to a role of expectations in guiding where users look. Strong bias toward the top-left corner of the display, text, and images was evident, while bottom-up features such as color or size affected saliency less. Classic, parameter-free saliency models showed a weak fit with the data, and data-driven models improved significantly when trained specifically on this dataset (e.g., NSS rose from 0.66 to 0.84). We also release the first annotated dataset for investigating visual saliency in mobile UIs.
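The NSS (Normalized Scanpath Saliency) score cited above averages the z-scored predicted saliency at the pixels where human fixations landed, so higher is better and 0 corresponds to chance level. A minimal sketch of the metric (array names are illustrative, not from the paper's code):

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency.

    saliency_map: 2D array of predicted saliency values.
    fixation_map: 2D binary array with 1 at fixated pixels.
    Returns the mean z-scored saliency at fixation locations.
    """
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(z[fixation_map.astype(bool)].mean())
```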

Research article: Leiva, L. A., et al. “Understanding Visual Saliency in Mobile User Interfaces”, arXiv:2101.09176. Link: https://arxiv.org/abs/2101.09176





