Effect of Context on Smartphone Users’ Typing Performance in the Wild published in TOCHI!

Our work investigating the effect of context on smartphone users’ typing performance in the wild has been published in TOCHI.

Elgin Akpinar, Yeliz Yesilada and Pinar Karagoz, Effect of Context on Smartphone Users’ Typing Performance in the Wild, ACM Transactions on Computer-Human Interaction, Volume 30, Issue 3, Article No.: 36, pp 1–44, 2023, DOI: https://doi.org/10.1145/3577013

Abstract: Smartphones play a crucial role in daily activities; however, situationally-induced impairments and disabilities (SIIDs) can easily be experienced depending on the context. Previous studies have explored the effect of context, but mainly in controlled environments, with limited research done in the wild. In this article, we present an in-situ remote user study of 48 participants’ keyboard interactions on smartphones, including performance and context details. We first propose an automated approach for error detection that combines approaches introduced in the literature, and with a follow-up study we show that the accuracy of error detection is improved. We then investigate the effect of context on typing performance along five dimensions: environment, mobility, social, multitasking, and distraction, and reveal that context affects participants’ error rates significantly but with individual differences. Our main contribution is providing empirical evidence, from an in-situ study, of the effect of context on error rate.
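
The paper details its own combined error-detection approach; as a rough illustration of what "error rate" means in this line of work, the sketch below computes the classic minimum-string-distance (MSD) uncorrected error rate from text-entry research. This is a common baseline metric, not the method proposed in the paper.

```python
# Illustrative sketch only (not the paper's algorithm): the MSD-based
# uncorrected error rate widely used in text-entry studies, computed
# as the Levenshtein distance between the presented and transcribed
# text, normalised by the longer string's length.

def msd(a: str, b: str) -> int:
    """Minimum string distance (Levenshtein) between a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def uncorrected_error_rate(presented: str, transcribed: str) -> float:
    """Edit distance over the length of the longer string."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(uncorrected_error_rate("the quick brown", "teh quick brwn"))
```

An in-the-wild study would compute such a rate per typing session and then relate it to the logged context dimensions.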

Effects of data preprocessing on detecting autism in adults using web-based eye-tracking data published in BIT!

Our work investigating the effects of data preprocessing on detecting autism in adults using web-based eye-tracking data has been published in Behaviour & Information Technology (BIT)!

Erfan Khalaji, Sukru Eraslan, Yeliz Yesilada and Victoria Yaneva. Effects of data preprocessing on detecting autism in adults using web-based eye-tracking data. Behaviour & Information Technology, 2022. https://doi.org/10.1080/0144929X.2022.2127376

Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder, often associated with social and communication challenges and whose prevalence has increased significantly over the past two decades. The variety of different manifestations of ASD makes the condition difficult to diagnose, especially in the case of highly independent adults. A large body of work is dedicated to developing new and improved diagnostic techniques, emphasising approaches that rely on objective markers. One such paradigm is investigating eye-tracking data as a promising and objective method to capture attention-related differences between people with and without autism. This study builds upon prior work in this area that focussed on developing a machine-learning classifier trained on gaze data from web-related tasks to detect ASD in adults. Using the same data, we show that a new data pre-processing approach, combined with an exploration of the performance of different classification algorithms, leads to an increased classification accuracy compared to prior work. The proposed approach to data pre-processing is stimulus-independent, suggesting that the improvements in performance shown in these experiments can potentially generalise over other studies that use eye-tracking data for predictive purposes.
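
The paper's actual preprocessing pipeline is not reproduced here; as an illustration of what "stimulus-independent" preprocessing can look like, the sketch below reduces one participant's raw fixation log to aggregate features that do not depend on areas of interest defined for a particular web page. The feature set is invented for the example.

```python
# Hypothetical sketch: stimulus-independent aggregation of a fixation log
# into per-participant features (counts, durations, scanpath length),
# rather than features tied to page-specific areas of interest.
import math
import statistics

def gaze_features(fixations):
    """fixations: list of (x, y, duration_ms) tuples for one participant."""
    durations = [d for _, _, d in fixations]
    # Scanpath length: summed Euclidean distance between consecutive fixations.
    path = sum(math.dist(fixations[i][:2], fixations[i + 1][:2])
               for i in range(len(fixations) - 1))
    return {
        "fixation_count": len(fixations),
        "mean_duration": statistics.mean(durations),
        "sd_duration": statistics.stdev(durations) if len(durations) > 1 else 0.0,
        "scanpath_length": path,
    }

feats = gaze_features([(100, 100, 220), (140, 130, 180), (300, 250, 260)])
print(feats)
```

Because such features are computed the same way for any stimulus, a classifier trained on them can in principle be evaluated across studies that used different pages and tasks.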

Classification of Layout vs Data Tables published!

Our work on classifying tables on the web as layout or relational tables with a machine learning approach has been published in ACM Transactions on the Web (TWEB)!

Waqar Haider and Yeliz Yesilada. Classification of Layout vs Relational Tables on the Web: Machine Learning with Rendered Pages. ACM Transactions on the Web, 2022, Volume 17, Issue 1, Article No.: 1, pp 1–23. https://doi.org/10.1145/3555349

Abstract: Table mining on the web is an open problem, and none of the previously proposed techniques provides a complete solution. Most research focuses on the structure of the HTML document, but because of the nature and structure of the web, detecting relational tables is still a challenging problem. The Web Content Accessibility Guidelines (WCAG) also cover a wide range of recommendations for making tables accessible, but our previous work shows that these recommendations are not followed; therefore, tables remain inaccessible to disabled people and to automated processing. We propose a new approach to table mining that looks not at the HTML structure but at the pages as rendered by the browser. The first task in table mining on the web is to classify relational vs. layout tables, and here we propose two alternative approaches for that task. We first introduce our dataset, which includes 725 web pages with 9,957 extracted tables. Our first approach extracts features from a page after it has been rendered by the browser, then applies several machine learning algorithms to classify layout vs. relational tables. The best result is with Random Forest, with an accuracy of 97.2% (F1-score: 0.955) under 10-fold cross-validation. Our second approach classifies tables using images taken from the same sources with a Convolutional Neural Network (CNN), which gives an accuracy of 95% (F1-score: 0.95). Our work shows that the web’s true essence emerges after a page goes through a browser; using the rendered pages and tables, classification is more accurate than in the prior literature, paving the way to making tables more accessible.
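
The paper's actual feature set is not listed in the abstract, so the sketch below is purely illustrative of the rendered-page idea: given the bounding boxes a browser assigns to table cells, simple geometric regularity features (uniform column counts, low variance in cell widths) tend to separate relational grids from tables abused for layout.

```python
# Hypothetical sketch of rendered-geometry features for layout vs.
# relational classification; the feature names are invented here and
# are not the paper's feature set.
import statistics

def render_features(rows):
    """rows: list of rows, each a list of (x, y, width, height) cell boxes."""
    col_counts = [len(r) for r in rows]
    widths = [w for r in rows for (_, _, w, _) in r]
    return {
        "row_count": len(rows),
        # Relational tables tend to have the same number of cells per row...
        "uniform_columns": len(set(col_counts)) == 1,
        # ...and low variance in rendered cell widths within the grid.
        "width_sd": statistics.pstdev(widths),
    }

grid = [[(0, 0, 100, 20), (100, 0, 100, 20)],
        [(0, 20, 100, 20), (100, 20, 100, 20)]]
print(render_features(grid))
```

Feature vectors of this kind would then be fed to a classifier such as Random Forest, as described in the abstract.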

Web Accessibility Awareness Trainings Completed!

“Accessibility Workshops: Web Accessibility Trainings”, initiated by the General Directorate of Services for Persons with Disabilities and the Elderly of the Republic of Turkey, Ministry of Family and Social Services, were completed with a total of 10 training sessions.

Assoc. Prof. Dr. Yeliz YEŞİLADA gave the training sessions, which covered topics such as “What is web accessibility?”, “Who are the web accessibility stakeholders?”, “What are the web accessibility standards and guidelines?” and “What are web accessibility assessment methods?”. A total of 3,182 web content management officers, web designers, web developers, web software developers, managers and unit supervisors/branch managers attended these ten training sessions.

Further details can be found in the news announcement by the Ministry of Family and Social Services of the Republic of Turkey.

Can we classify familiar users from their eye-tracking data? Published in the Turkish Journal of Electrical Engineering and Computer Sciences!

We are happy to share that our work on automatically classifying familiar web users from their eye-tracking data has been published in the Turkish Journal of Electrical Engineering and Computer Sciences.

ÖDER, MELİH; ERASLAN, ŞÜKRÜ; and YESİLADA, YELİZ (2022) “Automatically classifying familiar web users from eye-tracking data: a machine learning approach,” Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 30: No. 1, Article 16. https://doi.org/10.3906/elk-2103-6

Abstract: Eye-tracking studies typically collect an enormous amount of data encoding rich information about user behaviours and characteristics on the web. Eye-tracking data has proved useful for usability and accessibility testing and for developing adaptive systems. The main objective of our work is to mine eye-tracking data with machine learning algorithms to automatically detect users’ characteristics. In this paper, we focus on exploring different machine learning algorithms to automatically classify whether or not users are familiar with a web page. We present our work with eye-tracking data from 81 participants on six web pages. Our results show that, by using eye-tracking features, we are able to classify whether users are familiar with a web page with a best accuracy of approximately 72% on raw data. We also show that with a resampling technique this accuracy can be improved by more than 10%. This work paves the way for using eye-tracking data to identify familiar users, which can serve different purposes; for example, it can be used to better locate certain elements on pages, such as adverts, to meet users’ needs, or to build better user profiles for usability and accessibility assessment of pages.
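
The abstract does not name the exact resampling technique used, so the sketch below shows one common option only: random oversampling of the minority class, which balances the training set when far fewer samples of one class (say, "familiar") are available.

```python
# Illustrative sketch of random oversampling (one common resampling
# technique; not necessarily the one used in the paper). Minority-class
# samples are duplicated at random until the classes are balanced.
import random

def oversample(samples, labels, seed=42):
    """Return a class-balanced copy of (samples, labels)."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        resampled = group + [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(resampled)
        out_y.extend([y] * target)
    return out_s, out_y

X, y = oversample([[0.1], [0.2], [0.9]],
                  ["unfamiliar", "unfamiliar", "familiar"])
print(sorted(y))  # both classes now have two samples each
```

Resampling is applied to the training folds only; evaluating on oversampled data would inflate the reported accuracy.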

Transcoding web pages to save energy published at SPE!

Our work on transcoding web pages to save energy has been published in the journal Software: Practice and Experience!

Ünlü, H., Yesilada, Y. Transcoding web pages via stylesheets and scripts for saving energy on the client. Softw Pract Exper. 2022; 52(4): 984–1003. DOI: 10.1002/spe.3046

Abstract: Mobile devices and accessing the web have become essential in our daily lives. However, their limitations in terms of both hardware, such as the battery, and software capabilities can affect the user experience, for example through battery drain. There are some best practices for web page design that have been shown to affect the downloading time of web pages. In this study, we report our experience in applying these practices to see their effect on energy saving. We propose two techniques, (1) concatenating external script and stylesheet files and (2) minifying external scripts and stylesheets, that can be used to transcode web pages to reduce energy consumption on the client side and therefore improve battery life. We present our experimental architecture, implementation, and a systematic evaluation of these two techniques. The evaluation results show that the proposed techniques can achieve approximately 12% processor energy saving and 4% power saving on two different client types, a 13% improvement in a typical laptop battery life, and a 4% improvement in a typical mobile phone battery life.
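
The two transcoding steps named above can be sketched on stylesheet text as follows. This is a minimal illustration, not the paper's transcoder, and production minifiers do considerably more than strip comments and whitespace.

```python
# Minimal sketch of the two transcoding techniques: (1) concatenate
# several external stylesheets into one resource, (2) minify the result.
import re

def concatenate(sheets):
    """Step 1: merge external stylesheets (one request instead of many)."""
    return "\n".join(sheets)

def minify_css(css):
    """Step 2: strip comments and collapse whitespace (fewer bytes)."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # remove comments
    css = re.sub(r"\s+", " ", css)                     # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)       # trim around punctuation
    return css.strip()

merged = concatenate(["/* header */ h1 { color : red ; }",
                      "p { margin : 0 ; }"])
print(minify_css(merged))
```

Fewer requests and fewer transferred bytes mean less radio and CPU activity on the client, which is where the energy saving comes from.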

A review on the Effect of Context published at ACM Surveys!

Very pleased to announce that our systematic survey on the Effect of Context on Small Screen and Wearable Device Users’ Performance has been published in ACM Computing Surveys.

Elgin Akpinar, Yeliz Yeşilada, and Selim Temizer. 2020. The Effect of Context on Small Screen and Wearable Device Users’ Performance – A Systematic Review. ACM Comput. Surv. 53, 3, Article 52 (May 2021), 44 pages. https://doi.org/10.1145/3386370

Abstract: Small screen and wearable devices play a key role in most of our daily tasks and activities. However, depending on the context, users can easily experience situationally induced impairments and disabilities (SIIDs). Previous studies have defined SIIDs as a new type of impairment in which an able-bodied user’s behaviour is impaired by the context, including the characteristics of a device and the environment. This article systematically reviews the empirical studies on the effect of context on SIIDs. In particular, this review aims to answer two research questions: which contextual factors examined in the literature can cause SIIDs, and how do different contextual factors affect small screen and wearable device users’ performance? The article systematically reviews 187 publications under a framework with five factors for context analysis: physical, temporal, social, task, and technical contexts. This review shows that a significant number of empirical studies have been conducted on some factors, such as mobility, but other factors, such as social factors, still need further consideration for SIIDs. Finally, some factors, such as multitasking, have been shown to have a significant impact on users’ performance, but not all factors have been empirically demonstrated to have an effect.

Keywords: wearable devices, context, small screen devices

Complexity work published in the International Journal of Human-Computer Studies!

Happy to announce that our work on predicting the perceived complexity of web pages, “Automated prediction of visual complexity of web pages: Tools and evaluations”, has been published in the International Journal of Human-Computer Studies.

Eleni Michailidou, Sukru Eraslan, Yeliz Yesilada, Simon Harper, Automated prediction of visual complexity of web pages: Tools and evaluations, International Journal of Human-Computer Studies, Volume 145, 2021, 102523, ISSN 1071-5819, DOI: https://doi.org/10.1016/j.ijhcs.2020.102523.

Abstract: Understanding visual complexity as it relates to websites has been an emergent area for many years. However, predicting the visual complexity of a website as perceived by users has been a real challenge. Perception is important because it influences user engagement, dictating whether users will find a page dull, engaging, or too complex. While others have suggested solutions with varying levels of success, here we propose a simple but accurate model that generates a Visual Complexity Score (VCS) based on common aspects of an HTML Document Object Model (DOM). We created our model based on a statistical analysis of 3,300 ratings from 55 users on 30 web pages. We then implemented this prediction model in an open-source Eclipse framework called ViCRAM that both predicts and visualises the complexity of web pages in the form of a pixelated heat map. Finally, we evaluated this model and the tool’s predictions with another user study of 6,240 ratings from 104 users on 30 web pages. This study shows that our tool’s predictions correlate strongly with users’ perceived complexity.
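
The general shape of a DOM-based score, as described above, can be sketched with the standard library. The features and weights below are invented for illustration only; ViCRAM's actual model and coefficients are in the paper.

```python
# Illustrative only: count some common DOM aspects with html.parser and
# combine them into a single score. The feature set and weights here are
# hypothetical, not ViCRAM's.
from html.parser import HTMLParser

class DomCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counts = {"img": 0, "a": 0, "div": 0, "words": 0}

    def handle_starttag(self, tag, attrs):
        if tag in self.counts:
            self.counts[tag] += 1

    def handle_data(self, data):
        self.counts["words"] += len(data.split())

def complexity_score(html):
    p = DomCounter()
    p.feed(html)
    c = p.counts
    # Hypothetical weights, chosen only to make the example concrete.
    return 2.0 * c["img"] + 1.0 * c["a"] + 0.5 * c["div"] + 0.01 * c["words"]

page = '<div><a href="#">home</a><img src="x.png">some body text here</div>'
print(complexity_score(page))
```

In the paper, weights of this kind are fitted against users' complexity ratings rather than chosen by hand.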

Keywords: Visual complexity; Prediction; Perception; Automated tool

EDA and EyeCrowdata published at ETWEB2020 – co-located event at ETRA2020!

Our two eye-tracking projects with our undergraduate students at METU NCC have been published at ETWEB 2020, an event co-located with ETRA 2020.

  • Abdulrahman Zakrt, Abdulmalik Obaidah Elmahgiubi, Beshir Alhomsi, Sukru Eraslan and Yeliz Yesilada, Eye-tracking Data Analyser (EDA): Web Application and Evaluation, ETRA ’20 Adjunct: ACM Symposium on Eye Tracking Research and Applications, June 2020, Article No.: 27, pp. 1–9, DOI: 10.1145/3379157.3391301
  • Naziha Shekh.Khalil, Ecem Dogruer, Abdulmohimen K. O. Elosta, Sukru Eraslan, Yeliz Yesilada and Simon Harper, EyeCrowdata: Towards a Web-based Crowdsourcing Platform for Web-related Eye-Tracking Data, ETRA ’20 Adjunct: ACM Symposium on Eye Tracking Research and Applications, June 2020, Article No.: 31, pp. 1–6, DOI: 10.1145/3379157.3391304

Our autism detection work published at IEEE TNSRE!

We are very pleased that our paper on detecting high-functioning autism in adults using eye tracking and machine learning has been published in IEEE Transactions on Neural Systems and Rehabilitation Engineering!

Victoria Yaneva, Le An Ha, Sukru Eraslan, Yeliz Yesilada, and Ruslan Mitkov. 2020. Detecting High-functioning Autism in Adults Using Eye Tracking and Machine Learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering (SCI-E), 28(6), 1254–1261. DOI: 10.1109/TNSRE.2020.2991675

Abstract: The purpose of this study is to test whether visual processing differences between adults with and without high-functioning autism captured through eye tracking can be used to detect autism. We record the eye movements of adult participants with and without autism while they look for information within web pages. We then use the recorded eye-tracking data to train machine learning classifiers to detect the condition. The data was collected as part of two separate studies involving a total of 71 unique participants (31 with autism and 40 control), which enabled the evaluation of the approach on two separate groups of participants, using different stimuli and tasks. We explore the effects of a number of gaze-based and other variables, showing that autism can be detected automatically with around 74% accuracy. These results confirm that eye-tracking data can be used for the automatic detection of high-functioning autism in adults and that visual processing differences between the two groups exist when processing web pages.