MobileEYE: Deep-Learning-based Mobile Device Eye Tracking Solution for Dynamic Visuals

Western Sydney University
Gunawardena, Kahandawala; Ginige, Jeewani; Javadi, Bahman; Lui, Gough

Licence & Rights:

Copyright Western Sydney University

Access:

Conditions apply

Conditional

Full description

This repository contains two zipped files - Models and Codes - related to the development and evaluation of the MobileEYE algorithm.

1. Models

These models were optimised for various inference environments, including mobile and edge devices.

- Baseline Models: The original deep learning models developed during the first phase of the research without any optimisations.

- Pruned Models: Models that have undergone pruning.

- Quantized Models: Models optimised using quantisation.

- ONNX Models: Models exported in ONNX format, primarily for integration with Flutter applications.
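The optimisation techniques named above (pruning and quantisation) can be illustrated with a minimal NumPy sketch. The sparsity level, scale, and weight values are illustrative assumptions, not the settings used for the MobileEYE models:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# Magnitude-based pruning: zero out the smallest 50% of weights
# (the sparsity level is an illustrative assumption).
threshold = np.percentile(np.abs(weights), 50)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Symmetric per-tensor int8 quantisation: map floats to [-127, 127]
# with a single scale factor, as in post-training quantisation.
scale = np.abs(pruned).max() / 127.0
quantised = np.round(pruned / scale).astype(np.int8)

# Dequantise to inspect the reconstruction error.
dequantised = quantised.astype(np.float32) * scale
print("sparsity:", float((pruned == 0).mean()))
print("max abs error:", float(np.abs(pruned - dequantised).max()))
```

Rounding to the nearest quantised level bounds the per-weight error by half the scale, which is why small tensors with large outliers quantise poorly without per-channel scales.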

2. Codes

This folder contains the source code for each phase and iteration of the study, organised according to the Sub Research Questions (SRQs) they address.

- SRQ_1_Iteration_1: Source code for deep learning models developed in the first iteration to answer the first sub-research question (SRQ1). These models were tested on static visual stimuli.

- SRQ_1_Iteration_2: Code for the second iteration of deep learning models developed to answer SRQ1, focusing on dynamic visual stimuli.

- SRQ_2_Iteration_1: The Android application for on-device inference and Python programs for cloud-based and edge-based inference, addressing the first iteration of the second sub-research question (SRQ2).

- SRQ_2_Iteration_2_n_3: Python programs used for edge-based inference on real edge devices, along with a Flutter application for on-device inference on the Samsung Galaxy S22, used in the second and third iterations for SRQ2.

- SRQ_3: Python programs for data analysis, the Python-Flask application, and the Web Application used for data collection. It also includes participant details related to SRQ3 in an Excel sheet.
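As a rough illustration of the kind of analysis the SRQ_3 programs cover, gaze-estimation accuracy is commonly reported as the Euclidean error between predicted and ground-truth gaze points. The function and sample values below are assumptions for illustration, not taken from the dataset:

```python
import math

def euclidean_error_cm(predicted, actual):
    """Euclidean distance between a predicted and a ground-truth
    gaze point, both given as (x, y) screen coordinates in cm."""
    return math.hypot(predicted[0] - actual[0], predicted[1] - actual[1])

# Hypothetical predictions vs. ground truth for three stimuli.
pairs = [((1.0, 2.0), (1.5, 2.0)),
         ((3.0, 4.0), (3.0, 4.0)),
         ((0.0, 0.0), (3.0, 4.0))]

errors = [euclidean_error_cm(p, a) for p, a in pairs]
mean_error = sum(errors) / len(errors)
print("per-point errors:", errors)
print("mean error (cm):", mean_error)
```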

This project has extended ethics consent, but the data cannot be published openly. To discuss the data or project, please contact Nishan Gunawardena <[email protected]> (ORCID 0000-0002-4629-7335).

Created: 2024-09-05

This dataset is part of a larger collection

Identifiers
  • DOI : 10.26183/0RYN-P137
  • Local : research-data.westernsydney.edu.au/published/c128a1606b1711efac071f166c32d99b