**Synchronous remote testing [14,15,20-23]**
In-person testing is simulated by using video and audio transmissions and remote desktop access.
- Nearly identical to conventional in-person testing, with comparable results [14,21-23]
- Indirect cues and context can be missed [20]
- Participants may prefer remote testing to in-person testing [22]
- Management challenges (eg, network issues, remote troubleshooting, and setup) [15,20,22]
- Users take longer to complete tasks than during in-person testing [15]
- Users make more errors than during in-person testing [15]
**Web-based questionnaires or surveys [14,20,21]**
Users fill out web-based questionnaires while completing tasks or after completing them (a minimal submission sketch follows this entry).
- More time-consuming for users^a [14]
- Less time-consuming for users than lab-based usability testing when usability is poor^a [21]
- Overall usability is rated lower than in lab-based usability testing [21]
- Identifies fewer specific usability problems [14]
- Enables the collection of data from many participants [20]
- Validity problems with the self-report approach [20]
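A minimal sketch of how a post-task questionnaire response might be collected and submitted, assuming a browser-based study. The question ids, the `SurveyResponse` shape, and the `/survey-response` endpoint are illustrative placeholders, not from the cited studies:

```typescript
// Hypothetical post-task questionnaire submission (endpoint and field
// names are assumptions for illustration only).
interface SurveyResponse {
  participantId: string;
  taskId: string;
  answers: Record<string, number>; // question id -> rating on a 1-5 Likert scale
}

async function submitSurvey(response: SurveyResponse): Promise<void> {
  await fetch("/survey-response", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(response),
  });
}

// Example: one participant rates two questions after finishing a task.
submitSurvey({
  participantId: "p-017",
  taskId: "task-1",
  answers: { "ease-of-use": 4, "confidence": 3 },
}).catch(console.error);
```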
**Postuse interview [24]**
After completing tasks, users are interviewed over the phone about the usability of a design; both qualitative and quantitative data are collected.
- Beneficial for participants with disabilities
- Quantitative data collected are comparable to in-person testing data
- Qualitative data are less rich than in-person testing data
- In-person testing is better suited to formative testing; remote testing is better suited to summative evaluation
**User-reported critical incidents or diaries [12,13,19,20]**
Users keep a diary and take notes during a period of use, or fill out an incident form when they identify a critical problem with an interface (an illustrative incident record follows this entry).
- Able to capture most high- and moderate-severity incidents^a [12,13]
- Users report fewer low-severity incidents than experts do [12,13]
- Validity problems with self-reports [20]
- Issues may be underreported compared with traditional methods^a [19]
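One possible data shape for a user-reported critical incident, mirroring the description above (self-assessed severity, a free-text account, and the location of the problem). All field names are hypothetical; the cited studies do not prescribe a schema:

```typescript
// Illustrative incident record for a user-reported critical incident.
type Severity = "low" | "moderate" | "high";

interface IncidentReport {
  reportedAt: string;   // ISO 8601 timestamp of the report
  screen: string;       // where in the interface the problem occurred
  severity: Severity;   // self-assessed by the user
  description: string;  // the user's own account of the problem
}

const example: IncidentReport = {
  reportedAt: new Date().toISOString(),
  screen: "medication-order-form",
  severity: "high",
  description: "Submitting the form silently discarded my entries.",
};

console.log(example);
```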
**User-provided feedback [25]**
While completing timed tasks, users type comments or feedback into a separate browser window; once a task is complete, the user rates its difficulty (a capture sketch follows this entry).
- Task completion rates were the same for remote and in-person testing
- No difference in the time taken to complete tasks
- Captures rich qualitative information through typed comments
- Less observational data captured than during in-person testing
- In some cases, captured fewer usability issues than during in-person testing
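A minimal sketch of per-task feedback capture as described above: free-text comments are timestamped while the task runs, and a difficulty rating is recorded on completion. The names (`TaskFeedback`, `startTask`, and so on) are assumptions for illustration, not the instrument used in the cited study:

```typescript
// Hypothetical per-task comment and difficulty-rating capture.
interface TaskFeedback {
  taskId: string;
  startedAt: number;                         // ms since epoch
  completedAt?: number;
  comments: { at: number; text: string }[];  // typed while the task runs
  difficulty?: 1 | 2 | 3 | 4 | 5;            // rated after completion
}

function startTask(taskId: string): TaskFeedback {
  return { taskId, startedAt: Date.now(), comments: [] };
}

function addComment(fb: TaskFeedback, text: string): void {
  fb.comments.push({ at: Date.now(), text });
}

function completeTask(fb: TaskFeedback, difficulty: 1 | 2 | 3 | 4 | 5): void {
  fb.completedAt = Date.now();
  fb.difficulty = difficulty;
}

// Example session: one task, two typed comments, rated 4 on completion.
const fb = startTask("task-1");
addComment(fb, "The save button was hard to find.");
addComment(fb, "Error message did not explain what went wrong.");
completeTask(fb, 4);
console.log(JSON.stringify(fb, null, 2));
```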
**Log analysis [20]**
The actions taken by the user (eg, clicks) are captured for later analysis (a minimal logging sketch follows this entry).
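Log analysis is typically implemented by instrumenting the interface itself. A minimal sketch, assuming a browser-based interface and a hypothetical `/usability-log` endpoint; the event fields are illustrative, not from the cited source:

```typescript
// Capture click events and periodically ship them for later analysis.
interface ClickEvent {
  timestamp: number; // ms since epoch
  target: string;    // short description of the clicked element
  page: string;      // page on which the click occurred
}

const buffer: ClickEvent[] = [];

document.addEventListener("click", (e: MouseEvent) => {
  const el = e.target as HTMLElement;
  buffer.push({
    timestamp: Date.now(),
    target: `${el.tagName.toLowerCase()}${el.id ? "#" + el.id : ""}`,
    page: location.pathname,
  });
});

// Flush the buffer every 10 seconds so events reach the server
// even if the participant never finishes the session cleanly.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("/usability-log", JSON.stringify(buffer.splice(0)));
}, 10_000);
```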