Author manuscript; available in PMC: 2021 Mar 6.
Published in final edited form as: ASSETS. 2020 Oct;2020:23. doi: 10.1145/3373625.3417030

TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users

Hae-Na Lee 1, Sami Uddin 2, Vikas Ashok 3
PMCID: PMC7936724  NIHMSID: NIHMS1666156  PMID: 33681868

Abstract

People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back and forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available flights on a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the-art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to the user in a compactly arranged tabular format that needs significantly less screen space than these items currently occupy in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and by 66.5% compared to that with a screen magnifier using a space-compaction method.

CCS CONCEPTS: Human-centered computing → Accessibility technologies, Empirical studies in accessibility

Keywords: web accessibility, usability, screen magnifier, low vision

1. INTRODUCTION

To interact with computer applications including web browsers, many people with low vision rely on screen-magnifier assistive technology [21, 30, 37] that enlarges screen content and also provides other customization options such as color inversion, contrast enhancement, cursor details, etc. Enlarging specific content pushes other portions of the screen out of view, so screen-magnifier users have to pan (i.e., move the magnifier lens or viewport) application content with their pointing devices to access different portions of the content to complete their tasks. However, frequently-performed important web tasks that involve navigating and comparing data records (e.g., search results, shopping products, flights, restaurants, etc.) cannot be easily performed by simple panning; the users also need to remember the details of the items as they pan the list of records, given that only a small portion of the list is visible on screen at any time when viewed under the magnifier lens (see Figure 1a). In a preliminary interview study with 16 low vision participants, all participants stated that this additional cognitive burden often makes for an arduous and tedious interaction experience, especially when lists are long and the required enlargement (zoom level) is high. The participants therefore desired a method that facilitates quick and easy comparison of data records based on their attributes (e.g., price, rating, etc.).

Figure 1:

Screenshots of a typical job-search website illustrating screen-magnifier interaction with web data records using (a) a screen magnifier with the zoom level set to 4×; and (b) a screen magnifier + TableView, with the same zoom level of 4×. Notice that with TableView, information of more data records can be packed within the magnifier viewport. A mouse click on the Select Attributes button displays a list of checkboxes (right below the button) with attributes as labels, using which the user can optionally select only a fraction of the data-record attributes to be shown in the TableView interface.

While previous approaches [7, 8] can alleviate the panning and cognitive burden to a considerable extent using space-compaction techniques, they are not sufficient for facilitating easy and efficient interaction with web data records, as these records and their contents are typically stacked vertically in webpages (as in Figure 1a), and therefore just reducing the space is unlikely to push many occluded records ‘up’ within the magnifier lens or viewport. Therefore, in this paper, we present TableView, a browser extension that automatically extracts data records from webpages using a state-of-the-art method, and then presents these records in a compact tabular form via a custom overlay popup interface. As illustrated in Figure 1b, this compact tabular presentation of data is tailored for enabling easier and faster comparisons between data records, which consequently improves the access efficiency by reducing the time required by screen-magnifier users to scan and compare all the data records before making a selection that satisfies their needs. Furthermore, informed by the interview findings, we made the TableView interface interactive by allowing the users to select the data-record attributes they wish to view, based on their preferences. This ‘attribute filter’ feature of TableView further improves interaction efficiency by reducing the amount of horizontal panning while viewing and comparing data records.

We evaluated TableView in a user study with the same participants who took part in our interview study. In the study, we observed that with TableView, the efficiency of access improved significantly: the time to compare data records and make a selection was reduced by 72.9% (avg.) on unfamiliar websites compared to that with just a screen magnifier, and by 66.4% (avg.) on familiar websites. Even when compared to a state-of-the-art space-compacting approach presented in [8], the average reductions in task-completion times were 66.5% and 56.1% for unfamiliar and familiar websites, respectively. Furthermore, the interaction burden measured in terms of NASA Task Load Index (TLX) score was also significantly reduced, by 65.9% (avg.) compared with just a screen magnifier, and by 55.4% (avg.) compared with a space-compacting method. Specifically, all study participants stated that TableView significantly reduced their mental burden while interacting with data records.

In sum, our contributions are the following:

  • The findings of an interview study eliciting the interaction issues that low-vision users face while interacting with web data records using screen magnifiers.

  • The design of TableView, a browser extension that provides an alternative tabular presentation of web data records on a webpage along with optional attribute filters to facilitate quick and easy comparisons between the records.

  • The findings of a user study with 16 low-vision screen-magnifier users evaluating the efficacy of TableView.

2. RELATED WORK

Our work is closely related to existing literature on the following topics: (i) interaction behavior of low-vision users; (ii) improving usability for low-vision users; (iii) non-visual interaction with tabular data; and (iv) web data-record extraction techniques.

2.1. Low-Vision Interaction Behavior

While there exists extensive literature on understanding and addressing the usability of screen readers for blind users [6, 7, 9, 19, 33, 34], the usability of screen magnifiers for people with low vision has been relatively understudied [22, 25, 31, 39]. Jacko et al. [22] focused on low-vision users who had age-related macular degeneration, and analyzed the users’ mouse cursor movements to observe their interaction strategies. In their experiment, they found that user performance was significantly impacted by icon size; specifically, performance improved as icons became larger. Szpiro et al. [39], on the other hand, investigated the general user behavior, interaction strategies, and challenges of screen-magnifier users when they interact with a variety of computing devices such as smartphones, tablets, and desktop computers. In their study, they observed that low-vision users typically use multiple assistive technologies simultaneously to meet their interaction needs while interacting with computing applications. They also observed that users frequently needed to make multiple adjustments to comfortably view the application content. Both of these studies focused only on generic aspects of low-vision interaction with computing devices, and not on aspects specific to browsing the web, especially interacting with data records.

2.2. Improving Usability for Low-Vision Users

A few research works have focused on improving the user-interaction experience with GUIs for people with low vision [7, 8, 18, 24]. Specifically, Kline et al. [24] present a set of accessibility tools that let users selectively magnify a portion of the screen area and also enable them to keep track of the mouse cursor location. Gajos et al. [18], on the other hand, propose a personalized interface-generation technique for people with motor and low-vision impairments to interact with general computer applications. Their approach automatically generates a personalized GUI from a given interface specification, to match the custom needs of users based on their motor and vision abilities. Bigham et al. [7] developed a magnification system that automatically figures out how much to enlarge webpage content without introducing adverse side effects, such as additional horizontal scrolling. Similarly, Billah et al. [8] present a context-preserving screen magnifier, named SteeringWheel, that attempts to keep semantically-related web elements close to each other within the magnified viewport, by discriminately magnifying only the non-whitespace content. Furthermore, they provided an easy-to-use Dial input device for low-vision users to interact with web content using simple gestures.

While the aforementioned solutions significantly improve usability, they are not tailored for the basic comparison tasks that screen-magnifier users frequently perform on almost every website. For instance, the SteeringWheel [8] system enables screen-magnifier users to sequentially navigate and view the content of data records only one record at a time, and therefore it is not easy to compare the records. Also, the space-efficient magnification methods in both [7] and [8] are mostly suitable for reducing the horizontal panning area of data records; these methods contribute less towards reducing the vertical panning area and enabling easy comparisons, as there is generally less whitespace between the contents vertically (e.g., see Figure 1a).

2.3. Non-Visual Interaction with Tabular Data

Accessibility and usability of tabular structures have been considerably explored in the literature [4, 5, 23, 38, 42]. However, almost all of these works have focused on blind screen-reader users [5, 15, 17, 23, 42]; research works on the usability of tables for low-vision screen-magnifier users are very few [31, 32]. In their study, Pascual et al. [32] found that all low-vision users could easily complete tasks while interacting with simple two-dimensional tables on an accessible website. Also, the average satisfaction rating for the tasks involving interaction with simple tables was very high (4 - Very easy). Their observations influenced our design choice for the data-record table in the TableView interface, where we use a simple two-dimensional table to display the content of data records.

2.4. Web Data-Record Extraction Techniques

Extracting data from webpages is not a new idea. Plenty of extraction techniques currently exist for extracting different kinds of data from webpages [1, 6, 12, 28, 35], including data records [3, 43, 44]. For instance, the DOM-tree based methods proposed in [3, 16, 29, 41] focus on extracting data records, including lists of items such as search results, shopping products, available flights, etc., by exploiting repetitive patterns in the page DOMs, such as identical XPaths, similar visual formatting, identical HTML subtree structure, tree alignment, and tree matching. On the other hand, Zhu et al. [44] propose an integrated approach based on custom-devised hierarchical conditional random fields for simultaneously detecting both data records and their attributes. As a pre-processing step, they rely on the VIPS webpage segmentation algorithm [13] to obtain the initial semantically-meaningful blocks. Similarly, other visual-segmentation based approaches have also been proposed [14, 26, 27], which exploit visual similarities to identify and extract data records. Given the plethora of existing algorithms for extracting data records, in this paper, instead of devising a new algorithm, we leveraged a state-of-the-art algorithm, specifically the STEM algorithm [16], in TableView.

3. UNCOVERING USABILITY ISSUES

We conducted an interview study with 16 low-vision users to understand the challenges and issues they face while interacting with webpages containing data records.

3.1. Participants

We recruited 16 low-vision users (8 female, 8 male), with an average age of 46.8 (Median = 47, SD = 11.7, Range = 29–65). The inclusion criteria required the participants to be regular screen-magnifier users; low-vision users with extremely low visual acuity who rely on screen readers were excluded from the interview. All participants were familiar with one or more of the following screen magnifiers: Windows Magnifier [30], ZoomText [37], Apple Zoom [21]; and they spent at least 3 hours a day browsing the web. The visual acuity of the participants ranged between 20/100 and 20/500. Table 1 presents the participant demographics.

Table 1:

Participant demographics. The participants self-reported their visual acuity, and the most frequently visited websites are the ones containing data records.

ID | Age/Gender | Diagnosis (C = Congenital, A = Adventitious) | Visual Acuity (Left Eye / Right Eye) | Daily Web Browsing | Five Most Frequently Visited Websites
P1 34/M Optic atrophy (A) 20/100 20/200 3 hours Google, YouTube, Amazon, Facebook, Twitter
P2 53/M Diabetic retinopathy (A) 20/200 20/400 4 hours Google, Amazon, Facebook, Twitter, Google Maps
P3 65/M Glaucoma (A) 0 20/100 3 hours Google, YouTube, Amazon, Facebook, Twitter
P4 51/F Leber congenital amaurosis (C) 20/100 20/200 5 hours Google, YouTube, Amazon, Facebook, Yahoo
P5 62/F Glaucoma (A) 20/400 0 3 hours Google, Amazon, Twitter, Yelp, Yahoo
P6 47/F Chorioretinal scarring (C) 20/400 20/200 3 hours Google, YouTube, Amazon, Facebook, Twitter
P7 45/M Albinism, nystagmus (C) 20/200 20/400 4 hours Google, YouTube, Amazon, Facebook, Twitter
P8 37/M Stevens-Johnson syndrome (A) 20/200 20/400 4 hours Google, YouTube, Amazon, Facebook, Yahoo
P9 29/F Retinitis pigmentosa (A) 20/400 0 3 hours Google, YouTube, Amazon, Facebook, Yahoo
P10 34/F Cancer (C) 20/200 0 4.5 hours Google, Amazon, Facebook, Twitter, Yahoo
P11 32/M Glaucoma (A) 20/100 20/200 5 hours Google, YouTube, Amazon, Twitter, Yahoo
P12 38/F Retinitis pigmentosa (A) 20/500 0 3 hours Google, YouTube, Amazon, Facebook, Twitter
P13 49/F Glaucoma (C) 20/200 20/400 4 hours Google, YouTube, Amazon, Twitter, Yahoo
P14 64/M Cataracts (A) 0 20/500 3.5 hours Google, Amazon, Facebook, Twitter, Yahoo
P15 47/F Astigmatism (C) 20/200 20/400 5 hours Google, YouTube, Amazon, Twitter, Yahoo
P16 62/M Congenital retinal scar (C) 20/400 20/400 3 hours Google, YouTube, Amazon, Facebook, Yelp

3.2. Interview Format

All interviews were conducted remotely via either phone or Skype. The interviews were semi-structured, with questions about the following two topics:

  • General questions about screen magnifiers and web browsing. E.g., What screen magnifiers do you use? What browsers do you use? What websites do you frequently visit? How many hours do you typically spend per day browsing the web? Do you order food online?

  • Usability issues and interaction strategies while accessing web data records. E.g., What problems do you face when navigating Google search results? What issues do you face when comparing products on shopping websites such as Amazon? How do you work around these issues?

The participants were also asked to illustrate some of the usability issues for clarification. With their permission, all sessions were audio-recorded and also screen-captured where applicable. Each interview lasted about 30 to 45 minutes. The collected interview data was then qualitatively analyzed using grounded theory, specifically an open coding technique [36], where we iteratively went over the user responses and identified key insights, pain points, and themes that recurred in the data. We detail our findings next.

3.3. Findings

High dependency on query webpages.

All participants indicated that they are highly dependent on the web for everyday activities such as grocery shopping, ordering food, shopping for products, seeking information, social networking, and entertainment, all of which required them to issue queries and interact with the resulting data records. Also, except for P1, P9, and P11, who were Mac users, all other participants owned Windows laptops or desktops. While the Windows users all preferred third-party software such as ZoomText for web browsing instead of the built-in Windows screen magnifier, the Mac users relied on Apple’s built-in Zoom screen magnifier. In either case, no participant used the browser’s own zoom feature to enlarge screen content. Also, all participants indicated that they spent at least 3 hours daily browsing the web.

Excessive panning for revisiting data records.

All 16 participants stated that they could not remember the values of desired attributes of data records, and therefore had to pan extensively back and forth to revisit many of the previously visited records to reconfirm the values. Regarding this, 12 participants also said that one of the strategies they use to reduce the cognitive burden during this panning process is to iteratively filter out the candidates in their minds by focusing on one attribute at a time. They also claimed that adopting such an iterative strategy helps them reduce horizontal panning, as different attributes of records are typically scattered spatially over the screen, and therefore all attributes cannot be viewed at the same time in the magnifier lens or viewport. By focusing on one attribute in each iteration, they simply have to align the viewport once and then pan or scroll down to see the values of that attribute for each data record. Nonetheless, they all agreed that while this strategy helped reduce their cognitive load, it did not help much with regard to panning reduction.

Unable to distinguish visited vs. unvisited links.

Five participants (P2, P5, P6, P12, and P15) stated that they had trouble distinguishing previously visited records from unvisited records, since they could not visually distinguish between the blue and purple colored links that represent these records. As a consequence, they often spent more time navigating the data records, due to redundant accesses to some of the records.

Reading only a few data records.

A majority (11) of the participants revealed that they almost always get tired after reading a few data records, and therefore do not go over all the data records in a webpage. Due to this, these participants said that they had previously missed out on better shopping deals, cheaper hotel rooms, better restaurant options, more relevant search-result links, etc. Seven of these eleven participants disclosed that sticking to the first few data records also reduces accidental selection of undesired data records; they said that these unintentional selections increase their frustration and interaction effort, as they have to navigate back to the webpage containing the data records and go over these records again.

Summary.

The interview study revealed several pain points of low-vision screen-magnifier users when they interact with web data records. From the study observations, it is clear that an alternative magnifier-friendly presentation of web data records is needed to improve the usability for low-vision users. In this regard, we designed and developed TableView, which is described next.

4. TABLEVIEW DESIGN

Figure 2 presents an architectural schematic illustrating the workflow of TableView. When a webpage is loaded, the TableView extension uses the STEM algorithm [16], which has been shown to be accurate and robust, to automatically detect and extract data records (if any) on the webpage. The extraction involves identifying the relevant nodes in the HTML Document Object Model (DOM) of the webpage that correspond to the data records and their attributes (e.g., price, duration, company, etc., on a flight reservation website). TableView also monitors the webpage for changes, and re-extracts the data records from the webpage if necessary (e.g., when a user applies search filters). The extension also injects a button labeled TableView right above the data records. When the user clicks this button, the extracted data records and their attributes are displayed as a compact 2D table in an interactive popup dialog, as shown previously in Figure 1b. Hovering the mouse cursor over a table cell generates an in-place overlay displaying the entire contents of the cell (e.g., long descriptions). The user can also select attribute filters at the top of the popup dialog to view only the desired attributes.

Figure 2:

An architectural workflow schematic of TableView.
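To make this workflow concrete, below is a minimal TypeScript sketch of the button-injection and page-monitoring steps. It is illustrative only: extractRecords, renderTablePopup, and scheduleReextraction are hypothetical helper names standing in for the STEM-based extractor and the popup renderer, which the paper does not name.

```typescript
// Hypothetical wrappers around the STEM extractor and popup renderer.
declare function extractRecords(doc: Document): Record<string, string>[];
declare function renderTablePopup(records: Record<string, string>[]): void;
declare function scheduleReextraction(): void;

// Inject a "TableView" button right above the detected list of data records.
function injectTableViewButton(recordListRoot: HTMLElement): void {
  const button = document.createElement("button");
  button.textContent = "TableView";
  button.addEventListener("click", () => {
    renderTablePopup(extractRecords(document)); // show the compact 2D table
  });
  recordListRoot.insertAdjacentElement("beforebegin", button);
}

// Monitor the page for changes and re-extract when the DOM mutates,
// e.g., when the user applies search filters that replace the record list.
const observer = new MutationObserver(() => scheduleReextraction());
observer.observe(document.body, { childList: true, subtree: true });
```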

4.1. Extracting Data Records

As mentioned earlier, we used the existing STEM (suffix tree-based extraction method) algorithm [16] due to its proven high extraction precision as well as its robustness in filtering out noisy decorative elements in dense, content-rich webpages. The core idea underlying the STEM algorithm is to find repetitive HTML tag-path (similar to XPath) patterns in the webpage DOM. In this regard, the tag paths (e.g., <body>, <ul>, <li>) of all the nodes in the DOM are computed, and then a unique integer is assigned to each unique tag path. Using these integer codes instead of the nodes, the entire webpage is then represented as a sequence of integers. From this sequence, a suffix tree is constructed using the well-known suffix-tree construction algorithm [40]. This way, the task of finding the list of data records in a webpage is transformed into finding the corresponding repeating node sequence in the constructed suffix tree. To find the correct node sequence, the STEM algorithm applies four custom data-record-specific filters to the candidate repeating node sequences in the suffix tree, and then picks the optimal node sequence as the final choice for extracting the data records.
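The tag-path encoding step can be sketched as follows; this is a reconstruction from the paper’s description of STEM [16] (not the original implementation), showing how a DOM is reduced to an integer sequence in which repeated data records surface as repeated subsequences:

```typescript
// Encode each DOM node's tag path as an integer, turning the page into an
// integer sequence that a standard suffix-tree construction can then index.
function encodeTagPaths(root: Element): number[] {
  const pathToCode = new Map<string, number>(); // unique integer per tag path
  const sequence: number[] = [];

  function visit(node: Element, parentPath: string): void {
    const tagPath = parentPath + "/" + node.tagName.toLowerCase(); // e.g., /body/ul/li
    if (!pathToCode.has(tagPath)) {
      pathToCode.set(tagPath, pathToCode.size + 1);
    }
    sequence.push(pathToCode.get(tagPath)!);
    for (const child of Array.from(node.children)) {
      visit(child, tagPath);
    }
  }

  visit(root, "");
  return sequence; // repeated records appear as repeated subsequences here
}
```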

In [16], the authors showed that the precision of the STEM algorithm was above 0.9 on five different datasets, with the best precision being 0.986. Similarly, they showed that the average recall too was very high – at least 0.8 for all five datasets, with the best recall being 0.961. We also evaluated the STEM algorithm on our own custom dataset comprising 200 webpages (a total of 6,397 data records) collected from the following categories: shopping, social media, entertainment, government, job search, booking, classifieds, and search engines. The exact choices of webpages were based on the Alexa ranking [2], which tracks the most-visited websites on the web. The collected 200 webpages containing data records were manually annotated by injecting custom data attributes into the webpage DOMs before saving them in the dataset. Specifically, we leveraged the web browser’s inspect functionality to easily find the root nodes of each data record in the webpage DOM, and then added the custom data attribute to each of these nodes before saving the webpage. These injected data attributes serve as markers to measure the extraction performance of the STEM algorithm. The overall precision of identification on our custom dataset was 0.912 and the recall was 0.929, thereby validating the capability of this algorithm in accurately identifying the data records. A closer inspection revealed that almost all errors were due to arbitrary advertisement content inside the data records, which made the underlying DOM subtrees of these records different from those of other data records on the webpage. Also, in some webpages, the advertisement content was arranged in the form of data records, thereby resulting in incorrect classification.
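The scoring against the injected markers can be sketched as follows; the marker attribute name (data-record-root) is an assumption for illustration, as the paper does not specify the exact attribute it injected:

```typescript
// Score extracted record roots against the manually annotated markers.
function scoreExtraction(doc: Document, extracted: Set<Element>) {
  const annotated = new Set<Element>(
    Array.from(doc.querySelectorAll("[data-record-root]")) // assumed marker name
  );
  let truePositives = 0;
  for (const node of extracted) {
    if (annotated.has(node)) truePositives++;
  }
  const precision = extracted.size > 0 ? truePositives / extracted.size : 0;
  const recall = annotated.size > 0 ? truePositives / annotated.size : 0;
  return { precision, recall };
}
```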

4.2. TableView Interface

The interface of TableView is shown in Figure 1b. The choice of a tabular format to display the data records in TableView was informed by the findings of recent research studies [31, 42]. Specifically, Williams et al. [42] found that blind screen-reader users could complete information lookup tasks much faster and with reduced cognitive effort when entire webpages were transcoded as HTML tables. Also, Moreno et al. [31] found that low-vision users preferred more coherent and organized content presentation, preferably at the center, so that the movements in the field of vision that cause loss of context could be reduced.

Other than the entire data-record table, TableView also provides customization options that enable screen-magnifier users to selectively view only the desired attributes of the data records. This feature is made available via the Select Attributes button at the top of the interface. When the user clicks on this button, a list of checkboxes with the corresponding attribute names is made visible just below the Select Attributes button, as shown in Figure 3. The users can select the attributes (i.e., check the corresponding boxes) based on their preferences and then press the Filter button, and TableView accordingly updates the data-record table in its interface to show only the selected columns. Note that by default, TableView does not show this list of checkboxes, in order to allocate more space for the actual data-record table given the limited area of the magnifier viewport. The implementation details regarding the selection of attribute names, record-table characteristics, and user-event handlers are described next.

Figure 3:

Attribute selection feature of TableView. Clicking the Select Attributes button displays the list of checkboxes from which the user can choose the attributes to view in the data-record table.

Determining attribute names.

TableView first tries to extract the attribute names from the DOM metadata of the HTML nodes corresponding to the attributes of a data record. Specifically, it checks if the attribute nodes or their descendants contain the "aria-label", "data", or "id" properties, and if so, it extracts them to be used as attribute names (see Figure 4). From the extracted metadata, we use a predefined dictionary of keywords to filter out noisy textual content (e.g., words such as "search", "search-result", "card", etc.). This dictionary was built by mining the commonly occurring noisy text in our aforementioned custom dataset. Also, 74% (148) of the webpages in the dataset had at least one of the three properties ("aria-label", "data", "id") in the metadata of data-record attributes. In case this metadata is absent, TableView simply generates a unique arbitrary name of the form 'Attribute i', where i is an integer. This naming heuristic is sketched after Figure 4 below.

Figure 4:

An illustration of extracting attribute names from DOM metadata. As this example website has data attributes in the DOM, their values are the best sources for extracting attribute names.
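A minimal sketch of this naming heuristic follows, assuming an illustrative subset of the mined noise dictionary (NOISE_WORDS is a stand-in, not the authors’ full dictionary) and reading data-* values via the standard dataset property:

```typescript
const NOISE_WORDS = new Set(["search", "search-result", "card"]); // illustrative subset

// Derive a column name for one data-record attribute node; `index` drives
// the 'Attribute i' fallback when no usable DOM metadata is found.
function attributeName(attrNode: Element, index: number): string {
  const candidates = [attrNode, ...Array.from(attrNode.querySelectorAll("*"))];
  for (const node of candidates) {
    const label =
      node.getAttribute("aria-label") ??
      Object.values((node as HTMLElement).dataset)[0] ?? // data-* properties
      node.getAttribute("id");
    if (label && !NOISE_WORDS.has(label.toLowerCase())) {
      return label;
    }
  }
  return `Attribute ${index}`; // fallback when metadata is absent
}
```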

Table characteristics.

All attribute columns are assigned the same width, and the columns are organized left-to-right in the same order as they appear in the DOM. To view the full contents of an attribute for a particular record, the user simply needs to hover over the corresponding cell; an overlay is dynamically generated to display the entire content of the cell. TableView automatically closes the overlay as soon as the user moves the cursor outside its boundaries. Also, based on the recommendations of Moreno et al. [31], TableView assigns the same font size to the text in all table cells regardless of how the text is actually rendered in the webpage. Furthermore, TableView renders each attribute using the same HTML element as in the webpage containing the data records. The overlay behavior is sketched below.
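A minimal sketch of the hover overlay, under the assumption that entering a cell spawns the overlay and leaving it closes the overlay; positioning and styling details are illustrative, not the paper’s specification:

```typescript
// Attach an in-place overlay that shows a cell's full content on hover.
function attachCellOverlay(cell: HTMLTableCellElement, fullText: string): void {
  let overlay: HTMLDivElement | null = null;

  cell.addEventListener("mouseenter", () => {
    overlay = document.createElement("div");
    overlay.textContent = fullText; // the entire, untruncated cell content
    overlay.style.position = "absolute";
    overlay.style.zIndex = "10000";
    const rect = cell.getBoundingClientRect();
    overlay.style.left = `${rect.left + window.scrollX}px`;
    overlay.style.top = `${rect.bottom + window.scrollY}px`;
    document.body.appendChild(overlay);
  });

  // Close the overlay as soon as the cursor leaves the cell.
  cell.addEventListener("mouseleave", () => {
    overlay?.remove();
    overlay = null;
  });
}
```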

Relaying user actions.

Some of the attributes of data records are actionable in the webpage (e.g., links, buttons), and therefore clicking on these attributes in TableView’s custom popup dialog interface should produce the same intended outcomes. Therefore, whenever the user clicks an actionable attribute (e.g., a Save button), TableView closes the popup dialog and simulates a click on the corresponding attribute in the original webpage, which often leads to the loading of a new page in the browser. To facilitate this relay of user actions, TableView maintains a mapping between the table contents in its popup dialog and the corresponding attributes of the data records in the original webpage, as sketched below.
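A minimal sketch of this relay, assuming a sourceOf map (populated while the table is built; the name is illustrative) from each actionable element in the table copy to its counterpart in the original page:

```typescript
// Maps actionable elements in the popup table to their originals in the page.
const sourceOf = new WeakMap<Element, HTMLElement>();

// Forward clicks on actionable table cells (e.g., a copied Save button) to
// the original webpage element, closing the popup dialog first.
function relayActionableClicks(tableRoot: HTMLElement, popup: HTMLElement): void {
  tableRoot.addEventListener("click", (event) => {
    const original = sourceOf.get(event.target as Element);
    if (original) {
      popup.remove();   // close the TableView popup dialog
      original.click(); // simulate the click in the original webpage
    }
  });
}
```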

5. EVALUATION

5.1. Participants

For the study, we recruited the same 16 low-vision participants who took part in the preliminary interview (see Table 1). The two studies were separated by a time gap of approximately two months. Fourteen participants mentioned that they could not recollect the exact topics discussed in the interview study, but vaguely remembered that the interviews were about web browsing. Two participants (P1 and P7), however, mentioned that they could remember most of the details of the interview. Also, as seen in Table 1, the participant group was very heterogeneous, comprising a wide array of low-vision conditions, and the participants used different screen-magnifier settings, thereby limiting the impact of participant overlap on the external validity.

5.2. Apparatus

The study was conducted remotely, and the participants used their own computers to perform the study tasks. Zoom or Skype conferencing software was used for communication and screen-sharing, and the entire session was recorded with the participants’ permission. Except for P1, P9, and P11, who used Zoom, Apple’s built-in screen magnifier, on their MacBooks, all other participants had the ZoomText screen magnifier installed on their computers. Also, all participants had Google Chrome installed on their computers. The TableView extension was emailed (via Google Drive) to the participants. The experimenter assisted the participants in installing the extension via the conferencing software. However, 4 participants (P2, P4, P5, and P10) required assistance from their family members to install the extension.

5.3. Design

In a within-subject experiment, the participants were asked to perform representative ‘data-record interaction’ tasks under the following study conditions:

  • Screen Magnifier (SM): The participants only used their preferred screen magnifier (e.g., ZoomText) to interact with data records specified in the tasks.

  • Screen Magnifier + Space Compaction (SC): The participants used their preferred screen magnifier, and a space-compaction method proposed in [8] was also enabled.

  • Screen Magnifier + TableView without Attribute Filters (TV): The participants used their preferred screen magnifier, and TableView was enabled without attribute filters in the popup dialog.

  • Screen Magnifier + TableView with Attribute Filters (TVF): The participants used their preferred screen magnifier, and TableView was enabled with attribute filters in the popup dialog.

For convenience, the space-compaction algorithm was incorporated into the TableView extension and could be turned ‘ON/OFF’ with a checkbox selection in the extension GUI. When the space compaction was ‘ON’, the TableView popup was disabled, and vice versa. There was also another checkbox to turn the attribute filters ‘ON/OFF’. In this way, we simulated the latter three study conditions using a single browser extension.

The study was divided into two parts, Parts A and B. In Part A, the participants did the tasks on unfamiliar websites, and in Part B, they did the tasks on a familiar website.

Part A – Unfamiliar websites.

For each of the above conditions, the participants were asked to perform one task that required them to go over the first 15 data records in a website and select the record that satisfied some pre-specified criteria. To avoid learning effects, we selected four different but similar job-search websites (https://www.efinancialcareers.com/, https://www.ziprecruiter.com/, https://www.roberthalf.com/, and https://www.dice.com/) for the study. These websites were vetted to ensure that the STEM algorithm accurately extracted the data records, so as to enable a fair comparison between study conditions. The pre-specified criteria comprised value constraints on the following job attributes: salary, location, company, and type. During the recruitment phase, we found that all participants were unfamiliar with these websites. The assignment of websites to conditions was randomized, and the ordering of the conditions was counterbalanced using the Latin square method [10]. We also ensured that the selection criteria matched the data record in the middle of the list in all of these websites. To avoid unforeseen issues, we used cached versions of these webpages for the study.

Part B – Familiar websites.

For each of the above conditions, the participants were asked to perform a task that required them to go over the first 15 data records in a familiar website and select the record that satisfied some pre-specified criteria. Since all participants were familiar with the Amazon website, we chose that website for this part of the study. The criteria chosen were price, shipping, and company. To avoid learning effects, four different queries (‘laptop’, ‘smartphone’, ‘tv’, ‘desktop’) were used for the four conditions. As in Part A, we set the selection criteria to match the data record in the middle of the list, and cached webpages were used for the study. The assignment of queries to conditions and the ordering of conditions were counterbalanced using the Latin square method [10].

5.4. Procedure

Before starting the experiment, the participants were given enough time to download and install the TableView extension either by themselves or with the assistance of their family members. They were also given practice time (10 minutes) to familiarize themselves with both the space-compaction and TableView study conditions. The participants first completed Part A of the study, followed by Part B. Since the task websites in Part A belonged to a different domain (job search) with different page layouts from those in Part B (shopping), there was no issue of learning effects, and therefore we did not counterbalance the tasks between the two parts of the study. The participants were given 15 minutes to complete each task. Post-study questionnaires were administered after each study condition. Exit interviews were conducted after the participants completed all their tasks. The screen-sharing and recording features of the conferencing software were both turned on to ensure that all user-interaction activities were captured for later analysis. Each study session lasted about 2.5 hours, and all conversations during the study were in English.

Measurements.

We measured the completion times for each task during the study. Subjective measurements included the System Usability Scale (SUS) [11] and NASA Task Load Index (TLX) [20] scores, and qualitative subjective feedback. Errors, i.e., selecting incorrect data records, were also recorded, and the participants were asked to continue the task and find the correct record.

5.5. Results

Task completion times.

Figure 5a and Figure 5b present the individual task completion times for each participant under all four study conditions, for Parts A and B of the study, respectively. As seen in Figure 5, for both Parts A and B, TableView (the TV and TVF conditions) significantly reduced the time needed to navigate data records, compared to both the screen magnifier (SM) and space compaction (SC) conditions. Overall, we observed that with TableView (specifically the TVF condition), the task completion time was reduced by 72.9% (avg.) on unfamiliar websites (Part A) compared to that with just a screen magnifier, and by 66.4% on familiar websites (Part B). The average reductions in task-completion times with TableView (TVF condition) compared to the SC condition were 66.5% and 56.1% for unfamiliar websites (Part A) and familiar websites (Part B), respectively. The differences in task completion times between the study conditions were found to be statistically significant for both Part A (Kruskal-Wallis test, h = 51.636, p < 0.001) and Part B (Kruskal-Wallis test, h = 53.739, p < 0.001). For Part A, pairwise comparisons between the following study conditions using the Wilcoxon signed-rank test also revealed significant differences: (a) SM vs. SC (W = 136, p < 0.001); (b) SC vs. TV (W = 136, p < 0.001); and (c) TV vs. TVF (W = 136, p < 0.001). Similar observations were made for Part B: (a) SM vs. SC (W = 136, p < 0.001); (b) SC vs. TV (W = 136, p < 0.001); and (c) TV vs. TVF (W = 136, p < 0.001).

Figure 5:

Average task completion times measured for each of the participants under different study conditions: Screen Magnifier (SM); Screen Magnifier + Space Compaction (SC); Screen Magnifier + TableView without Attribute Filters (TV); and Screen Magnifier + TableView with Attribute Filters (TVF).

An inspection of the recorded data revealed that in all the study conditions, most participants could not view all attributes in the magnifier viewport, and hence they spent time panning horizontally to access the desired attributes. However, in the screen magnifier (SM) condition, the participants often had to pan over large patches of whitespace, which caused orientation issues for some participants, who then accidentally navigated to a different data record. The horizontal panning and disorientation were significantly reduced in the space compaction (SC) condition, but the space compaction could not assist much with vertical panning, as the content within each record was arranged vertically, and therefore barely one data record was visible even with space compaction. No such issues were observed in the TableView conditions (TV and TVF), as the content of each data record was completely reorganized into a table row.

A similar trend was observed when the participants interacted with a familiar website in Part B of the study, except that the participants were more aware of the correct panning direction even while navigating over whitespace patches. Also, notice that the task completion times were slightly higher in Part B than in Part A for all conditions, as the data records in the Amazon website used in Part B had more content than the data records in the job-search websites used in Part A.

Errors.

The total number of errors, i.e., selections of an incorrect data record for a task, was the highest (26) in the screen magnifier (SM) condition compared to the other conditions (SC = 12, TV = 6, and TVF = 2). The errors occurred either due to the participants forgetting the selection criteria for a task, or due to incorrect mental association of attributes with data records, especially when accidentally shifting focus to neighboring data records during panning.

SUS and TLX scores.

Table 2 shows the SUS and TLX scores for each participant. Regarding SUS, overall the TV and TVF conditions were rated much higher than the SM and SC conditions. A one-way ANOVA test showed a statistically significant effect of study condition on SUS scores (F = 186.67, p < 0.00001). Pairwise comparisons between the following study conditions using the paired t-test also revealed significant differences: (a) SM vs. SC (t = 8.58, p < 0.00001); (b) SC vs. TV (t = 10.53, p < 0.00001); and (c) TV vs. TVF (t = 5.16, p = 0.0001). However, notice in Table 2 that the SUS scores for the TV condition are still very high, and for 4 participants the SUS scores for the TV and TVF conditions are the same. A close analysis attributed this observation to the counterbalancing of study conditions; whenever the participants did the task in the TV condition before the TVF condition, they gave a high rating because they felt it was much better than the default screen magnifier. Once they became aware of the additional attribute-filtering feature in the TVF condition, they felt it was much better than the TV condition. Since they had already formed an opinion before doing the tasks in Part B, we did not collect the SUS and TLX scores during Part B.

Table 2:

SUS and TLX scores under the four study conditions: Screen Magnifier (SM); Screen Magnifier + Space Compaction (SC); Screen Magnifier + TableView without Attribute Filters (TV); and Screen Magnifier + TableView with Attribute Filters (TVF). The best SUS and TLX scores for each participant are highlighted in bold.

ID | SUS (SM / SC / TV / TVF) | TLX (SM / SC / TV / TVF)
P1 25 50 72.5 80 76 57 19 10
P2 45 55 82.5 90 71 52.3 38 31.3
P3 27.5 60 87.5 87.5 73.3 57 36.6 26.6
P4 22.5 72.5 82.5 90 71 53.6 37.6 28.6
P5 40 75 80 87.5 75.3 56.6 24.6 16
P6 37.5 55 90 92.5 82 64.3 44 37
P7 47.5 72.5 87.5 95 79.6 61 29.6 19.6
P8 25 55 75 77.5 71 51.6 34 25.3
P9 40 57.5 80 85 79.3 61 33.6 26.3
P10 20 62.5 87.5 87.5 67 48.6 32 30
P11 45 55 95 95 65 61.3 23.6 15
P12 27.5 52.5 90 90 66.6 49.6 38 23
P13 40 50 82.5 85 67.6 52 43.3 33.3
P14 20 65 87.5 90 69 49 33 25.6
P15 25 50 80 85 57.6 55 25.3 17.3
P16 20 62.5 92.5 97.5 72.3 52.6 31.3 23.6

Regarding the interaction workload, as observable in Table 2, the participants overall provided much better (i.e., lower) workload scores for the TV and TVF conditions compared to the SM and SC conditions. Specifically, we observed a significant effect of the study conditions on the TLX scores (Kruskal-Wallis test, h = 54.861, p < 0.001). Statistically significant differences were also observed between pairs of conditions: (a) SM vs. SC (Wilcoxon signed-rank test, W = 136, p = 0.00003); (b) SC vs. TV (Wilcoxon signed-rank test, W = 136, p = 0.0004); and (c) TV vs. TVF (Wilcoxon signed-rank test, W = 136, p = 0.0004). A closer inspection revealed that among the six sub-scales (i.e., Mental Demand, Physical Demand, Temporal Demand, Performance, Effort, and Frustration) of the two-part TLX questionnaire (i.e., load rating between 0–100 and individual weighting with pairwise comparisons), the ‘Mental Demand’ and ‘Frustration’ sub-scales had the highest impacts – on average 3 times and 3.8 times higher, respectively, for the SM condition compared to the TVF condition, and on average 2.4 times and 2.7 times higher for the SC condition compared to the TVF condition. We noticed the same trend in the weighted scores computed from the second part of the TLX questionnaire, i.e., the ‘Mental Demand’ and ‘Frustration’ sub-scales had the highest differences in scores – on average 1.4 times and 2.0 times higher, respectively, for the SM condition compared to the TVF condition, and on average 1.2 times and 1.9 times higher for the SC condition compared to the TVF condition.

Qualitative feedback.

In the post-study interview, all participants stated that TableView made it very easy and quick for them to peruse data records and sometimes make almost instantaneous visual comparisons. Also, 7 participants expressed that TableView’s linear arrangement of attributes in a table row made it easy for them to locate these attributes by simply maintaining their visual focus along a line as they pan horizontally, as opposed to the status quo, which requires them to move their eye focus all over the viewport to locate the record attributes. To quote P6:

With ZoomText, I have to move my eyes frequently in different directions to look at the various features of a product, and this is very tiring. With your TableView, I just have to look in one direction which is much easier.

All four participants with glaucoma (P3, P5, P11, and P13) stated that the tabular representation of data is very suitable for interaction given their tunnel vision. To quote P5:

I just need to focus on one part of the screen and then simply scroll, and the amount of eye movement needed is far less compared to what I currently need to do in websites.

Regarding the errors during the tasks, nearly two-thirds (10) of the participants explained that in the screen magnifier (SM) condition, and to some extent in the space compaction (SC) condition, they got mentally tired after navigating the first few records, as it took significant time to pan and check the values of attributes for each data record. Specifically, they stated that the content area they had to cover was much larger in these two conditions. As a result, at some point during the task they developed an incorrect notion of the task requirements, thereby resulting in errors.

6. DISCUSSION

The study results clearly demonstrate the effectiveness of TableView in improving the overall user experience of screen-magnifier users, even on familiar webpages. The analyses of study observations also revealed some key insights as well as potential avenues for further improvement. We briefly describe some of these next.

Customizing TableView’s table appearance and personalization.

A majority of the participants (12) expressed the need for a feature in TableView that would enable them to customize the width of columns. The rationale behind this need was that some attributes have higher priority than others, and hence more screen space should be allotted to these attributes so that more of their text is visible in the magnifier viewport. Four participants also expressed the desire to save their customization settings (i.e., attribute selections, column widths) for websites, so that they do not have to repeat the same process each time they interact with the same websites. Also, they wanted to use their saved settings on other similar websites. One participant even wondered if TableView could be made flexible enough to let users arrange the attributes of data records themselves in a drag-and-drop fashion, instead of using a fixed tabular format. This way, a more convenient and personalized presentation of data records can be made possible for low-vision users.

Identifying functional dependencies between page segments.

In addition to displaying the data records, six participants also wanted the related ‘support segments’ (e.g., search filters, sort options) to be made available in the TableView popup dialog. To support this feature, novel techniques need to be designed that can automatically identify and extract the support segments connected to the data records in webpages. Also, maintenance mechanisms should be devised to automatically fetch and refresh data records in the TableView interface whenever search filters are applied.

Limitations and future work.

One limitation of our study was the overlap in participant groups between the preliminary interview study and the subsequent usability study evaluating TableView. As explained before, although the impact of this overlap on the external validity is significantly reduced given the heterogeneity of the participant group, encompassing a multitude of different low-vision conditions, there may still exist some bias. Therefore, future studies with an independent sample of participants are necessary to remove this bias. Furthermore, studies with individual groups targeting specific types of low-vision conditions are also needed to uncover the unique user requirements and preferences pertaining to these groups.

Another limitation is that TableView relies completely on the STEM algorithm to extract data records, and therefore its accuracy is directly tied to that of the STEM algorithm. As mentioned earlier, this algorithm failed when there was arbitrary advertisement content either within the data records or in between the data records. Therefore, additional modifications to the original STEM algorithm and fail-safe mechanisms are needed to overcome these challenges. We also intend to explore semi-automatic, user-assisted extraction algorithms, where the users can indicate the location of the data records by simply clicking on one of the records, and the algorithm residing in the browser extension can instantly extract all the records using pattern-matching techniques guided by the user’s input. All of these are in the scope of our future research.

Finally, the current prototype of TableView lacks personalization. The community of low-vision users is a very heterogeneous group comprising different visual conditions such as central vision loss, peripheral vision loss, glare light sensitivity, etc. Even two people with the same visual condition may have different interaction needs depending on the extent and severity of their condition. Therefore, further research is needed to tailor TableView for different vision conditions, and also to provide customization options for users.

7. CONCLUSION

This paper presented an approach to facilitate convenient interaction with web data records for low-vision screen-magnifier users. The approach, manifested in the form of the TableView browser extension, automatically identifies the data records in webpages, extracts the comparable attributes and their values, and then presents this information to screen-magnifier users in a compact tabular format via an interactive GUI that also lets the users filter the attributes based on their preferences. A user study demonstrated the immense potential of TableView in significantly improving the user experience and reducing the interaction burden for low-vision users. Generalizing TableView to automatically identify functional relationships between segments in webpages, and then display these segments in close proximity via an interactive interface, can further enhance its usability.

ACKNOWLEDGMENTS

This work was supported by NSF Award 1805076 and NIH grant R01EY030085. We thank the anonymous reviewers for the insightful feedback that helped improve the paper.

Contributor Information

Hae-Na Lee, Stony Brook University.

Sami Uddin, Old Dominion University.

Vikas Ashok, Old Dominion University.

REFERENCES

  • [1] Alarte Julian, Insa David, and Silva Josep. 2017. Webpage Menu Detection Based on DOM. In SOFSEM 2017: Theory and Practice of Computer Science, Steffen Bernhard, Baier Christel, van den Brand Mark, Eder Johann, Hinchey Mike, and Margaria Tiziana (Eds.). Springer International Publishing, Cham, 411–422.
  • [2] Alexa Internet, Inc. 2020. Alexa - Top sites. https://www.alexa.com/topsites.
  • [3] Álvarez Manuel, Pan Alberto, Raposo Juan, Bellas Fernando, and Cacheda Fidel. 2007. Finding and Extracting Data Records from Web Pages. In Proceedings of the 2007 International Conference on Embedded and Ubiquitous Computing (Taipei, Taiwan) (EUC’07). Springer-Verlag, Berlin, Heidelberg, 466–478.
  • [4] Amtmann Dagmar, Johnson Kurt, and Cook Debbie. 2002. Making Web-Based Tables Accessible for Users of Screen Readers. Library Hi Tech 20, 2 (2002), 221. https://www.learntechlib.org/p/96485
  • [5] Asakawa Chieko and Itoh Takashi. 1999. User Interface of a Nonvisual Table Navigation Method. In CHI ‘99 Extended Abstracts on Human Factors in Computing Systems (Pittsburgh, Pennsylvania) (CHI EA ‘99). Association for Computing Machinery, New York, NY, USA, 214–215. 10.1145/632716.632850
  • [6] Ashok Vikas, Puzis Yury, Borodin Yevgen, and Ramakrishnan IV. 2017. Web Screen Reading Automation Assistance Using Semantic Abstraction. In Proceedings of the 22nd International Conference on Intelligent User Interfaces (Limassol, Cyprus) (IUI ‘17). Association for Computing Machinery, New York, NY, USA, 407–418. 10.1145/3025171.3025229
  • [7] Bigham Jeffrey P. 2014. Making the Web Easier to See with Opportunistic Accessibility Improvement. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, Hawaii, USA) (UIST ‘14). ACM, New York, NY, USA, 117–122. 10.1145/2642918.2647357
  • [8] Billah Syed Masum, Ashok Vikas, Porter Donald E., and Ramakrishnan IV. 2018. SteeringWheel: A Locality-Preserving Magnification Interface for Low Vision Web Browsing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ‘18). ACM, New York, NY, USA, Article 20, 13 pages. 10.1145/3173574.3173594
  • [9] Borodin Yevgen, Bigham Jeffrey P., Dausch Glenn, and Ramakrishnan IV. 2010. More Than Meets the Eye: A Survey of Screen-reader Browsing Strategies. In Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A) (Raleigh, North Carolina) (W4A ‘10). ACM, New York, NY, USA, Article 13, 10 pages. 10.1145/1805986.1806005
  • [10] Bradley James V. 1958. Complete counterbalancing of immediate sequential effects in a Latin square design. J. Amer. Statist. Assoc. 53, 282 (1958), 525–528.
  • [11] Brooke John et al. 1996. SUS-A quick and dirty usability scale. Usability evaluation in industry 189, 194 (1996), 4–7.
  • [12] Cai Deng, Yu Shipeng, Wen Ji-Rong, and Ma Wei-Ying. 2003. VIPS: a Vision-based Page Segmentation Algorithm. Technical Report MSR-TR-2003-79. 28 pages. https://www.microsoft.com/en-us/research/publication/vips-a-visionbased-page-segmentation-algorithm/
  • [13] Cai Deng, Yu Shipeng, Wen Ji-Rong, and Ma Wei-Ying. 2004. Block-Based Web Search. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (Sheffield, United Kingdom) (SIGIR ‘04). Association for Computing Machinery, New York, NY, USA, 456–463. 10.1145/1008992.1009070
  • [14] Cai Zehuan, Liu Jin, Xu Lamei, Yin Chunyong, and Wang Jin. 2017. A Vision Recognition Based Method for Web Data Extraction. Advanced Science and Technology Letters 143 (2017), 193–198.
  • [15] Chiousemoglou Mechmet and Jürgensen Helmut. 2011. Setting the Table for the Blind. In Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments (Heraklion, Crete, Greece) (PETRA ‘11). Association for Computing Machinery, New York, NY, USA, Article 1, 8 pages. 10.1145/2141622.2141624
  • [16] Fang Yixiang, Xie Xiaoqin, Zhang Xiaofeng, Cheng Reynold, and Zhang Zhiqiang. 2018. STEM: a suffix tree-based method for web data records extraction. Knowledge and Information Systems 55, 2 (2018), 305–331.
  • [17] Fernandes António Ramires, Carvalho Alexandre, Almeida José João, and Simoes Alberto. 2006. Transcoding for web accessibility for the blind: semantics from structure. (2006).
  • [18] Gajos Krzysztof Z., Wobbrock Jacob O., and Weld Daniel S. 2007. Automatically Generating User Interfaces Adapted to Users’ Motor and Vision Capabilities. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA) (UIST ‘07). ACM, New York, NY, USA, 231–240. 10.1145/1294211.1294253
  • [19] Gardiner Steven, Tomasic Anthony, and Zimmerman John. 2015. EnTable: Rewriting Web Data Sets as Accessible Tables. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (Lisbon, Portugal) (ASSETS ‘15). Association for Computing Machinery, New York, NY, USA, 443–444. 10.1145/2700648.2811344
  • [20] Hart Sandra G. and Staveland Lowell E. 1988. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Human Mental Workload, Hancock Peter A. and Meshkati Najmedin (Eds.). Advances in Psychology, Vol. 52. North-Holland, 139–183. 10.1016/S0166-4115(08)62386-9
  • [21] Apple Inc. 2020. Change Accessibility Zoom preferences on Mac - Apple Support. https://support.apple.com/guide/mac-help/change-zoom-preferences-foraccessibility-mh40579/mac.
  • [22] Jacko Julie A., Barreto Armando B., Marmet Gottlieb J., Chu Josey Y. M., Bautsch Holly S., Scott Ingrid U., and Rosa Robert H. Jr. 2000. Low Vision: The Role of Visual Acuity in the Efficiency of Cursor Movement. In Proceedings of the Fourth International ACM Conference on Assistive Technologies (Arlington, Virginia, USA) (Assets ‘00). ACM, New York, NY, USA, 1–8. 10.1145/354324.354327
  • [23] Khurana Rushil, McIsaac Duncan, Lockerman Elliot, and Mankoff Jennifer. 2018. Nonvisual Interaction Techniques at the Keyboard Surface. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ‘18). Association for Computing Machinery, New York, NY, USA, 1–12. 10.1145/3173574.3173585
  • [24] Kline Richard L. and Glinert Ephraim P. 1995. Improving GUI Accessibility for People with Low Vision. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ‘95). ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 114–121. 10.1145/223904.223919
  • [25] Leonard V Kathlene, Jacko Julie A, and Pizzimenti JJ. 2006. An investigation of handheld device use by older adults with age-related macular degeneration. Behaviour & Information Technology 25, 4 (2006), 313–332.
  • [26] Li Longzhuang, Liu Yonghuai, Obregon Abel, and Weatherston Matt. 2007. Visual segmentation-based data record extraction from web documents. In 2007 IEEE International Conference on Information Reuse and Integration. IEEE, 502–507.
  • [27] Liu Wei, Meng Xiaofeng, and Meng Weiyi. 2009. Vide: A vision-based approach for deep web data extraction. IEEE Transactions on Knowledge and Data Engineering 22, 3 (2009), 447–460.
  • [28] Melnyk Valentyn, Ashok Vikas, Puzis Yury, Soviak Andrii, Borodin Yevgen, and Ramakrishnan IV. 2014. Widget Classification with Applications to Web Accessibility. In Web Engineering, Casteleyn Sven, Rossi Gustavo, and Winckler Marco (Eds.). Springer International Publishing, Cham, 341–358.
  • [29] Miao Gengxin, Tatemura Junichi, Hsiung Wang-Pin, Sawires Arsany, and Moser Louise E. 2009. Extracting Data Records from the Web Using Tag Path Clustering. In Proceedings of the 18th International Conference on World Wide Web (Madrid, Spain) (WWW ‘09). Association for Computing Machinery, New York, NY, USA, 981–990. 10.1145/1526709.1526841
  • [30] Microsoft. 2020. Use Magnifier to make things on the screen easier to see. https://support.microsoft.com/en-us/help/11542/windows-use-magnifier-to-make-things-easier-to-see.
  • [31] Moreno Lourdes, Valencia Xabier, Pérez J. Eduardo, and Arrue Myriam. 2018. Exploring the Web Navigation Strategies of People with Low Vision. In Proceedings of the XIX International Conference on Human Computer Interaction (Palma, Spain) (Interacción 2018). Association for Computing Machinery, New York, NY, USA, Article 13, 8 pages. 10.1145/3233824.3233845
  • [32] Pascual Afra, Ribera Mireia, Granollers Toni, and Coiduras Jordi L. 2014. Impact of Accessibility Barriers on the Mood of Blind, Low-vision and Sighted Users. Procedia Computer Science 27 (2014), 431–440. 10.1016/j.procs.2014.02.047. 5th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion, DSAI 2013.
  • [33] Power Christopher, Freire André, Petrie Helen, and Swallow David. 2012. Guidelines Are Only Half of the Story: Accessibility Problems Encountered by Blind Users on the Web. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Austin, Texas, USA) (CHI ‘12). ACM, New York, NY, USA, 433–442. 10.1145/2207676.2207736
  • [34] Pradhan Alisha, Mehta Kanika, and Findlater Leah. 2018. “Accessibility Came by Accident”: Use of Voice-Controlled Intelligent Personal Assistants by People with Disabilities. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ‘18). Association for Computing Machinery, New York, NY, USA, 1–13. 10.1145/3173574.3174033
  • [35] Prasad Jyotika and Paepcke Andreas. 2008. Coreex: Content Extraction from Online News Articles. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (Napa Valley, California, USA) (CIKM ‘08). Association for Computing Machinery, New York, NY, USA, 1391–1392. 10.1145/1458082.1458295
  • [36] Saldaña Johnny. 2015. The coding manual for qualitative researchers. Sage.
  • [37] Freedom Scientific. 2020. ZoomText Screen Magnifier and Screen Reader - zoomtext.com. https://www.zoomtext.com/.
  • [38] Spiliotopoulos Dimitris, Xydas Gerasimos, Kouroupetroglou Georgios, Argyropoulos Vasilios, and Ikospentaki Kalliopi. 2010. Auditory Universal Accessibility of Data Tables Using Naturally Derived Prosody Specification. Univers. Access Inf. Soc. 9, 2 (June 2010), 169–183. 10.1007/s10209-009-0165-0
  • [39] Szpiro Sarit Felicia Anais, Hashash Shafeka, Zhao Yuhang, and Azenkot Shiri. 2016. How People with Low Vision Access Computing Devices: Understanding Challenges and Opportunities. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (Reno, Nevada, USA) (ASSETS ‘16). ACM, New York, NY, USA, 171–180. 10.1145/2982142.2982168
  • [40] Ukkonen E. 1995. On-Line Construction of Suffix Trees. Algorithmica 14, 3 (Sept. 1995), 249–260. 10.1007/BF01206331
  • [41] Wen Yan, Zeng Qingtian, Duan Hua, Zhang Feng, and Chen Xin. 2018. An Automatic Web Data Extraction Approach based on Path Index Trees. International Journal of Performability Engineering 14, 10, Article 2449 (2018), 11 pages. 10.23940/ijpe.18.10.p21.24492460
  • [42] Williams Kristin, Clarke Taylor, Gardiner Steve, Zimmerman John, and Tomasic Anthony. 2019. Find and Seek: Assessing the Impact of Table Navigation on Information Look-up with a Screen Reader. ACM Trans. Access. Comput. 12, 3, Article 11 (Aug. 2019), 23 pages. 10.1145/3342282
  • [43] Zhai Yanhong and Liu Bing. 2005. Web Data Extraction Based on Partial Tree Alignment. In Proceedings of the 14th International Conference on World Wide Web (Chiba, Japan) (WWW ‘05). Association for Computing Machinery, New York, NY, USA, 76–85. 10.1145/1060745.1060761
  • [44] Zhu Jun, Nie Zaiqing, Wen Ji-Rong, Zhang Bo, and Ma Wei-Ying. 2006. Simultaneous Record Detection and Attribute Labeling in Web Data Extraction. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Philadelphia, PA, USA) (KDD ‘06). Association for Computing Machinery, New York, NY, USA, 494–503. 10.1145/1150402.1150457
