Author manuscript; available in PMC: 2025 Dec 12.
Published in final edited form as: Int Web All Conf. 2024 Oct 22;2024:45–58. doi: 10.1145/3677846.3677860

“We are at the mercy of others’ opinion”: Supporting Blind People in Recreational Window Shopping with AI-infused Technology

Rie Kamikubo 1, Hernisa Kacorri 2, Chieko Asakawa 3
PMCID: PMC12697220  NIHMSID: NIHMS1998142  PMID: 41394318

Abstract

Engaging in recreational activities in public spaces poses challenges for blind people, often involving dependency on sighted help. Window shopping is a key recreational activity that remains inaccessible. In this paper, we investigate the information needs, challenges, and current approaches of blind people in recreational window shopping to inform the design of existing wayfinding and navigation technology for supporting blind shoppers in exploration and serendipitous discovery. We conduct a formative study with a total of 18 blind participants that includes both focus groups (N=8) and interviews for requirements analysis (N=10). We find a desire for push notifications of promotional information and for pull notifications about shops of interest, such as a brand's target audience. Information about obstacles and points-of-interest required customization depending on one's mobility aid as well as the presence of a crowd, children, and wheelchair users. We translate these findings into specific information modalities and rendering in the context of two existing AI-infused assistive applications: NavCog (a turn-by-turn navigation app) and Cabot (a navigation robot).

Keywords: Blind people, navigation, guide robots, AI, information requirements

1. INTRODUCTION

Participation in recreational activities plays an important role in enhancing people’s quality of life [45, 59, 84, 110]. Providing equal access to these pursuits is imperative for fostering social inclusion [40], a key element in The United Nations Convention on the Rights of Persons with Disabilities affirming the rights to leisure and recreation [73]. However, blind people often experience limited recreational opportunities [8, 67, 89, 99], especially those that take place in public spaces such as shopping [99]. This is partially due to natural or built barriers and negative cultural attitudes [26, 71, 95, 108, 126].

To promote leisure, well-being, and social inclusion of blind people, in this paper we explore technological solutions for challenges typically faced by this community in recreational shopping. Specifically, the search aspect of browsing, also known as window shopping, can be extremely difficult to perform without access to visual cues and awareness of the surroundings. We build on the rich literature that focuses on daily living activities such as wayfinding [9, 49, 93, 101] and safety in navigation [53, 57, 64] to elicit design recommendations that can inform existing AI-infused systems to support recreational window shopping.

Why window shopping?

Shopping is not only limited to acquiring consumer staples; it has been considered a meaningful activity that people engage in for a variety of personal and social reasons [5, 80, 126]. Many people choose to go window shopping as a form of recreation to learn about the latest trends, gain fashion or decorating inspiration, or socialize with friends [11, 28]. The serendipitous discovery of new information in window shopping also adds pleasurable experiences [10, 124]. While our attitudes and experiences of shopping changed during the COVID-19 pandemic, we don’t know how long this impact will last [27]. Sheth [102] argues that “when an existing habit or a necessity is given up, it always comes back as a recreation or a hobby,” and wonders whether shopping can “become more an outdoor activity or hobby or recreation.” Others also see hope for a post-pandemic retail renaissance [23, 120] aligned with the recent trends [97, 103]. Thus, the need to make window shopping inclusive remains.

To inform the design of existing wayfinding and navigation systems for enhancing recreational window shopping opportunities, we conduct a formative study with blind participants (N=18). First, we gain insights from two in-person focus groups (4 participants each) on window-shopping challenges and opportunities, and identify a set of information needs of blind shoppers. Then, we conduct remote one-on-one interview sessions with a second group (N=10) to assess the types of notifications blind people prefer. In these sessions, participants are grounded in a simulated scenario of future guidance systems (e.g., walking with guide robots) that support both obstacle avoidance and information access for recreational shopping.

Our findings reveal a desire for push notifications of promotional information (e.g., window displays), whereas pull notifications of detailed shop information (e.g., general fashion styles or items carried, target age groups) are preferred only for shops of interest. Participant feedback indicates that information about obstacles and points-of-interest (POIs) needs customization depending on the user's mobility aid and the presence of a crowd, children, and wheelchair users. We discuss how these findings can be translated into specific information rendering and modalities in the context of two AI-infused assistive applications: a guide robot (i.e., Cabot [37]) and a turn-by-turn navigation mobile app (i.e., NavCog [101]).

The main contribution of this work is empirical. By gathering and analyzing information requirements with blind people through a combination of focus groups and one-on-one interviews, we present design implications for future guidance systems to take into account the requirements shaped by the specific user needs and preferences in the context of recreational shopping. We also see how our study provides insights that are applicable to a broader context, such as “The Future of Shopping Centers” released in 2018 by Kearney [16]. This remains relevant to many later articles suggesting technological innovations for post-COVID in-person shopping and retail experiences [17, 76, 91, 120], with the potential for AI in this space [38].

2. RELATED WORK

This research aims to improve access to recreational opportunities for blind people. We draw on two areas of related work in accessibility: research with the blind community on recreational activities in public spaces, with a particular focus on shopping scenarios, and research on wayfinding and navigation technology for blind people that can be applied in these scenarios.

2.1. Accessible Recreational Activities

For many blind people, mobility and information access challenges can limit their recreational pursuits in public spaces, whether indoor or outdoor [66, 67, 89, 99]. There are additional barriers to participation in terms of transportation and high dependence on others’ assistance [66, 90]. In response, we see efforts to understand and remove these barriers in recreational activities, including travel [25, 77, 108], nature [8, 18, 19], and sports [46, 90, 94]. Nevertheless, acquiring visual information is often a prerequisite for recreational experiences, such as in museums and art galleries. To foster inclusive museum experiences, various researchers have explored the accessibility of exhibitions and artworks, via guided tours, audio descriptions, or tactile cues [1, 42, 44, 58, 92]. Technology to support navigation in museums for blind people has also been an important area of exploration [47, 54, 118].

Shopping, the focus of our work, is another activity that offers considerable recreational potential [5], particularly window shopping [11, 124]. Shopping malls facilitate this experience as entertainment venues [28, 30], and their accessibility could hold various implications for blind people despite the growth of online shopping [6, 7]. Baker et al. [6] suggest that beyond meeting recreational needs, blind and low vision consumers derive a sense of belonging and presence within their community through in-person shopping experiences. To explore ways to make these experiences accessible, previous research has investigated the challenges faced by this consumer group concerning retail layout, purchasing methods, and environment barriers within malls [7, 13, 51, 109, 116, 126].

We see extensive research on technological solutions for shopping for blind people, as surveyed in a literature review [29]. However, it is often discussed in the context of daily activities, such as grocery shopping [127]. Many early efforts focus on supporting navigation inside the supermarket and identification of desired items on shelves, often using the infrastructure of RFID systems with barcode scanning [2, 62, 72, 81]. With advances in computer vision and mobile technologies, researchers have introduced new interaction methods for shopping [29], including wearable devices offering audio guidance to supplement visual information of consumer staples [12, 83, 119, 128, 129] and applications that indicate clothing patterns and colors [105–107, 125].

Despite the large body of work on accessible shopping for blind people, supporting recreational window shopping remains unexplored, with a few exceptions that test wayfinding and navigation systems in shopping malls (e.g., [98, 101]). Yet, these efforts are intended to support blind people in reaching their destination effectively. While this is still an important research direction, Guerreiro et al. [36], investigating airport navigation for blind people, found a significant limitation in travelers' experiences even when supposedly assisted: they are escorted to the target gate but have no opportunity to explore nearby shops or restaurants, despite their desire to "get up and move around." Exploration and serendipitous discovery of new information, as in window shopping, remains a challenge for blind people.

2.2. Accessible Wayfinding and Navigation

There has been a wide array of research on orientation & mobility (O&M) for blind people beyond directional guidance [114], such as informing users of nearby points of interest (POIs) [34, 49, 75] and obstacles [53, 69, 85], with various form factors being investigated [55, 88, 112]. We expand on specific form factors used in these wayfinding and navigation systems that have the potential to be applied in recreational window shopping scenarios: smartphones and wearable devices, and robots.

Smartphones and Wearable Devices.

Smartphone-based solutions have been common in previous research, allowing researchers to leverage various sensors and technologies for user localization and implement turn-by-turn instructions to reach a target destination [21, 56, 57, 78, 101]. These solutions often also convey the location of nearby POIs, helping users build a rich awareness of their surroundings [9, 49, 75]. In particular, Microsoft's Soundscape smartphone app [9] uses 3D audio cues tied to features in the real-world environment to provide information about POIs in the user's vicinity. Gleason et al. extend this design by augmenting spatial audio with rich textual descriptions of POIs for additional functional or visual context.

Another line of research focuses on detecting obstacles and providing auditory or tactile feedback to help blind users avoid them, often using wearable devices [22, 53, 64, 96]. These systems have the potential for application in minimizing barriers in shopping environments, e.g., obstacles in the open aisles of malls [109]. Yet, designing appropriate obstacle feedback remains complex, as it can increase confusion and cognitive load [100].

Robots.

The development of guide robots for blind people has been an active area of research for decades [4, 32, 37, 79, 111]. One of the key benefits of robot guidance is obstacle avoidance. By physically grasping a robot that guides and influences their trajectory, blind users can gain enhanced mobility and safety [32, 74, 79, 111, 115]. In such interactions, feedback regarding obstacles can be minimized to prevent overwhelming the user. Instead, guide robots can broaden their functionality to provide additional information related to POIs and landmarks [37, 60, 115].

We see that building on the work of wayfinding and navigation systems for blind people opens potential avenues for recreational window shopping opportunities, especially with guide robots that can support both obstacle avoidance and provision of environmental information [37]. However, there is still limited understanding of the design features needed to apply these systems in recreational shopping scenarios. To our knowledge, there is no comprehensive study that investigates the requirements in this context. For example, what are the relevant POIs in recreational window shopping? What kind of information do blind users want to access and control for enhanced recreational shopping experiences? We aim to address these questions in our study.

3. STUDY DESIGN

With our goal to inform the design of AI-infused systems to ultimately support recreational window shopping, we conduct a formative study with blind participants (N=18). The study is broken into two parts: (i) in-person focus groups involving 8 blind participants and (ii) remote one-on-one interview sessions with a new group of 10 participants. Section 3.2 outlines the study details. We obtained (IRB-approved) informed consent and compensated participants for their time and expertise at a rate of $25 per hour.

3.1. Participants

Participant recruitment for this study took place in the United States. We recruited participants through an existing mailing list in our lab, consisting of blind people from the local community who had previously participated in studies conducted by our research group, and through word of mouth. Table 1 shows the demographic information of all blind participants, with a noticeable gender skew towards women. While this may be partly due to our convenience sampling approach, gender-related motivation factors may also have influenced participants' involvement in our study [61]. Yet, in hindsight, our study surfaces perspectives from groups that are often underrepresented within HCI [82] and accessibility [52].

Table 1:

Participant demographics from focus groups and interviews including gender, age, mobility aid, and vision level.

Study Session          ID   Gender  Age  Mobility Aid               Vision Level
Focus Group 1          P1   woman   67   guide dog                  legally blind
                       P2   woman   69   white cane                 totally blind
                       P3   woman   74   white cane                 totally blind
                       P4   woman   78   white cane                 totally blind
Focus Group 2          P5   woman   39   guide dog                  totally blind
                       P6   woman   46   guide dog                  totally blind
                       P7   woman   65   guide dog                  totally blind
                       P8   woman   65   white cane                 legally blind
One-on-One Interviews  P9   man     29   white cane                 totally blind
                       P10  woman   35   guide dog                  totally blind
                       P11  woman   36   guide dog                  totally blind
                       P12  man     39   wheelchair & sighted help  legally blind
                       P13  man     44   guide dog                  totally blind
                       P14  woman   46   guide dog                  totally blind
                       P15  woman   48   white cane                 totally blind
                       P16  woman   52   white cane                 totally blind
                       P17  woman   65   white cane                 totally blind
                       P18  woman   72   white cane                 totally blind

In the 90-minute focus group sessions, we invited 4 participants per session (Group 1: P1 - P4, Group 2: P5 - P8). Participants were 39 to 78 years old (median=66, IQR=60.3–70.3). Half had a guide dog; the other half used only the white cane. While we assigned participants to each group based on their schedule availability, all Group 1 members mainly shopped in person, whereas all Group 2 members described choosing between online and in-person shopping depending on the type of merchandise (e.g., in-person for clothes/shoes, online for gifts/electronics).

In the 45-minute one-on-one sessions, we invited 10 participants (P9 - P18) with ages ranging from 29 to 72 years old (median=45, IQR=36.8–51). For half of the participants, the white cane was the primary mobility aid. One participant had previously used a white cane but, since transitioning to a wheelchair, relied on sighted help. The other 4 participants had a guide dog. Six participants (P9, P10, P11, P13, P16, P18) reported going in-person shopping (e.g., in shopping malls) about 4 to 6 times a year. The other participants (P12, P14, P15, P17) indicated 1 to 3 times a year. These frequencies are much lower than the average rate of mall visits in the U.S. (i.e., 3.1 times per month in 2018) [68]. Yet, all participants, except P11, indicated that they like to do in-person shopping as a recreational activity, especially for clothes or shoes. P11 expressed equal preference for in-person and online shopping, as the online shopping experience provides a "sense of better independence."

3.2. Procedure

Focus group participants were engaged in a free discussion centered around strategies and coping mechanisms to perform recreational shopping, the main challenges of navigating public places such as shopping malls, and information needs to enjoy window shopping. Specifically, the session started with the research team clarifying the study's objectives. The team then provided examples of state-of-the-art navigation technologies with a focus on those that have a physical form factor, such as guide robots that could be extended to support window shopping. To facilitate discussion on the information needs and technological opportunities for blind window shopping, the groups were also instructed on what some of these systems (e.g., NavCog [101], Cabot [37]) can offer in terms of alerting or avoiding obstacles and informing POIs to the user. Via a turn-taking method, participants were then invited to share their shopping experiences that did not include grocery shopping, whether done in person or online, alone or with someone. To provide more context, participants included any rationale and preferences around their shopping activities. Last, the groups were given a specific scenario: to imagine walking with a guide robot like the one discussed in the study inside a shopping mall. Continuing with a turn-taking method, participants discussed their information needs grounded in this concrete scenario.

With our initial information requirements synthesized from the focus groups, we proceeded to conduct in-depth interviews with a new group comprising 10 blind participants. These sessions aimed to analyze and refine the requirements. Grounded in the same scenario of walking with a guide robot inside a shopping mall, participants were prompted with a list of potential features extracted from the focus groups, divided into three sections: (a) obstacles, (b) POIs, and (c) shops. For each section, participants indicated whether and how they would prefer accessing features of interest in terms of pull and push notifications. For push notifications, participants could indicate whether they wanted toggle controls, e.g., to turn them off in a given context. For pull notifications, participants could indicate the modality of interaction, e.g., asking verbally or pressing a button. Participants also identified any missing features in the list to expand on their information needs.

The one-on-one interviews yielded counts of preferred notifications related to shops, POIs, and obstacles, as well as qualitative data about participants' attitudes toward accessing particular features of the surroundings to enhance future window shopping opportunities. We thematically analyzed the qualitative data obtained from all sessions, which were recorded and transcribed.

3.3. Reflections on Our Study Design

Our original study protocol following the focus groups was meant to have participants test a simulated scenario of being led by a guide robot inside a shopping mall. An experimenter would simulate the response of the robot using a mobile app that conveys relevant audio messages about the surroundings to the participant walking next to the experimenter. The app was built for an in-person experiment and had UI elements such as buttons and dropdowns to read pre-defined information stored in a dictionary according to participants' commands through voice or movements. For instance, the participant could give a verbal 'Read' command to pull promotional information (e.g., 'Spring Sale') or automatically receive push notifications as they slowed down or stopped near the entrance of a store of interest. There was also a 'Search' command to hear information about certain areas/services in the mall. The app had a function to store logs of tapped UI elements for desirable information and included an input field for any additional information that did not exist in the dictionary.
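As a rough illustration, the wizard-of-Oz app logic above could be sketched as follows. This is a minimal sketch under our description only; all function names, locations, and messages are hypothetical stand-ins, not the app's actual implementation.

```python
# Sketch of the simulated robot response: an experimenter triggers pre-defined
# audio messages stored in a dictionary, in response to the participant's
# commands ('Read', 'Search') or movement cues (slowing near a store entrance).
# All keys and messages below are hypothetical examples.

INFO = {
    "store_a_entrance": {"promo": "Spring Sale",
                         "details": "Women's fashion, mid-range prices"},
    "food_court": {"promo": "New cafe now open",
                   "details": "Seating area on the right"},
}

log = []  # the app stored logs of which information was requested

def handle(command, location, moving_slowly=False):
    """Return the audio message for a command at a location, logging it."""
    entry = INFO.get(location, {})
    if command == "read":                 # verbal 'Read' pulls promotional info
        message = entry.get("promo", "No information available")
    elif command == "search":             # 'Search' queries areas/services
        message = entry.get("details", "No information available")
    elif command is None and moving_slowly:
        # Push notification: fired automatically when the participant
        # slows down or stops near the entrance of a store of interest.
        message = entry.get("promo", "")
    else:
        message = ""
    if message:
        log.append((command, location, message))
    return message
```

A session would then reduce to a sequence of `handle` calls driven by the experimenter's taps, with the log capturing which information participants found desirable.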

Unfortunately, the COVID-19 pandemic during our data collection forced us to cancel the in-person portion of this study. Instead, participants were invited for interviews through one-on-one Zoom calls. The app was still used but merely served as a demo of what one would hear in an imagined window shopping scenario in an indoor mall with a floor of retailers located on both sides. We provided a detailed description of the shopping mall and shopping activities, such as passing by retail stores, window displays, signs, potential obstacles, and other pedestrians, to better contextualize the assessment of information needs.

4. FOCUS GROUPS

The results from the focus groups provide data on blind participants’ perspectives, current strategies, and challenges in recreational shopping. To enhance their future window shopping opportunities, we also surface various information needs to formulate initial information requirements.

4.1. Engaging in Recreational Shopping

In person vs. online.

When discussing their in-person and online shopping experiences, all participants from Group 1 preferred in-person recreational shopping as it creates rich sensory experiences that are missing from online shopping:

“When you’re doing in-person shopping and when you have the items in your hand, you’re using all the senses that you can use in replace for vision, whereas if you’re dealing with online shopping, you are not using all of the senses. I think it’s really good to be able to touch and feel.” - (P4)

Group 1 also highlighted accessibility concerns with online shopping platforms, strengthening their preference for in-person recreational shopping. They often have to rely on alt-text descriptions that lack information for a better conceptualization of product selections:

“I try to do some sort of window shopping online. It is so difficult even to find the link that is going to tell you the descriptions...Every site is different and then sometimes the descriptions are so lacking in information even once you read it, you still don’t know what that is going to look like.” - (P1)

Group 2, on the other hand, had a detailed discussion on how online and in-person shopping are preferred depending on the type of merchandise to browse. Online shopping was common for gifts or electronics where they could rely on reviews, whereas in-person shopping was preferred for clothes or shoes, especially for special occasion items (e.g., event gowns):

“I have a niece and nephew and I bought them gifts a lot. And usually they are online because I can’t read those reviews I like reading reviews you know I read like twenty or thirty reviews before I bought them, just to make it’s exactly what I think it is. So shopping online is easier like that. I agree about shoes [to shop in person]. If I’m also buying something fancy if I have to buy something that I have to wear to events, then in-person is ideal. But then again I go with people.” - (P8)

Two participants (P5, P6) also commented on their preference for in-person recreational shopping as a form of “social trips” (P5), emphasizing that they would not engage in it alone:

“My husband and I just took a trip...We do like ‘window shopping’ when it’s nothing that we have to buy. We went to a bunch of wood shops and antique shops and we bought nothing but that was fun. Because he’s fully sighted, it’s like ‘let’s look at this, check that out.’ We did it together but I never would have done that on my own. It becomes not fun.” - (P6)

Assistance from others.

In addition to discussing their preferences, both groups delved into their strategies for in-person recreational shopping. All participants mentioned that they typically have sighted companions like friends or family members. They expressed their need for sighted help for various shopping tasks, from obtaining information about product selections to finding their way to specific locations within stores:

“I always go with somebody if I’m going to buy clothing because I want to know if something looks good, in addition to of course finding out whether it fits and so that means I need to get to a fitting room and all of these kinds of things that are not very easy to do without another person.” - (P3)

Despite seeking sighted companions for recreational shopping, some participants (P1, P2, P6, P8) found it challenging. They instead ask for assistance from sales clerks for product descriptions or mall security guards to navigate between stores. P1 and P2 specifically reported their successful experience with a free personal stylist service at a department store that helped them explore selections of clothes. These choices about whom to approach for assistance could be explained by prior literature [113], which suggests that blind people consider the social costs of requesting favors from close connections (i.e., friends) and often prefer to receive assistance from professionals tasked with assisting.

Two participants (P2, P7) reported another strategy that could mitigate these social costs of “appearing to be dependent” [14]. They search online for information about malls or stores before visiting in person with others, showcasing the potential of virtual navigation to support this approach (e.g., [35]). Yet, they still expressed the need for sighted help to receive additional information beyond what they found in their prior online searches:

“I look on the internet to try to find out about the stores [in the mall] but you can’t always find out what is everywhere. Sometimes if I’m with other ladies, they will tell me what they have...So you have to ask people and they will tell you because there are stores you never heard of.” - (P2)

4.2. Lack of Opportunities for Window Shopping

Both groups discussed their limited opportunities for window shopping, i.e., browsing stores in shopping malls. One major contributing factor was the lack of independent access to information. Many participants (P1, P3, P4, P6, P7) shared their frustration while seeking and receiving information from others:

“The part of the frustration is that of a line of questioning. My husband will say something and then I’ll ask a quick question. He is not thinking as fast as I am about the items that I want to look at. Friends are a bit different, knowing what I want to look at. We are at the mercy of others’ opinion.” - (P7)

All participants thus agreed that having specific target stores or items in mind is essential to receiving the kind of information they desire from people. Otherwise, they constantly need to depend on others and be informed about their surroundings:

“When I go to the mall, I want to know what is there. When I go with a friend, can you tell me what stores we are passing? Any blind person really needs to know where she wants to go.” - (P4)

In addition, two participants (P5, P6) highlighted issues of “sighted people interference” preventing them from engaging in window shopping. They expressed worries about how their actions might be interpreted by retail staff or other shoppers, similar to the concerns blind people have as they negotiate their actions in social contexts (e.g., attending to other pedestrians’ movements to avoid being seen as ‘rude’ if accidentally walking close to them [113]):

“One of the things that I think inhibits me from sometimes shopping in person is in terms of people. If I could just walk around the store and touch stuff and not worry about someone coming over and being offended or coming over and like ‘what are you doing there?’ or me running into somebody, I would be a lot more inclined to do that. Because of sort of the sighted people interference, it’s so hard.” - (P6)

4.3. Information Needs in Window Shopping

Participants discussed the important features of the surroundings that could enhance their window shopping opportunities in the context of walking inside a shopping mall with a guide robot designed to notify these features to the user. Accompanied by a demo of a related system (i.e., Cabot [37]), they were prompted with three key topics to guide their discussions: (a) obstacles, (b) POIs, and (c) shops. We detail their information needs along with the feedback they provided.

Notifying obstacles.

Guide robots can notify users about obstacles on their path during navigation (often via audio messages, e.g., “avoiding a person”), but the relevance of such feedback tends to be context dependent [37]. In our focus groups, we quickly recognized different preferences for notifications about obstacles. Participants often found information related to pedestrians unnecessary, with a few exceptions (P3, P4) of wanting to be notified of crowded areas, situations that could pose a particular threat [15]. Many still preferred to minimize these notifications, partly because they could rely on their ability to detect other pedestrians or the crowd, as “it will usually be noisy” (P2). Similarly, while participants discussed floor changes such as wet floors or carpets as potential obstacles, only one participant (P2) cared for such information. Most participants preferred to use their mobility aids and/or senses to detect them. These preferences perhaps reflect blind users’ concerns about losing spatial awareness when the amount of guidance is overwhelming [101]. In fact, a few participants (P1, P5) cautioned that such notifications could result in information overload:

“No no because I could hear that [pedestrians]. That would be constant in a shopping mall...Get too close to the food court, it would get much louder.” - (P5)

In continuation of their discussion on whether a guide robot should notify obstacles, participants identified certain objects of interest, especially those that are “functional” (P2). For example, both groups referred to trash cans, seating areas, displays, and elevators as potential obstacles found in the middle of the shopping mall floor. They agreed that notifications for these obstacles should depend on individual needs, as “different people need different things” (P3). One idea was to pre-set objects of interest in settings and easily toggle notifications on or off for these objects:

“There are times that you may want to find a chair to sit down in the mall but then there are times when you want to get from A to B place, and you don’t really care about all the obstacles. So the best thing for me would be to have an option to turn on or off.” - (P1)
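The pre-set, per-object controls that participants propose could be sketched as a simple settings map. This is a hypothetical illustration of the idea, not an implemented system; the object categories and defaults are placeholders.

```python
# Sketch of per-object notification toggles: each obstacle category can be
# switched on or off in settings, so a user heading "from A to B" can suppress
# everything, while a user looking for a seat keeps that category on.
# Categories and defaults below are hypothetical examples.

DEFAULT_SETTINGS = {
    "trash can": False,
    "seating area": True,   # e.g., looking for a chair to sit down in the mall
    "display": False,
    "elevator": True,
}

def notify(obstacle, settings):
    """Announce an obstacle only if its category is toggled on."""
    if settings.get(obstacle, False):
        return f"Approaching {obstacle} ahead"
    return None  # suppressed: category toggled off or unknown
```

Flipping a single boolean then switches the system between an information-rich browsing mode and a quiet point-to-point mode, matching the on/off option P1 asks for.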

The need for configuring notifications also became apparent for non-functional obstacles in the environment, such as columns or planters. Often, this was influenced by participants’ mobility aids and navigation practices. Among participants using the white cane, one common preference was to be notified of these obstacles, serving as helpful cues for orientation [121]:

“I think there’s a fine line between not enough information and maybe too much information but I would like that, you know. One of the problems I always had with my note taker for points of interest was that you could get from place to place but you don’t know what you’re passing along the way...The same thing applies to the mall. You don’t know what you don’t know.” - (P8)

In contrast, those navigating with a guide dog typically don’t question the obstacles around them, as guide dogs are taught to find a safe path forward [70]. All guide dog owners in our focus groups except P5 expressed the same expectation when walking with a guide robot:

“If it’s gonna walk me around the planter then because I’m so used to the dog walking me around things and if the guide dog makes a sudden move, you just go. That’s what guide dogs do.” - (P6)

Notifying POIs.

POIs are places that might interest users during navigation [100]. After discussing obstacles in detail, participants expanded on POIs in the shopping mall environment aside from shops. These included restrooms, water fountains, cafes, information counters, and ATMs, all of which aligned with semantic features found in navigation apps for blind people (e.g., NavCog [101]). They found such incidental information useful, as it is typically missing from their interactions with their environments:

“It’s good to know things like... ‘There’s a restroom on your left.’ ‘There’s this cafe on your right.’ Those things are really helpful. When you’re walking past cafe or whatever, that’s great. It’s just sort of incidental information that you would get if you are a sighted person.” - (P6)

Notifying shop details.

Last, both groups discussed their information needs about shops to enhance their window shopping opportunities. All participants agreed that it is important to automatically hear shop names as they walk past the shops and to have control for more detail, such as through “a button on the robot” (P6, P8) to access a second layer of information. For this second layer, they wanted notifications about shop categories such as men’s, women’s, or teen’s fashion to learn the general items or styles the shops carry. Group 2 also discussed ways to convey price ranges, including categorization as “fancy” (P6) or parameters by “dollar signs” (P7). Participants (P2, P3, P4, P8) showed additional interest in promotional information such as sales, new arrivals, or grand openings. With various layers of information about shops, establishing a clear information hierarchy was deemed important:

“I want to have a way to finding out more information if I need it...I need to know the names of all the stores that I’m passing and then when I get to any given store I would like to know, number one what that store has to offer me and number two if it has any special signs such as sales or clearance.” - (P3)

Some participants (P1, P2, P8) also showed interest in knowing about window displays, such as the styles of clothes on mannequins. They found such visual information useful for better understanding the types of merchandise offered by the shops:

“Often if I’m going shopping and say it’s the sort of the beginning of the season, I will say to the clerk, ‘what’s the color of this year,’ ‘what are the main colors of this year’ just to get an idea of what I might expect a lot of.” - (P1)

Yet, others (P3, P4, P5) questioned the usefulness of knowing what is on the window display, since it is mostly visual information that “we can’t really experience” (P5). One possible explanation for their perception of window displays could be linked to the age of onset of blindness. Experiences of aesthetic enjoyment for congenitally blind people can differ from those of people who became blind later in life [65]. Indeed, one of the participants (blind since birth) showed no interest in information intended for aesthetic purposes:

“If the stuff in the display is something that is in the shop that I am going to buy, then it is interesting. If the stuff in the display is something to make it beautiful, like artificial flowers, then I don’t need to know.” - (P3)

5. REQUIREMENTS ANALYSIS

Through one-on-one interviews, we assessed the initial information needs synthesized from the focus groups. To inform the future design of user interfaces to support blind people’s window shopping experiences, we solicited participants’ preferences for accessing information, guided by the following classification of notification attributes derived from the focus group insights:

Push notify.

User is automatically notified of features of the surroundings e.g., restrooms and shop names.

Pull notify.

User requests information relevant to features of the surroundings e.g., promotional information for the nearby shops.

Pre-set notify.

User pre-sets specific features of interest (e.g., chairs & trash cans) to toggle notifications on/off based on individual needs.

No feedback.

User is not interested in information such as pedestrians; this is especially the case for most guide dog owners.
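To make the four notification attributes concrete, the following is a minimal sketch of a per-feature preference model. The class, feature names, and structure are our own hypothetical illustration for discussion, not part of NavCog or Cabot:

```python
from enum import Enum

class NotifyMode(Enum):
    PUSH = "push"      # announce automatically
    PULL = "pull"      # announce only on user request
    PRESET = "preset"  # user toggles the feature on/off in settings
    NONE = "none"      # no feedback

class NotificationPrefs:
    def __init__(self):
        self.modes = {}           # per-feature notification mode
        self.preset_enabled = set()  # PRESET features currently toggled on

    def set_mode(self, feature, mode):
        self.modes[feature] = mode

    def toggle(self, feature, on):
        # Only meaningful for PRESET features (e.g., chairs, trash cans).
        if on:
            self.preset_enabled.add(feature)
        else:
            self.preset_enabled.discard(feature)

    def should_announce(self, feature, requested=False):
        mode = self.modes.get(feature, NotifyMode.NONE)
        if mode is NotifyMode.PUSH:
            return True
        if mode is NotifyMode.PULL:
            return requested
        if mode is NotifyMode.PRESET:
            return feature in self.preset_enabled
        return False

prefs = NotificationPrefs()
prefs.set_mode("restroom", NotifyMode.PUSH)
prefs.set_mode("promotions", NotifyMode.PULL)
prefs.set_mode("trash_can", NotifyMode.PRESET)
prefs.toggle("trash_can", on=True)

print(prefs.should_announce("restroom"))    # True: push is always announced
print(prefs.should_announce("promotions"))  # False: pull waits for a request
print(prefs.should_announce("trash_can"))   # True: preset toggled on
```

A per-feature mode plus a toggle set keeps "no feedback" as the default, matching the observation that unlisted features (e.g., pedestrians for most guide dog owners) should stay silent unless configured.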

Below, we report our analysis of the assessment, which revealed different types of notifications related to obstacles and POIs (Section 5.1) and shops (Section 5.2), specifying the rendering and modality used to access relevant information.

5.1. Types of O&M Notifications

We expand on participants’ responses to features of the surroundings that can serve as important information for orientation and mobility (O&M), including obstacles and POIs. Figure 1 shows their preferences for rendering different types of O&M notifications.

Figure 1: Types of desired notifications for O&M including obstacles and POIs. Bars represent the frequency of responses from 10 blind participants, indicating their preferences regarding push/pre-set notifications or no feedback given a list of prompted features. We also include counts for non-prompted features that participants added to indicate their specific interests.

Non-functional obstacles e.g., columns and planters.

Aligned with the insights from our focus groups, many participants (except P9, P15, P16) preferred to receive no notifications about non-functional obstacles such as columns in the environment if a guide robot is navigating around them. Yet again, these preferences tend to be impacted by mobility aids, as illustrated by P9: “I’m a cane user by default. I’m looking at my environment by cane.” Some participants using a white cane wished for push (P9, P16) or pre-set notifications (P15) about columns serving as cues to better understand their current location and surroundings. Indeed, real-world trajectory data from blind users in prior work [50] revealed that information about surrounding obstacles improves path-following behavior. Similarly, while many participants (except P9, P16, P17) found the notifications about planters unnecessary, the rest of the white cane users preferred push notifications.

Functional obstacles e.g., trash cans and displays.

Similar to our focus group insights, we found varying preferences for notifications given that some obstacles can be functional. While half of the interview participants (P10, P11, P12, P14, P18) indicated no interest in knowing about trash cans in the walkways, the other half favored push notifications for trash cans (P9, P15, P16, P17) or pre-set notifications with toggle controls (P13). In addition, half of the participants (P10, P11, P13, P16, P18), including those who showed no initial interest in being notified, looked for a feature to search for trash cans in the mall and request directional guidance.

Participants also considered displays as functional obstacles, with many (except P11, P12, P13) preferring push notifications for information that could be shown on these displays—e.g., indoor maps or advertisements. The remaining three participants favored pre-set notifications instead for greater control over the information they would receive, especially when passing by multiple displays in the environment.

Low-level obstacles e.g., floor changes and steps.

Only a few participants (P13, P16) preferred push notifications about floor changes such as carpets or tiles. Aligned with our focus group insights, the majority showed no interest in such information as they could notice these changes on their own. Instead, push notifications were deemed preferable for low-level obstacles only when they posed “significant trip hazards” (P14) or were of a “safety-oriented” nature (P11). As examples, participants referred to steps (P10, P11, P14, P16), stairs (P11, P12, P15), and ramps (P10). A few (P9, P12) were also concerned about small items dropped on the floor as unexpected objects, highlighting the need for push notifications.

People e.g., pedestrians and friends.

When asked about potential obstacles, all participants referred to pedestrians. However, relevant notifications for pedestrians differed across participants. Half of the participants (P12, P13, P14, P15, P17) mentioned the importance of push notifications only in particular scenarios when heightened awareness would be crucial. This included navigating through crowded spaces or being near children or those using a wheelchair.

Among the remaining half, two participants (P9, P16), both white cane users, preferred push notifications during navigation around pedestrians, whereas three participants (P10, P11, P18), including two guide dog owners, preferred no notifications. Much like other obstacle notifications, blind people’s current navigation methods may shape their preferences. However, it is crucial to consider the potential long-term effects of using guide robots, particularly as an alternative to traditional mobility aids [37]. Preferences for obstacle notifications could change over time and context, as highlighted by P9, a white cane user: “For people who use a guide dog, they are already used to where guide dogs will go around obstacles...a person or trash can. That is something I am not used to. That would take me time to do.” Pedestrian avoidance also highlighted instances involving specific individuals of interest. All participants agreed to receive push notifications when approached by friends, suggesting a guide robot’s capability to detect pedestrians in such scenarios, as illustrated by P15: “It has happened to me on occasion where I’ll be at a mall and someone recognizes me and comes up to me that knows me and stops me and says hello. The robot might think to turn as obstacles.”

POIs e.g., kiosks, elevators, and seating areas.

As illustrated in Figure 1b, the relevance of POIs is aligned with what is found in navigation apps for blind people [36, 100]. All or most participants preferred push notifications for information not unique to window shopping scenarios, such as information counters, ATMs, restrooms, cafes, or elevators. Some participants suggested kiosks (P13, P14, P15, P17) and food court vendors (P10, P18) as additional POIs relevant to shopping mall environments. All participants also expressed a preference for push notifications to know nearby seating areas. Yet, there were many mixed comments on the desirable methods for receiving relevant information in more dynamic scenarios—e.g., being notified only when benches, chairs, and/or tables were available (P9, P10, P13) or toggling notifications about their availability (P15). A few participants also suggested a search feature for available or closest benches (P11, P14) to avoid repeated notifications in the walkways. Interestingly, P14 thought of integrating pedestrian detection to gauge seat availability in the areas.

5.2. Types of Shop Notifications

We investigate how blind participants’ preferences for shop notifications vary across different layers of information in the context of window shopping (Figure 2). As information depth increases, it becomes crucial to design feedback solutions that do not overload users. We identify relevant information that participants want to access for more in-depth details and further probe their preferences to control the desired level of information.

Figure 2: Types of desired notifications for shop information. Bars represent the frequency of responses from 10 blind participants, indicating their preferences regarding push/pull notifications or no feedback given a list of prompted shop-related features. Counts for non-prompted features indicate their specific interests.

First layer - shop names, categories, and window displays.

Aligned with the focus group discussions, all interview participants preferred to be informed of each shop they passed by, with a suggested format like “[Name of the shop] on your right” included in a first layer of basic information. In addition, the majority of participants (except P10, P11, P15) wanted to know the categories (e.g., men’s, women’s, or teen’s fashion, shoes) along with the names of shops via push notifications. The remaining few participants wanted to pull such short descriptions only for the shops of interest for better control.

Similar to what we observed in the focus group discussions, the interview participants showed a bimodal reaction to information about window displays in nearby shops. Half of them (P9, P14, P15, P16, P18) expressed a desire for push notifications detailing these window displays in the first layer of information—an element that tends to be missing in their shopping experiences, as highlighted by P15: “I would love to hear that sort of thing [visual description]. That is the whole element that I miss when I go by myself. I like to hear those details, what the mannequins are wearing, like cream-colored top.” Given that the remaining half did not want such push notifications, these split preferences point to the need for customization or easy toggles between the first and second layers of information.

Second layer - more details.

All participants agreed on pulling more detailed information on an “as-needed basis,” often for shops they are interested in, to complement the information in the first layer. Their additional comments also surfaced diverse needs for relevant secondary information, including target age groups (P10, P15, P18), target size groups (P10, P18), possible price ranges (P11, P13, P14, P16), shopper ratings (P13), store hours (P9, P14), crowdedness or waiting lines (P12, P16, P18), general styles or items carried (P14, P15, P17), and window displays (P11, P12, P13, P17).

Some also shared their ideas on labeling shops for contextual information. P13 suggested categorizing shops as high-end, midrange, low-end, or discount stores. P14 and P15 referred to the styles of fashion (e.g., dresses, casual/formal, business attire, outerwear, sleepwear, weekend wear) to complement commonly available shop tags of Men’s or Women’s fashion.

Promotional layer - sales, new arrivals, and grand opening.

When asked about preferences for notifications about promotional information extracted from our focus groups, interview responses varied based on the nature of the information and individual interests. While the majority preferred to pull more detail for selected shops regarding sales or new arrivals, some participants (P10, P12, P13, P18) preferred push notifications, especially for sales, which they considered part of the first layer of information. P10 described how “having promotion comes in handy” in determining whether to go inside stores and explore what’s new, as “there could be something out there that you’ve never thought of.” Perhaps for this reason, all participants agreed that they favor push notifications for grand opening signs.

5.3. Interaction Modalities

To address the information requirements of blind people and enhance window shopping opportunities, we are interested in their preferences for interacting with a future guidance system, including the design of pull, pre-set, and search features. We explore user reactions to various notification formats and interaction modalities.

Pull features - button, voice controls, and mobility patterns.

When asked how they would prefer to pull information about shops and other features of the surroundings, many participants (P9, P10, P11, P13, P14, P15) referred to active methods of using buttons (e.g., on a guide robot’s handle) to navigate through shop details. For example, P15 imagined a scenario with a short press of a ‘more information’ button for shop categories and a long press for detailed information about promotions. A few participants (P16, P18) also suggested voice commands to request specific information, such as ‘What are the sales?’ The remaining participants (P12, P17) referred to passive methods; if they stop, slow down, or dwell after hearing a shop name, their movements could trigger the system to pull detailed shop information.

Additionally, P11 articulated the scaffolding of information, particularly with ways to skip or cut off some layers of information: “It would be nice though if there was an option to skip through to specifically go to what I wanted to know, like if I want to know if the shop carries formal wear, that would maybe be like the third thing it says and I might want to skip the second thing. Having that as an option or being able to cut it off maybe like as it starts to say ‘Banana Republic’ and ‘tell me more’ and ‘this is a clothing store’ and I can just be skip skip.”
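The skip interaction P11 describes can be sketched as a layered announcement queue that a ‘more’ command advances and a skip command jumps within. The class and layer labels below are our own illustration, not an implemented interface:

```python
class ShopAnnouncer:
    """Queues one shop's information by layer; supports 'more' and 'skip'."""

    def __init__(self, layers):
        # layers: ordered (label, text) pairs, e.g. name -> category -> details
        self.layers = layers
        self.index = 0

    def current(self):
        # The layer being spoken now, or None past the end.
        if self.index < len(self.layers):
            return self.layers[self.index]
        return None

    def more(self):
        # 'Tell me more': advance to the next layer.
        self.index += 1
        return self.current()

    def skip_to(self, label):
        # Jump directly to a layer of interest, cutting off the rest.
        for i, (name, _) in enumerate(self.layers):
            if name == label:
                self.index = i
                return self.current()
        return None

announcer = ShopAnnouncer([
    ("name", "Banana Republic on your right"),
    ("category", "This is a clothing store"),
    ("details", "Carries formal wear; sale on outerwear"),
])
print(announcer.current()[1])           # first layer: the shop name
print(announcer.skip_to("details")[1])  # skip straight to the third layer
```

Keeping layers as an ordered list means the default flow matches the hierarchy participants described, while `skip_to` covers P11’s “skip skip” shortcut to a specific layer.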

Pre-set features - settings and toggle controls.

Some participants (P12, P13, P18) expanded on how they want to control the types of information for pre-set notifications. They referred to customizing with toggle controls via a smartphone application, as illustrated by P13: “I would probably make it like a setting, like swipe through an iPhone and toggle things that I want. Pedestrians, I wouldn’t necessarily need to know. It might be something that is a choice for the user.” P12 and P18 also suggested a setting to specify ‘my favorite shops’ or ‘my favorite types of shops’ to tailor push notifications such as promotional information only for the shops under these predefined categories.

Search features - voice queries and physical interactions.

Most participants (except P9) preferred to search for specific objects in the environment by voice. P9 felt more comfortable using a menu on a smartphone application. The majority still preferred voice commands to keep their hands free, as they could be using a white cane or a wheelchair. Some participants (P10, P15, P16, P17) also noted that they might have shopping bags in their hands, making it cumbersome to take out and operate a smartphone. Yet, we could expect privacy concerns about voice-search features, as illustrated by P11: “I would want to have a voice activated feature just so because that seems like it’s the easiest but I think also have an another option would be nice. I wouldn’t be embarrassed to say ‘Take me to the restroom’ but I think other people might be when they don’t want to say that out loud in a crowd. Another option is a physical thing with a smartphone, something more discrete.”

6. DISCUSSION

Our research serves as a step toward supporting blind people in recreational shopping, particularly focused on window shopping (e.g., exploration and serendipitous discovery in shopping malls), which can be highly conducive to one’s leisure and social participation [6, 7]. First, through our focus groups with 8 blind participants, we explored the information needs, challenges, and current approaches they have to window shopping. Engaging in such activities was found to be limited, primarily due to a lack of independent access to information. Consequently, they often employed strategies to minimize exploration costs, relying on assistance from store staff and/or friends to effectively search for specific stores and items.

Following our focus groups, we proceeded to conduct one-on-one interviews with 10 additional blind participants and delved deeper into the analysis of requirements to enhance recreational window shopping opportunities. Below, we reflect on the information requirements identified and translate our findings in the context of AI-infused systems and more recent work on generative AI.

6.1. Design Implications for Information

Both the focus groups and the requirements analysis interviews provided insights into interface designs for navigating different types of information, given diverse needs. For example, the relevance of O&M notifications depended on many personal and situational factors, including the mobility aid used and the type of obstacle (e.g., functional, non-functional). For white cane users, receiving notifications about obstacles, even non-functional objects such as columns, can serve as helpful cues for orientation. Indeed, prior work on real-world data indicates that informing participants about surrounding obstacles improves path-following behavior [50]. Furthermore, functional obstacles such as trash cans or chairs can become features of interest when needed. Recognizing these factors, there is a clear need to facilitate the customization of notifications, allowing users to preset their preferences and toggle features of interest for an adaptive notification design.

The relevance of notifications regarding pedestrians in the surroundings was found to be context-dependent, with only a few interview participants considering this information a higher priority via push notifications. Our analysis further revealed situational characteristics that could be integrated into pedestrian detection, such as the presence of crowds, children, or wheelchair users. Surprisingly, to date, the discourse on pedestrian detection for accessibility has focused on detecting individuals (not crowds), adults (not children), and demographics (not mobility aids), though we see recent efforts like the AVA Data Challenge [122] as helpful in this area. The design components could also involve personalizing the recognition system [63], as identifying people of interest (e.g., friends) was deemed critical by our participants in extending the ability of pedestrian detection. Perhaps because such contextual elements have been overlooked, despite the extensive literature on pedestrian detection for blind people, there remains a large gap between the research trends in this field and users’ actual needs [33]. We suggest considering these contextual elements as potential avenues to close this gap.

In terms of information requirements related to shops, it is important to take into account different levels of information-seeking behaviors. Our analysis showed how most participants preferred to receive push notifications for shop names as they walked past shops, along with short descriptions of shop categories (e.g., Women’s fashion) and to have controls to pull more details about the shops. Yet, with preferences for shop-related details varying among participants, scaffolding techniques for shop notifications would be critical. For example, adapting to user preferences could be facilitated by interactive verbosity controls allowing users to skip certain information (P11), or customization settings to prioritize specific information e.g., crowdedness under ‘my favorite shops’ (P12, P18).

We also observed that visual promotional content such as window displays could play an important role in fulfilling recreational and informational purposes in shopping, as exemplified by P18: “What the display is trying to promote, that could be a fashion statement that somebody is making. If it’s for someone who sees it, it just bounces in their head. ‘Oh wow the beach, that swimsuit,’ and they say ‘I think I’ll go in and check that shop out.’ How do you make that something alluring to me.” A great number of techniques have leveraged computer vision to automate the production of textual descriptions of visual content [39, 41, 123], but the results can be limited, especially in providing ‘alluring’ information. One future direction could involve generating contextual descriptions, as demonstrated by Sreedhar et al. [104], who create context-specific alt-text for online retail images. In the setting of window shopping, incorporating review comments and input from store employees who design window displays or communicate the ‘fashion statement’ might be valuable for meaningful descriptions. With advances in generative AI, there is potential to generate narrative and interpretive descriptions that pique curiosity, similar to work being done in storytelling [87, 117].

6.2. Design Implications for Interactions in O&M

Based on the findings, we propose the following user interface designs and considerations to enhance existing wayfinding and navigation systems to support blind recreational window shopping.

Navigation apps in smartphones.

Turn-by-turn navigation apps on smartphones have great potential for window shopping scenarios. In fact, the evaluation study of NavCog [101] surfaced a design suggestion for a “window shopping mode” that enables more frequent notifications related to nearby shops. Going forward, it is important to address the means to navigate the hierarchy of shop information, as our findings highlight first (e.g., shop names and categories) and second (e.g., sales and new arrivals) layers. One way is to incorporate gesture-based interaction, which Kacorri et al. [48] discussed in the context of a mobile interface for blind users to request information while in motion. We can extend this direction based on our participants’ feedback, such as a long press on the touchscreen to activate the second layer. This method can also be complemented with existing swiping gestures (e.g., left and right swipes to interact with POIs on a virtual route for blind people [35]) to access a series of detailed shop information.

One opportunity we anticipate lies in the ability to incorporate data about shops. When designing the Wizard-of-Oz app outlined in the original protocol (see Section 3.3), we did this step manually. We referenced floor plans and a comprehensive list of shops on the shopping mall’s website, which included annotations for shop categories and merchandise types. The official shop websites and social media were also used to collect promotional information. Clearly, such a manual approach is not scalable and struggles to deliver relevant, up-to-date information. Yet, this labor-intensive process of collecting and annotating POIs is common in the implementation of navigation apps [36, 101]. We see an opportunity in recent advancements in generative AI, e.g., by combining large language models with real-time search [24] or augmenting them with internal knowledge bases [86] specific to particular shopping malls.
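To illustrate the knowledge-base direction, the following is a toy sketch of the retrieval step in a retrieval-augmented setup. The entries and the word-overlap scoring are placeholders of our own (a stand-in for a real embedding-based retriever); in practice the retrieved entries would be passed to a language model to phrase the answer:

```python
# Toy mall-specific knowledge base; a real one would be curated per mall.
shop_kb = [
    {"shop": "Shop A", "text": "Women's fashion, mid-range prices, sale on dresses"},
    {"shop": "Shop B", "text": "Men's formal wear, new arrivals this week"},
    {"shop": "Shop C", "text": "Teen fashion, grand opening promotion"},
]

def _tokens(text):
    # Lowercase and strip surrounding punctuation for rough matching.
    return {w.strip(".,?!\"'").lower() for w in text.split()}

def retrieve(query, kb, k=1):
    # Rank entries by word overlap with the query; return the top k.
    q = _tokens(query)
    ranked = sorted(kb, key=lambda e: len(q & _tokens(e["text"])), reverse=True)
    return ranked[:k]

hits = retrieve("any sale on dresses?", shop_kb)
print(hits[0]["shop"])  # Shop A
```

Grounding generation in such per-mall entries is one way to keep promotional answers current without re-annotating the whole app.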

Guide robots.

Building on existing robot guidance systems, such as Cabot [37], we suggest future scenarios to enhance window shopping opportunities. Blind users can physically hold the robot to navigate around obstacles, potentially offering benefits as a ‘robotic guide dog’ [37, 43]. Importantly, by incorporating various sensors, cameras, and machine learning algorithms to detect features of the surroundings, guide robots can be designed to provide relevant information for recreational shopping and communicate it to the user not only through audio but also through haptic feedback.

As informed by our study, the handle can be embedded with physical buttons for primary and secondary actions to access different layers of shop information. Additionally, it can be equipped with a motion command interface, allowing users to request notifications as they slow down or stop to explore nearby shops of interest. Our participants also suggested integrating the robot with a smartphone app to enable customization, such as pre-setting their favorite shops, to change the priority of information (e.g., push notifications for sales only for the pre-defined shops). Furthermore, a voice command for searching POIs, such as seating areas or trash cans, could be an option, as most participants preferred to limit the cumbersome tasks associated with taking out and using their smartphones in combination with guide robots. Yet, as reflected in P11’s comment, “I wouldn’t be embarrassed to say ‘Take me to the restroom’ but I think other people might be when they don’t want to say that out loud in a crowd,” carefully considering social and cultural contexts is critical.
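The mapping from handle-button presses to information layers, as P15 imagined it, could look like the sketch below. The press-duration threshold, function names, and shop fields are hypothetical, chosen only to make the interaction concrete:

```python
# Hypothetical threshold separating a short press from a long press.
LONG_PRESS_SECONDS = 0.8

def classify_press(duration):
    # Map press duration (seconds) to the two actions P15 described.
    return "long" if duration >= LONG_PRESS_SECONDS else "short"

def handle_more_info(duration, shop_info):
    # Short press: shop category (second layer); long press: promotions.
    if classify_press(duration) == "short":
        return f"{shop_info['name']}: {shop_info['category']}"
    return f"{shop_info['name']}: {shop_info['promotions']}"

shop_info = {
    "name": "Shop A",
    "category": "Women's fashion",
    "promotions": "Sale: 20% off outerwear",
}
print(handle_more_info(0.3, shop_info))  # short press -> category
print(handle_more_info(1.0, shop_info))  # long press -> promotions
```

The same dispatcher could later be fed by the motion interface (slowing down or dwelling) instead of a button, since both ultimately select a layer of the shop-information hierarchy.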

Social acceptability related to technology use in recreational window shopping warrants further design considerations. Our focus groups revealed societal barriers to recreational shopping opportunities for blind people, encapsulated by P6 referring to “sighted people interference.” This reflects blind people’s concerns discussed in previous research [4], emphasizing the need for design decisions that can mitigate possible social tensions arising from the interaction with guide robots. Perhaps we could learn from the recent effort by Cai et al. [20] considering social conventions in the design of a robotic guidance system, such as matching its speed with nearby pedestrians or maintaining an appropriate distance from walls.

7. CONCLUSION

Through a combination of focus groups and requirements analysis interviews with 18 blind participants, we identified preferences and requirements for information access to support blind people’s recreational window shopping opportunities. We discerned relevant push notifications, which included visually engaging content like window displays, and user interfaces to manage the delivery of information related to obstacles and POIs. Customization of notifications was deemed necessary due to personal and situational factors, such as the mobility aid used or the presence of crowds.

In this paper, we discussed the implications and potential of generative AI in the context of eliciting information, as well as incorporating recreational window shopping into guide robots and navigation apps for blind people. Moreover, we discussed the broader social context surrounding these opportunities, as participants articulated their negotiation tactics to limit the social costs of in-person shopping, such as searching for information beforehand and seeking assistance from trained store staff. These discussions resonate with ongoing discourse on the future of shopping, connecting online platforms to an elevated in-person experience [76, 120]—e.g., advanced support from sales associates and technology to enrich human interaction. Thus, to create more inclusive in-person recreational experiences for blind people, we aim next to delve into social navigation aspects [31, 63] within guidance systems.

CCS CONCEPTS • Human-centered computing → Accessibility; User studies; • Social and professional topics → People with disabilities.

ACKNOWLEDGMENTS

We would like to thank the blind participants in our study as well as Daisuke Sato for helping us prepare the CaBot demo. This work was sponsored by Shimizu Corporation. Rie Kamikubo and Hernisa Kacorri were additionally supported from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), ACL, HHS (grants #90REGE0008 and #90REGE0024).

Footnotes

1

We adopt the term introduced by Amershi et al. [3] referring to “features harnessing artificial intelligence (AI) capabilities that are directly exposed to the end user.”

ACM Reference Format:

Rie Kamikubo, Hernisa Kacorri, and Chieko Asakawa. 2024. “We are at the mercy of others’ opinion”: Supporting Blind People in Recreational Window Shopping with AI-infused Technology. In Proceedings of the 21st International Web for All Conference, May 13–14, 2024, Singapore. ACM, New York, NY, USA, 14 pages.

Contributor Information

Rie Kamikubo, University of Maryland, College Park.

Hernisa Kacorri, University of Maryland, College Park.

Chieko Asakawa, Carnegie Mellon University.

REFERENCES

[1] Dragan Ahmetovic, Nahyun Kwon, Uran Oh, Cristian Bernareggi, and Sergio Mascetti. 2021. Touch Screen Exploration of Visual Artwork for Blind People. In Proceedings of the Web Conference 2021 (WWW ’21). Association for Computing Machinery, New York, NY, USA, 2781–2791. https://doi.org/10.1145/3442381.3449871

[2] Mrim Alnfiai. 2014. VirtualEyez: Developing NFC Technology to Enable the Visually Impaired to Shop Independently. Ph.D. Dissertation.

[3] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300233

[4] Shiri Azenkot, Catherine Feng, and Maya Cakmak. 2016. Enabling building service robots to guide blind people: A participatory design approach. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 3–10.

[5] Kristina Bäckström. 2006. Understanding recreational shopping: A new approach. International Review of Retail, Distribution and Consumer Research 16, 2 (2006), 143–158.

[6] Stacey Menzel Baker. 2006. Consumer normalcy: Understanding the value of shopping through narratives of consumers with visual impairments. Journal of Retailing 82, 1 (2006), 37–50.

[7] Stacey Menzel Baker, Debra Lynn Stephens, and Ronald Paul Hill. 2002. How can retailers enhance accessibility: Giving consumers with visual impairments a voice in the marketplace. Journal of Retailing and Consumer Services 9, 4 (2002), 227–239.

[8] Maryam Bandukda, Catherine Holloway, Aneesha Singh, and Nadia Berthouze. 2020. PLACES: A Framework for Supporting Blind and Partially Sighted People in Outdoor Leisure Activities. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’20). Association for Computing Machinery, New York, NY, USA, Article 29, 13 pages. https://doi.org/10.1145/3373625.3417001

[9] BlindSquare. 2012. BlindSquare iOS Application. https://www.blindsquare.com/

[10] Peter H. Bloch, Nancy M. Ridgway, and James E. Nelson. 1991. Leisure and the shopping mall. (1991).

[11] Peter H. Bloch, Nancy M. Ridgway, and Daniel L. Sherrell. 1989. Extending the concept of shopping: An investigation of browsing activity. Journal of the Academy of Marketing Science 17 (1989), 13–21.

[12] Roger Boldu, Denys J. C. Matthies, Haimo Zhang, and Suranga Nanayakkara. 2020. AiSee: An Assistive Wearable Device to Support Visually Impaired Grocery Shoppers. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 4 (2020), 1–25.
  • [13].Bradley Robert, Hopkins T, Bailey JM, Bradley R, Hopkins T, Bailey JM, and Hopkins Teresa. 2000. A study of the influence of visual impairment on the purchase of clothing. British Journal of Visual Impairment 18, 2 (2000), 79–81. [Google Scholar]
  • [14].Brady Erin L., Zhong Yu, Morris Meredith Ringel, and Bigham Jeffrey P. 2013. Investigating the appropriateness of social network question asking as a resource for blind users. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW ‘13). Association for Computing Machinery, New York, NY, USA, 1225–1236. 10.1145/2441776.2441915 [DOI] [Google Scholar]
  • [15].Branham Stacy M., Abdolrahmani Ali, Easley William, Scheuerman Morgan, Ronquillo Erick, and Hurst Amy. 2017. “Is Someone There? Do They Have a Gun”: How Visual Information about Others Can Improve Personal Safety Management for Blind Individuals. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘17). Association for Computing Machinery, New York, NY, USA, 260–269. 10.1145/3132525.3132534 [DOI] [Google Scholar]
  • [16].Brown Michael and Lubelczyk Matt. 2018. The future of shopping centers. https://www.es.kearney.com/consumer-retail/article/-/insights/the-future-of-shopping-centers-article Accessed: 2024-01-30. [Google Scholar]
  • [17].Brown Michael and Thomas Katie. 2021. Killing Dracula, birthing Frankenstein’s monster, and the future of the mall. https://www.kearney.com/industry/consumer-retail/article/-/insights/killing-dracula-birthing-frankensteins-monster-and-the-future-of-the-mall Accessed: 2024-01-30. [Google Scholar]
  • [18].Burns Nicola, Paterson Kevin, and Watson Nick. 2008. Exploring disabled people’s perceptions and use of forest recreation goods, facilities and services in Scotland, England and Wales. Glasgow: Strathclyde Centre for Disability Research, Department of Sociology, Anthropology and Applied Social Sciences, University of Glasgow (2008). [Google Scholar]
  • [19].Burns Nicola, Paterson Kevin, and Watson Nick. 2009. An inclusive outdoors? Disabled people’s experiences of countryside leisure services. Leisure Studies 28, 4 (2009), 403–417. [Google Scholar]
  • [20].Cai Shaojun, Ram Ashwin, Gou Zhengtai, Wasim Shaikh Mohd Alqama, Chen Yu-An, Wan Yingjia, Hara Kotaro, Zhao Shengdong, and Hsu David. 2024. Navigating Real-World Challenges: A Quadruped Robot Guiding System for Visually Impaired People in Diverse Environments. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. ACM. [Google Scholar]
  • [21].Cheraghi Seyed Ali, Namboodiri Vinod, and Walker Laura. 2017. GuideBeacon: Beacon-based indoor wayfinding for the blind, visually impaired, and disoriented. In 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom). IEEE, 121–130. [Google Scholar]
  • [22].Dakopoulos Dimitrios and Bourbakis Nikolaos G. 2009. Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 40, 1 (2009), 25–35. [Google Scholar]
  • [23].Danziger Pamela N. 2020. Malls Are Struggling Now, But There’s Hope For A Retail Renaissance Post-COVID. https://www.forbes.com/sites/pamdanziger/2020/08/27/prospects-for-malls-are-bright-post-covid-get-ready-for-a-mall-renaissance/?sh=345bac3c14fa Accessed: 2024-01-31. [Google Scholar]
  • [24].Davis Wes. 2023. ChatGPT can now search the web in real time. https://www.theverge.com/2023/9/27/23892781/openai-chatgpt-live-web-results-browse-with-bing Accessed: 2024-04-30. [Google Scholar]
  • [25].Devile Eugenia and Kastenholz Elisabeth. 2020. Accessible tourism experiences: the voice of people with visual disabilities. In Social Tourism at the Crossroads. Routledge, 84–104. [Google Scholar]
  • [26].Devine Mary Ann. 2015. Leveling the playing field: Perspectives of people with disabilities on reasonable accommodations received to engage in public recreation. Disability Studies Quarterly 35, 3 (2015). [Google Scholar]
  • [27].Diaz-Gutierrez Jorge Manuel, Mohammadi-Mavi Helia, and Ranjbari Andisheh. 2023. COVID-19 Impacts on Online and In-Store Shopping Behaviors: Why they Happened and Whether they Will Last Post Pandemic. Transportation Research Record (2023). 10.1177/03611981231155169 [DOI] [Google Scholar]
  • [28].Hedhli Kamel El, Chebat Jean-Charles, and Sirgy M Joseph. 2013. Shopping well-being at the mall: Construct, antecedents, and consequences. Journal of business research 66, 7 (2013), 856–863. [Google Scholar]
  • [29].Elgendy Mostafa, Sik-Lanyi Cecilia, and Kelemen Arpad. 2019. Making shopping easy for people with visual impairment using mobile assistive technologies. Applied Sciences 9, 6 (2019), 1061. [Google Scholar]
  • [30].Farrag Dalia A, El Sayed Ismail M, and Belk Russell W. 2010. Mall shopping motives and activities: a multimethod approach. Journal of International Consumer Marketing 22, 2 (2010), 95–115. [Google Scholar]
  • [31].Francis Anthony, Pérez-d’Arpino Claudia, Li Chengshu, Xia Fei, Alahi Alexandre, Alami Rachid, Bera Aniket, Biswas Abhijat, Biswas Joydeep, Chandra Rohan, et al. 2023. Principles and guidelines for evaluating social robot navigation algorithms. arXiv preprint arXiv:2306.16740 (2023). [Google Scholar]
  • [32].Galatas Georgios, McMurrough Christopher, Mariottini Gian Luca, and Makedon Fillia. 2011. eyeDog: an assistive-guide robot for the visually impaired. In Proceedings of the 4th international conference on pervasive technologies related to assistive environments. 1–8. [Google Scholar]
  • [33].Gamage Bhanuka, Do Thanh-Toan, Chiang Price Nicholas Seow, Lowery Arthur, and Marriott Kim. 2023. What do Blind and Low-Vision People Really Want from Assistive Smart Devices? Comparison of the Literature with a Focus Study. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘23). Association for Computing Machinery, New York, NY, USA, Article 30, 21 pages. 10.1145/3597638.3608955 [DOI] [Google Scholar]
  • [34].Gleason Cole, Fiannaca Alexander J., Kneisel Melanie, Cutrell Edward, and Morris Meredith Ringel. 2018. FootNotes: Geo-referenced Audio Annotations for Nonvisual Exploration. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 3, Article 109 (sep 2018), 24 pages. 10.1145/3264919 [DOI] [Google Scholar]
  • [35].Guerreiro João, Ahmetovic Dragan, Kitani Kris M, and Asakawa Chieko. 2017. Virtual navigation for blind people: Building sequential representations of the real-world. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 280–289. [Google Scholar]
  • [36].Guerreiro João, Ahmetovic Dragan, Sato Daisuke, Kitani Kris, and Asakawa Chieko. 2019. Airport accessibility and navigation assistance for people with visual impairments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–14. [Google Scholar]
  • [37].Guerreiro João, Sato Daisuke, Asakawa Saki, Dong Huixu, Kitani Kris M, and Asakawa Chieko. 2019. CaBot: Designing and Evaluating an Autonomous Navigation Robot for Blind People. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 68–82. [Google Scholar]
  • [38].Guha Abhijit, Grewal Dhruv, Kopalle Praveen K, Haenlein Michael, Schneider Matthew J, Jung Hyunseok, Moustafa Rida, Hegde Dinesh R, and Hawkins Gary. 2021. How artificial intelligence will affect the future of retailing. Journal of Retailing 97, 1 (2021), 28–41. [Google Scholar]
  • [39].Gurari Danna, Zhao Yinan, Zhang Meng, and Bhattacharya Nilavra. 2020. Captioning images taken by people who are blind. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVII 16. Springer, 417–434. [Google Scholar]
  • [40].Hall Sarah A. 2009. The social inclusion of people with disabilities: a qualitative meta-analysis. Journal of ethnographic & qualitative research 3, 3 (2009). [Google Scholar]
  • [41].Hanley Margot, Barocas Solon, Levy Karen, Azenkot Shiri, and Nissenbaum Helen. 2021. Computer vision and conflicting values: Describing people with automated alt text. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. 543–554. [Google Scholar]
  • [42].Holloway Leona, Marriott Kim, Butler Matthew, and Borning Alan. 2019. Making Sense of Art: Access for Gallery Visitors with Vision Impairments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 20. [Google Scholar]
  • [43].Hwang Hochul, Jung Hee-Tae, Giudice Nicholas A, Biswas Joydeep, Lee Sunghoon Ivan, and Kim Donghyun. 2024. Towards Robotic Companions: Understanding Handler-Guide Dog Interactions for Informed Guide Dog Robot Design. (2024). [Google Scholar]
  • [44].Bartolome Jorge Iranzo, Quero Luis Cavazos, Kim Sunhee, Um Myung-Yong, and Cho Jundong. 2019. Exploring Art with a Voice Controlled Multimodal Guide for Blind People. In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction. 383–390. [Google Scholar]
  • [45].Iso-Ahola Seppo E and Park Chun J. 1996. Leisure-related social support and self-determination as buffers of stress-illness relationship. Journal of Leisure research 28, 3 (1996), 169–187. [Google Scholar]
  • [46].Jaarsma Eva A, Dekker Rienk, Koopmans Steven A, Dijkstra Pieter U, and Geertzen Jan HB. 2014. Barriers to and facilitators of sports participation in people with visual impairments. Adapted Physical Activity Quarterly 31, 3 (2014), 240–264. [DOI] [PubMed] [Google Scholar]
  • [47].Jain Dhruv. 2014. Pilot evaluation of a path-guided indoor navigation system for visually impaired in a public museum. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ‘14). Association for Computing Machinery, New York, NY, USA, 273–274. 10.1145/2661334.2661405 [DOI] [Google Scholar]
  • [48].Kacorri Hernisa, Mascetti Sergio, Gerino Andrea, Ahmetovic Dragan, Alampi Valeria, Takagi Hironobu, and Asakawa Chieko. 2018. Insights on assistive orientation and mobility of people with visual impairment based on large-scale longitudinal data. ACM Transactions on Accessible Computing (TACCESS) 11, 1 (2018), 1–28. [Google Scholar]
  • [49].Kacorri Hernisa, Mascetti Sergio, Gerino Andrea, Ahmetovic Dragan, Takagi Hironobu, and Asakawa Chieko. 2016. Supporting orientation of people with visual impairment: Analysis of large scale usage data. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility. 151–159. [Google Scholar]
  • [50].Kacorri Hernisa, Ohn-Bar Eshed, Kitani Kris M., and Asakawa Chieko. 2018. Environmental Factors in Indoor Navigation Based on Real-World Trajectories of Blind Users. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ‘18). Association for Computing Machinery, New York, NY, USA, 1–12. 10.1145/3173574.3173630 [DOI] [Google Scholar]
  • [51].Kameswaran Vaishnav and Muralidhar Srihari Hulikal. 2019. Cash, Digital Payments and Accessibility-A Case Study from India. In ACM Conference on Computer Supported Cooperative Work, Vol. 3. [Google Scholar]
  • [52].Kamikubo Rie, Wang Lining, Marte Crystal, Mahmood Amnah, and Kacorri Hernisa. 2022. Data Representativeness in Accessibility Datasets: A Meta-Analysis. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘22). Association for Computing Machinery, New York, NY, USA, Article 8, 15 pages. 10.1145/3517428.3544826 [DOI] [Google Scholar]
  • [53].Katzschmann Robert K, Araki Brandon, and Rus Daniela. 2018. Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, 3 (2018), 583–593. [DOI] [PubMed] [Google Scholar]
  • [54].Kayukawa Seita, Sato Daisuke, Murata Masayuki, Ishihara Tatsuya, Takagi Hironobu, Morishima Shigeo, and Asakawa Chieko. 2023. Enhancing Blind Visitor’s Autonomy in a Science Museum Using an Autonomous Navigation Robot. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ‘23). Association for Computing Machinery, New York, NY, USA, Article 541, 14 pages. 10.1145/3544548.3581220 [DOI] [Google Scholar]
  • [55].Khan Dawar, Cheng Zhanglin, Uchiyama Hideaki, Ali Sikandar, Asshad Muhammad, and Kiyokawa Kiyoshi. 2022. Recent advances in vision-based indoor navigation: A systematic literature review. Computers & Graphics 104 (2022), 24–45. [Google Scholar]
  • [56].Kim Jee-Eun, Bessho Masahiro, Kobayashi Shinsuke, Koshizuka Noboru, and Sakamura Ken. 2016. Navigating visually impaired travelers in a large train station using smartphone and bluetooth low energy. In Proceedings of the 31st Annual ACM Symposium on Applied Computing. 604–611. [Google Scholar]
  • [57].Kuribayashi Masaki, Kayukawa Seita, Vongkulbhisal Jayakorn, Asakawa Chieko, Sato Daisuke, Takagi Hironobu, and Morishima Shigeo. 2022. Corridor-Walker: Mobile indoor walking assistance for blind people to avoid obstacles and recognize intersections. Proceedings of the ACM on Human-Computer Interaction 6, MHCI (2022), 1–22. [Google Scholar]
  • [58].Kwon Nahyun, Koh Youngji, and Oh Uran. 2019. Supporting Object-level Exploration of Artworks by Touch for People with Visual Impairments. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘19). Association for Computing Machinery, New York, NY, USA, 600–602. 10.1145/3308561.3354620 [DOI] [Google Scholar]
  • [59].Labbé Delphine, Miller William C, and Ng Ruby. 2019. Participating more, participating better: Health benefits of adaptive leisure for people with disabilities. Disability and Health Journal 12, 2 (2019), 287–295. [DOI] [PubMed] [Google Scholar]
  • [60].Lacey Gerard and MacNamara Shane. 2000. Context-aware shared control of a robot mobility aid for the elderly blind. The International Journal of Robotics Research 19, 11 (2000), 1054–1065. [Google Scholar]
  • [61].Lakomy Martin, Hlavová Renata, Machackova Hana, Bohlin Gustav, Lindholm Maria, Bertero Michela G, and Dettenhofer Markus. 2020. The motivation for citizens’ involvement in life sciences research is predicted by age and gender. PLoS One 15, 8 (2020), e0237140. [Google Scholar]
  • [62].Lanigan Patrick E, Paulos Aaron M, Williams Andrew W, Rossi Dan, and Narasimhan Priya. 2006. Trinetra: Assistive Technologies for Grocery Shopping for the Blind.. In ISWC. 147–148. [Google Scholar]
  • [63].Lee Kyungjun, Sato Daisuke, Asakawa Saki, Asakawa Chieko, and Kacorri Hernisa. 2021. Accessing passersby proxemic signals through a head-worn camera: opportunities and limitations for the blind. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility. 1–15. [Google Scholar]
  • [64].Li Bing, Munoz J Pablo, Rong Xuejian, Xiao Jizhong, Tian Yingli, and Arditi Aries. 2016. ISANA: wearable context-aware indoor assistive navigation with obstacle avoidance for the blind. In European Conference on Computer Vision. Springer, 448–462. [Google Scholar]
  • [65].Li Franklin Mingzhe, Zhang Lotus, Bandukda Maryam, Stangl Abigale, Shinohara Kristen, Findlater Leah, and Carrington Patrick. 2023. Understanding Visual Arts Experiences of Blind People. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ‘23). Association for Computing Machinery, New York, NY, USA, Article 60, 21 pages. 10.1145/3544548.3580941 [DOI] [Google Scholar]
  • [66].Lieberman Lauren and Stuart Moira. 2002. Self-determined recreational and leisure choices of individuals with deaf-blindness. Journal of Visual Impairment & Blindness 96, 10 (2002), 724–735. [Google Scholar]
  • [67].Lieberman Lauren J, Haibach-Beach Pamela, Perreault Melanie, and Stribing Alex. 2023. Outdoor recreation experiences in youth with visual impairments: A qualitative inquiry. Journal of Adventure Education and Outdoor Learning 23, 2 (2023), 170–183. [Google Scholar]
  • [68].Lindner Jannik. 2023. Must-Know Mall Traffic Statistics [Recent Analysis]. https://gitnux.org/mall-traffic-statistics/ Accessed: 2024-02-08. [Google Scholar]
  • [69].Liu Hong, Wang Jun, Wang Xiangdong, and Qian Yueliang. 2015. iSee: obstacle detection and feedback system for the blind. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (UbiComp/ISWC’15 Adjunct). Association for Computing Machinery, New York, NY, USA, 197–200. 10.1145/2800835.2800917 [DOI] [Google Scholar]
  • [70].Lloyd Janice KF, Grow Steven La, Stafford Kevin J, and Budge R Claire. 2008. The guide dog as a mobility aid part 1: Perceived effectiveness on travel performance. Vision Rehabilitation International 1, 1 (2008), 17–33. [Google Scholar]
  • [71].Loi Kim Ieng and Kong Weng Hang. 2017. Tourism for all: Challenges and issues faced by people with vision impairment. Tourism Planning & Development 14, 2 (2017), 181–197. [Google Scholar]
  • [72].Ipiña Diego López-de, Lorido Tania, and López Unai. 2011. Blindshopping: enabling accessible shopping for visually impaired people through mobile technologies. In International Conference on Smart Homes and Health Telematics. Springer, 266–270. [Google Scholar]
  • [73].Manca Luigino. 2017. Article 30 [Participation in cultural life, recreation, leisure and sport]. In The United Nations convention on the rights of persons with disabilities: A commentary. Springer, 541–555. [Google Scholar]
  • [74].Megalingam Rajesh Kannan, Vishnu Souraj, Sasikumar Vishnu, and Sreekumar Sajikumar. 2019. Autonomous path guiding robot for visually impaired people. In Cognitive Informatics and Soft Computing. Springer, 257–266. [Google Scholar]
  • [75].Microsoft. 2024. Microsoft Soundscape. https://www.microsoft.com/en-us/research/product/soundscape/ Accessed: 2024-01-30. [Google Scholar]
  • [76].Moore Kaleigh. 2023. Future of Malls: How Retailers Can Adapt in 2024. https://www.shopify.com/retail/future-of-malls Accessed: 2024-01-31. [Google Scholar]
  • [77].Müller Karin, Engel Christin, Loitsch Claudia, Stiefelhagen Rainer, and Weber Gerhard. 2022. Traveling More Independently: A Study on the Diverse Needs and Challenges of People with Visual or Mobility Impairments in Unfamiliar Indoor Environments. ACM Trans. Access. Comput. 15, 2, Article 13 (may 2022), 44 pages. 10.1145/3514255 [DOI] [Google Scholar]
  • [78].Murata Masayuki, Ahmetovic Dragan, Sato Daisuke, Takagi Hironobu, Kitani Kris M, and Asakawa Chieko. 2018. Smartphone-based indoor localization for blind navigation across building complexes. In 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom). IEEE, 1–10. [Google Scholar]
  • [79].Nanavati Amal, Tan Xiang Zhi, and Steinfeld Aaron. 2018. Coupled Indoor Navigation for People Who Are Blind. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 201–202. [Google Scholar]
  • [80].Ng Cheuk Fan. 2003. Satisfying shoppers’ psychological needs: From public market to cyber-mall. Journal of Environmental Psychology 23, 4 (2003), 439–455. [Google Scholar]
  • [81].Nicholson John, Kulyukin Vladimir, and Coster Daniel. 2009. ShopTalk: independent blind shopping through verbal route directions and barcode scans. The Open Rehabilitation Journal 2, 1 (2009), 11–23. [Google Scholar]
  • [82].Offenwanger Anna, Milligan Alan John, Chang Minsuk, Bullard Julia, and Yoon Dongwook. 2021. Diagnosing Bias in the Gender Representation of HCI Research Participants: How it Happens and Where We Are. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ‘21). Association for Computing Machinery, New York, NY, USA, Article 399, 18 pages. 10.1145/3411764.3445383 [DOI] [Google Scholar]
  • [83].OrCam. 2020. OrCam MyEye. https://www.orcam.com/en-us/orcam-myeye Accessed: 2024-01-30. [Google Scholar]
  • [84].Pressman Sarah D, Matthews Karen A, Cohen Sheldon, Martire Lynn M, Scheier Michael, Baum Andrew, and Schulz Richard. 2009. Association of enjoyable leisure activities with psychological and physical well-being. Psychosomatic medicine 71, 7 (2009), 725. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [85].Presti Giorgio, Ahmetovic Dragan, Ducci Mattia, Bernareggi Cristian, Ludovico Luca, Baratè Adriano, Avanzini Federico, and Mascetti Sergio. 2019. WatchOut: Obstacle sonification for people with visual impairment or blindness. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility. 402–413. [Google Scholar]
  • [86].Ram Ori, Levine Yoav, Dalmedigos Itay, Muhlgay Dor, Shashua Amnon, Leyton-Brown Kevin, and Shoham Yoav. 2023. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics 11 (2023), 1316–1331. [Google Scholar]
  • [87].Ravi Hareesh, Kafle Kushal, Cohen Scott, Brandt Jonathan, and Kapadia Mubbasir. 2021. Aesop: Abstract encoding of stories, objects, and pictures. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2052–2063. [Google Scholar]
  • [88].Real Santiago and Araujo Alvaro. 2019. Navigation systems for the blind and visually impaired: Past work, challenges, and open problems. Sensors 19, 15 (2019), 3404. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [89].Rector Kyle. 2017. Enhancing exercise for people who are blind or low vision using interactive technology. Interactions 24, 5 (2017), 68–71. [Google Scholar]
  • [90].Rector Kyle, Milne Lauren, Ladner Richard E., Friedman Batya, and Kientz Julie A. 2015. Exploring the Opportunities and Challenges with Exercise Technologies for People who are Blind or Low-Vision. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ‘15). Association for Computing Machinery, New York, NY, USA, 203–214. 10.1145/2700648.2809846 [DOI] [Google Scholar]
  • [91].Reese Scott. 2020. Integrating Retail Online/Offline Experiences in the “New Normal”. https://multichannelmerchant.com/blog/integrating-retail-online-offline-experiences-in-the-new-normal/ Accessed: 2024-01-31. [Google Scholar]
  • [92].Reichinger Andreas, Fuhrmann Anton, Maierhofer Stefan, and Purgathofer Werner. 2016. Gesture-based interactive audio guide on tactile reliefs. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility. 91–100. [Google Scholar]
  • [93].Ren Peng, Lam Jonathan, Manduchi Roberto, and Mirzaei Fatemeh. 2023. Experiments with RouteNav, A Wayfinding App for Blind Travelers in a Transit Hub. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘23). Association for Computing Machinery, New York, NY, USA, Article 2, 15 pages. 10.1145/3597638.3608428 [DOI] [Google Scholar]
  • [94].Rimmer James H. 2005. The conspicuous absence of people with disabilities in public fitness and recreation facilities: lack of interest or lack of access? American Journal of Health Promotion 19, 5 (2005), 327–329. [DOI] [PubMed] [Google Scholar]
  • [95].Rimmer James H. 2006. Building inclusive physical activity communities for people with vision loss. Journal of Visual Impairment & Blindness 100, 1_suppl (2006), 863–865. [Google Scholar]
  • [96].Rodríguez Alberto, Yebes J Javier, Alcantarilla Pablo F, Bergasa Luis M, Almazán Javier, and Cela Andrés. 2012. Assisting the visually impaired: obstacle detection and warning system by acoustic feedback. Sensors 12, 12 (2012), 17476–17496. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [97].Rothenberg Eva. 2023. The US mall is not dying. https://www.cnn.com/2023/08/20/business/shopping-mall-retail-growth/index.html Accessed: 2024-01-30. [Google Scholar]
  • [98].Saha Manaswi, Fiannaca Alexander J., Kneisel Melanie, Cutrell Edward, and Morris Meredith Ringel. 2019. Closing the Gap: Designing for the Last-Few-Meters Wayfinding Problem for People with Visual Impairments. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘19). Association for Computing Machinery, New York, NY, USA, 222–235. 10.1145/3308561.3353776 [DOI] [Google Scholar]
  • [99].Salminen Anna-Liisa and Karhula Maarit E. 2014. Young persons with visual impairment: Challenges of participation. Scandinavian journal of occupational therapy 21, 4 (2014), 267–276. [DOI] [PubMed] [Google Scholar]
  • [100].Sato Daisuke, Oh Uran, Guerreiro João, Ahmetovic Dragan, Naito Kakuya, Takagi Hironobu, Kitani Kris M, and Asakawa Chieko. 2019. NavCog3 in the wild: Large-scale blind indoor navigation assistant with semantic features. ACM Transactions on Accessible Computing (TACCESS) 12, 3 (2019), 1–30. [Google Scholar]
  • [101].Sato Daisuke, Oh Uran, Naito Kakuya, Takagi Hironobu, Kitani Kris, and Asakawa Chieko. 2017. NavCog3: An evaluation of a smartphone-based blind indoor navigation assistant with semantic features in a large-scale environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility. 270–279. [Google Scholar]
  • [102].Sheth Jagdish. 2020. Impact of Covid-19 on consumer behavior: Will the old habits return or die? Journal of Business Research 117 (2020), 280–283. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [103].Snider Mike. 2023. ‘Death of the mall is widely exaggerated’: Shopping malls see resurgence post-COVID, report shows. https://www.usatoday.com/story/money/shopping/2023/08/28/future-of-shopping-malls-in-america/70649848007/ Accessed: 2024-01-30. [Google Scholar]
  • [104].Sreedhar Rachana, Tan Nicole, Zhang Jingyue, Jin Kim, Gregson Spencer, Moreta-Feliz Eli, Samudrala Niveditha, and Sadalgi Shrenik. 2022. AIDE: automatic and accessible image descriptions for review imagery in online retail. In Proceedings of the 19th International Web for All Conference. 1–8. [Google Scholar]
  • [105].Stangl Abigale J., Kothari Esha, Jain Suyog D., Yeh Tom, Grauman Kristen, and Gurari Danna. 2018. BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘18). Association for Computing Machinery, New York, NY, USA, 107–118. 10.1145/3234695.3236337 [DOI] [Google Scholar]
  • [106].Stearns Lee, Findlater Leah, and Froehlich Jon E. 2018. Applying Transfer Learning to Recognize Clothing Patterns Using a Finger-Mounted Camera. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘18). Association for Computing Machinery, New York, NY, USA, 349–351. 10.1145/3234695.3241015 [DOI] [Google Scholar]
  • [107].Stearns Lee Stephan. 2018. HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments. Ph.D. Dissertation. [Google Scholar]
  • [108].Stephens Kate, Butler Matthew, Holloway Leona M, Cagatay Goncu, and Marriott Kim. 2020. Smooth Sailing? Autoethnography of Recreational Travel by a Blind Person. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘20). Association for Computing Machinery, New York, NY, USA, Article 26, 12 pages. 10.1145/3373625.3417011 [DOI] [Google Scholar]
  • [109]. Swaine Bonnie, Labbé Delphine, Poldma Tiiu, Barile Maria, Fichten Catherine, Havel Alice, Kehayia Eva, Mazer Barbara, McKinley Patricia, and Rochette Annie. 2014. Exploring the facilitators and barriers to shopping mall use by persons with disabilities and strategies for improvements: Perspectives from persons with disabilities, rehabilitation professionals and shopkeepers. Alter 8, 3 (2014), 217–229.
  • [110]. Szczepańska Agnieszka and Pietrzyka Katarzyna. 2021. The COVID-19 epidemic in Poland and its influence on the quality of life of university students (young adults) in the context of restricted access to public spaces. Journal of Public Health (2021), 1–11.
  • [111]. Tachi Susumu and Komoriya Kiyoshi. 1984. Guide dog robot. Autonomous Mobile Robots: Control, Planning, and Architecture (1984), 360–367.
  • [112]. Tapu Ruxandra, Mocanu Bogdan, and Zaharia Titus. 2020. Wearable assistive devices for visually impaired: A state of the art survey. Pattern Recognition Letters 137 (2020), 37–52.
  • [113]. Thieme Anja, Bennett Cynthia L., Morrison Cecily, Cutrell Edward, and Taylor Alex S. 2018. “I can do everything but see!” – How People with Vision Impairments Negotiate their Abilities in Social Contexts. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, 1–14. 10.1145/3173574.3173777
  • [114]. Thoo Yong-Joon, Medina Maximiliano Jeanneret, Froehlich Jon E., Ruffieux Nicolas, and Lalanne Denis. 2023. A Large-Scale Mixed-Methods Analysis of Blind and Low-vision Research in ACM and IEEE. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '23). Association for Computing Machinery, New York, NY, USA, Article 20, 20 pages. 10.1145/3597638.3608412
  • [115]. Tobita Kazuteru, Sagayama Katsuyuki, and Ogawa Hironori. 2017. Examination of a Guidance Robot for Visually Impaired People. Journal of Robotics and Mechatronics 29, 4 (2017), 720–727.
  • [116]. Tullio-Pow Sandra, Yu Hong, and Strickfaden Megan. 2021. Do You See What I See? The shopping experiences of people with visual impairment. Interdisciplinary Journal of Signage and Wayfinding 5, 1 (2021), 42–61.
  • [117]. Uehara Kohei, Mori Yusuke, Mukuta Yusuke, and Harada Tatsuya. 2022. ViNTER: Image narrative generation with emotion-arc-aware transformer. In Companion Proceedings of the Web Conference 2022. 716–725.
  • [118]. Wang Xiyue, Kayukawa Seita, Takagi Hironobu, and Asakawa Chieko. 2022. BentoMuseum: 3D and Layered Interactive Museum Map for Blind Visitors. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '22). Association for Computing Machinery, New York, NY, USA, Article 35, 14 pages. 10.1145/3517428.3544811
  • [119]. Wang Ziqi, Guo Bin, Wang Qianru, Zhang Daqing, and Yu Zhiwen. 2023. Mobi-Eye: An Efficient Shopping-Assistance System for the Visually Impaired With Mobile Phone Sensing. IEEE Transactions on Human-Machine Systems (2023).
  • [120]. Weintraub Marty. 2020. The future of the mall – Building a new kind of destination for the post-pandemic world. https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/consumer-industrial-products/ca-future-of-the-mall-en-AODA.pdf Accessed: 2024-01-31.
  • [121]. Williams Michele A., Galbraith Caroline, Kane Shaun K., and Hurst Amy. 2014. “just let the cane hit it”: how the blind and sighted see navigation differently. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS '14). Association for Computing Machinery, New York, NY, USA, 217–224. 10.1145/2661334.2661380
  • [122]. CVPR 2024 Workshop. 2024. AVA: Accessibility, Vision, and Autonomy Meet. https://accessibility-cv.github.io/ Accessed: 2024-05-08.
  • [123]. Wu Shaomei, Wieland Jeffrey, Farivar Omid, and Schiller Julie. 2017. Automatic alt-text: Computer-generated image descriptions for blind users on a social network service. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 1180–1192.
  • [124]. Xia Lan. 2010. An examination of consumer browsing behaviors. Qualitative Market Research: An International Journal 13, 2 (2010), 154–173.
  • [125]. Yang Xiaodong, Yuan Shuai, and Tian YingLi. 2014. Assistive clothing pattern recognition for visually impaired people. IEEE Transactions on Human-Machine Systems 44, 2 (2014), 234–243.
  • [126]. Yu Hong, Tullio-Pow Sandra, and Akhtar Ammar. 2015. Retail design and the visually impaired: A needs assessment. Journal of Retailing and Consumer Services 24 (2015), 121–129.
  • [127]. Yuan Chien Wen, Hanrahan Benjamin V., Lee Sooyeon, Rosson Mary Beth, and Carroll John M. 2019. Constructing a holistic view of shopping with people with visual impairment: a participatory design approach. Universal Access in the Information Society 18 (2019), 127–140.
  • [128]. Zhao Yuhang, Szpiro Sarit, Knighten Jonathan, and Azenkot Shiri. 2016. CueSee: exploring visual cues for people with low vision to facilitate a visual search task. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 73–84.
  • [129]. Zientara Peter A., Lee Sooyeon, Smith Gus H., Brenner Rorry, Itti Laurent, Rosson Mary B., Carroll John M., Irick Kevin M., and Narayanan Vijaykrishnan. 2017. Third Eye: A shopping assistant for the visually impaired. Computer 50, 2 (2017), 16–24.