Telematics and Informatics. 2021 Dec 22;68:101765. doi: 10.1016/j.tele.2021.101765

A multi-method analytical approach to predicting young adults’ intention to invest in mHealth during the COVID-19 pandemic

Najmul Hasan a, Yukun Bao a,*, Raymond Chiong b
PMCID: PMC8693780  PMID: 34955594

Abstract

Mobile-based health (mHealth) systems are proving to be a popular alternative to traditional visits to healthcare providers. They can also be useful and effective in fighting the spread of infectious diseases such as COVID-19. Even though young adults are the most prevalent mHealth user group, the relevant literature has overlooked their intention to invest in and use mHealth services. This study aims to investigate the predictors that influence young adults’ intention to invest in mHealth (IINmH), particularly during the COVID-19 crisis, by designing a research methodology that incorporates both the health belief model (HBM) and the expectation-confirmation model (ECM). As an extension of the integrated HBM-ECM model, this study proposes two additional predictors: mobile Internet speed and mobile Internet cost. A multi-method analytical approach, including partial least squares structural equation modelling (PLS-SEM), fuzzy-set qualitative comparative analysis (fsQCA), and machine learning (ML), was applied to a sample of 558 respondents. The dataset, on young adults in Bangladesh with experience of using mHealth, was obtained through a structured questionnaire to examine the complex causal relationships of the integrated model. The findings from PLS-SEM indicate that value-for-money, mobile Internet cost, health motivation, and confirmation of service all have a substantial impact on young adults’ IINmH during the COVID-19 pandemic. At the same time, the fsQCA results indicate that combinations of predictors, rather than any individual predictor, had a significant impact on predicting IINmH. Among the ML methods, the XGBoost classifier outperformed the other classifiers in predicting IINmH and was then used to perform a sensitivity analysis to determine the relevance of the features. We expect this multi-method analytical approach to make a significant contribution to the mHealth domain as well as the broader information systems literature.

Keywords: mHealth, Young adults, Integrated information systems model, Multi-method analytical approach, SEM-fsQCA-ML

1. Introduction

The COVID-19 pandemic has had catastrophic effects on both communities and governments (Hasan, 2020, Mao et al., 2020). High incidence and mortality rates, as well as rapid changes in biological and epidemiological patterns, have had a major impact on healthcare services. The necessity of strong measures for controlling the pandemic and the resulting negative economic consequences have caused dilemmas for policymakers. Measures that governments should take to save lives can often be difficult to implement (Guo et al., 2021). A deep understanding of these dilemmas can help healthcare providers and policymakers identify healthcare alternatives for combating the COVID-19 pandemic, as well as others in the future (Mao et al., 2020). This can be critical for implementing effective measures, particularly in developing and least developed countries with high population density, weak healthcare services, and a scarcity of required resources (Shammi et al., 2020). In this regard, a combination of digital healthcare systems and remote care strategies might help control, prevent, and combat the transmission of communicable diseases (Asadzadeh & Kalankesh, 2021). Mobile health (mHealth) can provide alternative approaches for controlling any pandemic by creating awareness, providing remote consultation and minimising patient referrals, supporting contact tracing, and thereby helping to mitigate the further expansion of an epidemic (Nachega et al., 2020).

These days, mHealth applications are regularly used by the general population, government, and crisis management organisations (Guo et al., 2021). Governments and healthcare providers use mHealth technologies, such as instant messaging, patient surveillance, and contact tracing, to check the spread of infectious diseases. An mHealth app (Flu-Report) was used to monitor influenza patients using a self-reported questionnaire (Fujibayashi et al., 2018). A health-monitoring app for detecting the Zika virus was used to continuously monitor the health status of the Spanish Olympic delegation (Rodriguez-Valero et al., 2018). A mobile system for rapid diagnosis of the Middle East respiratory syndrome (MERS) has also been developed (Shirato et al., 2020). More recently, various government departments, including healthcare and epidemic control, are utilising mHealth applications for diagnosis and screening, contact tracing, recording movement, sending awareness messages, and combating misinformation (Asadzadeh & Kalankesh, 2021). Thus, mHealth can be beneficial for patients, healthcare professionals, and policymakers alike by further improving healthcare services.

Young adults (19–34 years) are the consumer group with the greatest potential for mHealth services, with a greater tendency to download mHealth apps than other age groups (Altmann & Gries, 2017). The flexibility and accessibility of mobile technology platforms can attract them towards mHealth and, thus, connect them to the healthcare system (Slater et al., 2017). Usage of mHealth applications can be a powerful self-care management strategy for young adults. However, young adults are generally overlooked in favour of older age groups by conventional health policies, since they have less contact with health information exchange or interaction with health practitioners (Nikolaou et al., 2019). Despite this, the use of mHealth technology among young adults is increasing. For instance, in China, most mHealth users are well-educated and city-dwelling young adults (To et al., 2019). Young medical practitioners extensively use mHealth applications as clinical resources to provide adequate healthcare services. In Africa, despite limited mHealth services, young people are making innovative and strategic use of mobile telephones to ensure efficient healthcare services (Kathuria-Prakash et al., 2019). Sharpe et al. (2017) found that mHealth apps can go beyond conventional healthcare behaviour changes. Furthermore, the limited resources and funds available to young adults might make mHealth an appealing alternative for accessing the healthcare system (Hampshire et al., 2015). While long-term usage of mHealth applications is beneficial, young adults often abandon the apps for financial reasons. Many users who downloaded and used an mHealth app but then uninstalled it complained that it was too expensive or had a poor user experience (Murnane et al., 2015). However, another study found the cost of the app to have a statistically significant positive association with younger participants’ intentions to invest more money in mHealth apps (Somers et al., 2019). Despite widespread acceptance and understanding of the deployment of mHealth, there seems to be a dearth of evaluations of this technology, particularly in the post-adoption phases (O’Connor, Andreev, & O’Reilly, 2020). In this context, understanding and encouraging the post-adoption of mHealth services by young adults is important for ensuring the success of mHealth policy and improving overall access to healthcare services. While prior research has attempted to determine users' intention to accept and utilise mHealth (Altmann and Gries, 2017, Hampshire et al., 2015, Sittig et al., 2020, Talukder et al., 2019), there are only a few studies about young adults' intentions to adopt mHealth technology. Therefore, this study aims to address the following research question: “What factors influence young adults’ intention to invest in mHealth (IINmH) technology, especially during the COVID-19 pandemic?”.

Recent studies have investigated intentions to use mHealth by applying various technology acceptance models, such as the Extended Unified Theory of Acceptance and Use of Technology Model (UTAUT2) (Hoque & Sorwar, 2017), Technology Acceptance Model (TAM) (Alsswey & Al-Samarraie, 2020), and Health Belief Model (HBM) (Alhalaseh et al., 2020). However, none of these studies has been conducted from the perspective of young adults. This study attempts to bridge this gap in the literature by addressing the cognitive factors that might influence the intention of young adults to use mHealth services. This study incorporates the HBM (Alhalaseh et al., 2020, Rosenstock et al., 1988) and the Expectation Confirmation Model (ECM) (Chiu et al., 2020, Oliver, 1980) to provide a holistic interpretation of young adults' intention to use mHealth services from a behavioural aspect. Prior studies have used the HBM as a theoretical basis for understanding individuals' intentions to use different mHealth applications (Alhalaseh et al., 2020, Puspita et al., 2017), while the ECM has been used to examine the factors that affect individuals' devotion to using mHealth apps (Chiu et al., 2020, Tam et al., 2020). Moreover, most prior studies have employed a single model with a limited number of factors, resulting in a relatively lower capability for explaining users' intentions from a particular viewpoint. In this regard, this paper investigates the drivers that influence young adults’ intention to use mHealth apps by developing a research methodology that incorporates both the HBM and the ECM. The two models could complement each other, and an integrated model could mitigate the drawbacks of the single model by allowing a better understanding of young adults’ IINmH.

Methodologically, most prior studies on mHealth have applied a single-stage data analytical approach, particularly partial least squares structural equation modelling (PLS-SEM) (Alsswey and Al-Samarraie, 2020, Chiu et al., 2020, Hoque and Sorwar, 2017, Tam et al., 2020). Lee et al. (2020) pointed out that a single-stage PLS-SEM analysis might only capture the linear relationships between the antecedents within a research framework, and this approach might be insufficient to predict complex decision-making processes in real-world problems. Others have attempted to mitigate this constraint by performing a second-stage data analysis using fuzzy-set qualitative comparative analysis (fsQCA) and/or an artificial neural network (ANN). However, fsQCA has certain drawbacks in setting the truth table’s threshold values, since there is no universally accepted rule for choosing them; different threshold values can result in solutions with different degrees of sufficient consistency (Roig-Tierno et al., 2017). Similarly, the majority of ANN analyses (Alam et al., 2021, Lee et al., 2020, Talukder et al., 2020) have employed a single hidden layer for training the model, which is referred to as a superficial form of ANN (Lee et al., 2020). This has resulted in a growing attraction towards alternative methodologies, such as machine learning (ML), for generating more profound insights (Kaya et al., 2020). Given this, we intend to contribute to the current literature by implementing a multi-stage SEM-fsQCA-ML analytical approach that can better capture non-linear and asymmetric relationships owing to its improved learning capacity.

The rest of this paper is organised as follows. An extensive literature review is provided in Section 2, following which we present the research model and hypotheses in Section 3. Then, in Section 4, we discuss the methods used in this study and provide the statistical analysis with major findings in Section 5. The overall results and their implications are provided in Sections 6 and 7, respectively. Finally, we draw conclusions and discuss the limitations of the study and potential future research in Section 8.

2. Theoretical background

2.1. Health belief model (HBM)

The HBM, an explanatory paradigm, was initially introduced by a team of social psychologists from the US Public Health Service in the 1950s, and was applied to health promotion (Ataei et al., 2021). Afterwards, it was refined to explain why citizens refused to take preventive measures for diseases (Janz & Becker, 1984). It has also been used to evaluate the perceptions of patients while making health-related decisions. This paradigm provides a strong foundation for understanding people's intrinsic desire to engage in behavioural action. It is a standard paradigm in healthcare studies for understanding and predicting public health practice and preventive healthcare behaviour. According to the HBM, individuals are more willing to take an initiative that overcomes barriers if the action leads to benefits, such as reduced vulnerability to a serious illness (C.C & Prathap, 2020). The HBM is a broadly recognised framework in health behaviour studies that illustrates how health-related behaviours improve, sustain, and strive towards an optimal quality of living (Wei et al., 2020). In line with recent research, the intention to invest in healthcare may be seen as a preventive health behaviour that tends to decrease the likelihood of risky outcomes, e.g., contracting the COVID-19 virus (García and Cerda, 2020, Nembrini et al., 2020). Given the need to use remote healthcare facilities during a pandemic, this research explores various antecedents such as perceived susceptibility, perceived barriers, perceived severity of COVID-19, and perceived benefits. Five constructs make up the HBM: perceived susceptibility, perceived severity, health motivation, perceived barriers, and perceived benefits (Fig. 1). This model posits that people are more willing to make decisions pertaining to their own fitness practice because they want to remain healthy and believe that their practice will facilitate and support their well-being (Ataei et al., 2021).

Fig. 1. Research model and hypotheses.

The HBM has been proven to be applicable for studying the adoption of new health technologies in the literature. Numerous researchers have examined technology-enabled health behaviour. For example, Ahadzadeh et al. (2015) discussed health-related Internet usage by integrating the HBM, TAM, and UTAUT. As part of the investigation into patients’ adoption of smartphone health technologies, Dou et al. (2017) used various hypotheses and theories, including the HBM, to better understand patients’ attitudes and perspectives. Shang et al. (2021) employed the HBM to explore older adults’ intentions to exchange health knowledge on social media. Daragmeh et al. (2021) examined customers’ prospects of using an E-wallet service, employing the HBM and Technology Continuance Theory (TCT) during the COVID-19 pandemic. Therefore, the HBM is an ideal tool to further understand young adults’ investment intention regarding mHealth during the COVID-19 crisis.

2.2. Expectation confirmation model (ECM)

The ECM has gained substantial interest for studying post-acceptance behaviour in the information systems (IS) domain over the last decade. This framework has become popular for demonstrating customers' satisfaction and their willingness to use an IS repeatedly (Bhattacherjee, 2001, Venkatesh et al., 2011). The ECM was developed by Bhattacherjee (2001) and is inspired by the expectation-confirmation theory (Oliver, 1980), which has been widely used in the business field to assess customer loyalty and post-purchase behaviour through belief, consequence, and motivation in the use of IS. Bhattacherjee associated IS users’ continuation decisions with customers’ repurchase decisions, noting that both include the steps of (1) initial experience, (2) initial acceptance or purchase, and (3) the post-decision to repurchase the product or service. The degree of service confirmation is considered one of the three primary predictors (performance value, confirmation of service, and value-for-money) and the most influential indicator of users' intention to invest in mHealth technology (Leung & Chen, 2019). The ECM has consistently proven itself as a common paradigm for predicting consumer behaviour in numerous studies, including the intention to invest in mobile app purchases (Hsu & Lin, 2015), the intention to use a mobile device in the hospital waiting room (Reychav et al., 2021), and the acceptance of smart wearable devices (Park, 2020). However, only a few mHealth studies have utilised the ECM, such as an analysis of the continuation of mHealth services in Bangladesh (Akter et al., 2013).

Users arrive at their repurchase intentions following a chain mechanism. Users will have formed an expectation before purchasing the products or services. Immediately after usage, users develop an impression regarding the effectiveness of the products or services and compare them with their expectations (Wu et al., 2020). Their degree of happiness is determined by how well their expectations match their perceived performance, and afterward, they make repurchase decisions or discontinue the service. Although behavioural models such as the TAM and UTAUT have seen widespread usage in assessing technology adoption rates, these models do not clarify how people use technology at the initial acceptance stage (Chiu et al., 2020). With the increasing prevalence of health-related apps, the ECM framework is crucial to analysing users’ investment decisions in mHealth technologies. At the same time, the ECM serves as an essential framework to understand individuals' decisions to invest in new technologies (Pee et al., 2018).

2.3. Motivation behind integrating the HBM and ECM

The prevention of COVID-19 outbreaks primarily includes following hygienic lifestyles and coping with social distancing due to the lack of vaccinations or treatment plans. Society is shifting to contactless healthcare systems to prepare people to protect themselves during periods of crisis. To ensure patient safety, healthcare providers globally are preparing to implement a virtualised treatment strategy that eliminates the need for physical visits with patients and health professionals (Webster, 2020).

The use of mHealth technology could be seen as a preventive strategy (a form of social distancing). Beliefs about the threat of the epidemic and one’s anxieties about vulnerability to outbreaks would significantly impact the implementation of preventive health interventions (switching from a physical visit to a virtual consultation). The HBM (Rosenstock et al., 1988) is a theoretical paradigm that might be used to direct interventions intended to promote well-being and prevent disease (Fathian-Dastgerdi et al., 2021). It can assist in explaining and predicting changes in users' health behaviours. Previously, the HBM was employed empirically to investigate individuals’ beliefs and behaviour towards seasonal influenza and pandemic swine flu vaccinations (Santos et al., 2017). Additionally, this model was also used to predict the acceptance of and willingness to invest in COVID-19 treatment (Wong et al., 2021). Although the HBM helps in understanding mHealth investment decisions, it captures only some of the many factors involved in investing in technology. Therefore, the HBM alone is insufficient to explain consumers’ willingness to invest in mHealth technologies. Moreover, a single framework with a limited number of factors could be inadequate to explain individuals’ post-adoption decisions (Chiu et al., 2020). On the other hand, to investigate what factors affect mHealth technology investment decisions, the ECM, designed initially to explain repurchase decisions (Oliver, 1980), was extended to the technological context. Following the implementation of mHealth, investment decisions will be influenced by additional factors (such as satisfaction or monetary value), which must be investigated. Thus, this study incorporates the HBM and ECM to explore this research problem in the context of mHealth investment decision-making. As a result of integrating the HBM and ECM, this research provides a more detailed structure and extends the existing literature on consumers' investment decisions in mHealth applications.

3. Research model and hypotheses development

The research model presented in Fig. 1 incorporates the dimensions of two well-established theories: the HBM (Janz & Becker, 1984) and modified ECM (Hsu & Lin, 2015). These theoretical models take into account the factors that may affect an individual’s decision-making strategy. These dimensions include all intrinsically linked aspects and highlight the decision-making process in a particular context. The following section will discuss the development of the hypotheses based on these predictors.

3.1. Perceived susceptibility

The term “perceived susceptibility” refers to an individual's subjective assessment of the probability or likelihood of suffering from a health problem (Janz & Becker, 1984). Since the use of digital services becomes increasingly vital during a pandemic, the current research considered the perceived susceptibility to COVID-19 as a predictor of young adults’ investment intentions in mHealth technologies (C.C & Prathap, 2020). According to several scholars, perceived susceptibility correlates significantly with healthy behaviour, and understanding susceptibility will motivate people to mitigate threats and make investment decisions (Huang et al., 2020). Considering COVID-19, young people are more likely to invest in effective mobile healthcare systems, for example, after recognising their susceptibility to getting infected during a physical visit to a healthcare facility. Thus, we hypothesise the following:

H1: Perceived susceptibility to COVID-19 has a significant impact on investment intentions in mHealth technology.

3.2. Perceived severity

The term “perceived severity” applies to an individual's assessment of the seriousness of a particular problem (Janz & Becker, 1984), as well as perceptions about the adverse effects of getting an infection (Wong et al., 2020), and this assessment may result in appropriate prevention measures. Individuals may take preventive measures against sickness if they think it may have harmful effects (Huang et al., 2020). Individuals are more willing to develop a robust protective motivation for investment decisions when the consequences are substantial, as avoiding the adverse effects is deemed more beneficial. According to Wong et al. (2020), most people believed the COVID-19 virus was severe, and these perceptions of severity were significantly associated with an increased willingness to invest in mHealth. Thus, the following hypothesis is proposed:

H2: Perceived severity of COVID-19 has a significant impact on the intention to invest in mHealth technology.

3.3. Health motivation

Health motivation is defined as an intense desire to practice healthy behaviour in order to avoid health problems, such as eating healthy, living in a healthy atmosphere, and paying for health benefits. While this factor was not originally introduced in the HBM, Becker (1974) argued that it should be included in the model. According to Janz and Becker (1984), cues to action cover a wide variety of factors that influence an individual’s motivation for a particular activity and help formulate a decision for health benefits. In Becker's view, motivation for healthy living includes the willingness to be proactive about it, and he, therefore, included it in the HBM. A person’s motivation to undertake a healthy behaviour can be divided into three categories: people’s beliefs, modified factors, and probability of practice (McKellar & Sillence, 2020). The last component of the motivation is easily transformed into potential promises, such as investing in healthcare technology. Therefore, we hypothesised the following:

H3: Health motivation regarding COVID-19 has a significant impact on the intention to invest in mHealth technology.

3.4. Perceived benefits

Perceived benefits refer to the beneficial effects associated with the adoption of healthy lifestyles (Janz & Becker, 1984). According to the concept of HBM, potential benefits would motivate people to engage in digital healthcare services. Perceived benefits typically involve individual and societal preventive health habits, including home self-quarantine to avoid unnecessary hospital costs and social quarantine to stop transmitting the disease within the community (Fathian-Dastgerdi et al., 2021). People are far more willing to follow proactive health behaviours if they feel the perceived benefits outweigh the perceived barriers. People should indeed believe that the intervention will be beneficial and trust that if it is taken, then the adverse health condition can be avoided (Mou et al., 2016). When people believe that utilising mHealth technologies could alleviate a health problem or enhance their current health condition, they are more willing to use the latest technology. Therefore, we hypothesise:

H4: Perceived benefits have a strong, significant positive effect on the intention to invest in mHealth technology.

3.5. Perceived barriers

Perceived barriers can be defined as the factors that discourage health-promoting activities and may hinder the acceptance of a desired action or new health behaviour (Green et al., 2020). Barriers may include any negative attribute, such as cost, risk, alarm, inconvenience, or irritation. In the current context, perceived barriers represent factors that might hinder or deter individuals from investing in mHealth technologies. Understanding and identifying possible barriers would effectively increase engagement in disease prevention initiatives (Julinawati et al., 2013). If barriers are identified and resolved, investments in emerging technologies for health benefits can be reconsidered (Wang et al., 2021). In some instances, perceived barriers in healthcare negatively affect healthcare actions and discourage investment in mHealth technologies. Thus, we propose the following hypothesis:

H5: Perceived barriers have a significant negative influence on the intention to invest in mHealth technology.

3.6. Performance value

The performance of mHealth is confirmed through an assessment of users’ feedback and the fulfilment of specific expectations for continued use of the service. Better performance can encourage post-adoption beliefs such as the intention to invest in emerging technologies (Bhattacherjee, 2001). Additionally, the confirmation of technology performance could be crucial in determining whether to continue using an IS. Numerous studies have examined the use of digital platforms or paid mobile apps using the ECM as a theoretical foundation (Hsu & Lin, 2015). Recent studies have shown a correlation between investment intention and emerging health technology (C.C & Prathap, 2020). More specifically, investments in mHealth applications can help reduce physical visits to medical centres and help prevent the spread of infections such as COVID-19. As a result, recognition of mHealth performance can be regarded as a critical prerequisite for investment decisions on mHealth. Thus, the following hypothesis is formed:

H6: mHealth performance value is strongly related to the intention to invest in mHealth technology.

3.7. Confirmation of service

Confirmation refers to the users' expectations about the effects of mHealth usage and whether the system actually performs as expected (Bhattacherjee, 2001). According to Joo and Choi (2016), confirmation of service seems to affect the continuous intention to use IS. Confirming customer expectations on mHealth services will increase customer satisfaction, enhance user loyalty, and encourage investment in new technology. When the customer's initial expectations are achieved or even surpassed, that will further user engagement and promote continued service usage (Venkatesh & Goyal, 2010). Based on the ECM, this study investigates the impact of confirmation of service on mHealth technology users’ investment decisions, and the following hypothesis is proposed:

H7: Confirmation of mHealth service significantly influences the intention to invest in mHealth technologies.

3.8. Value-for-money

Perceived value is an attribute that benefits marketers, as it is considered a multi-dimensional aspect of consumer value, which includes value-for-money. In other words, value-for-money is the utility that considers both short-term and long-term cost savings associated with the adoption of emerging technologies (Shang & Wu, 2017). An encouraging experience resulting from value-for-money contributes to a more favourable behavioural intention. Chen (2008) and Rajaguru (2016) found a strong relationship between users' perceived value-for-money and their purchasing intention for new services. Considering that mHealth has been recognised as a concept of consumer behaviour, this research incorporates value-for-money (Sweeney & Soutar, 2001) to further clarify mHealth users’ intention to invest in the technology. There has been controversy and little research in the literature about how the value-for-money of mHealth technologies influences customers’ behavioural intentions. We believe that a user’s IINmH is determined by the technology’s perceived value-for-money and hypothesise that:

H8: Value-for-money has a significant positive influence on the intention to invest in mHealth technologies.

3.9. Internet speed

Slow Internet speed is a leading cause of everyday annoyance for Bangladeshi mobile Internet users. Most rural areas have insufficient Internet access relative to their urban counterparts. According to Index (2021), the average mobile Internet speed in Bangladesh is 11.32 Mbps and the average broadband Internet speed is 36.02 Mbps, which is significantly lower than the global average mobile Internet speed of 53.38 Mbps and broadband Internet speed of 102.12 Mbps. In 2021, Bangladesh was ranked 132nd for mobile Internet speed out of 134 countries and 99th for broadband Internet speed among 176 countries. Given the low speed of mobile network connectivity in Bangladesh, mHealth services face a serious obstacle to success. Users may experience buffering and delays while using mHealth technologies because of the slower Internet speeds. There is a strong association between Internet speed and Internet-based services (Chiu et al., 2017). We believe that high-quality Internet access impacts mHealth acceptance and the willingness to invest in it. Therefore, we propose the following hypothesis:

H9: Internet speed has a significant positive influence on the intention to invest in mHealth technologies.

3.10. Internet cost

Internet cost is another major determinant when considering the adoption of, and intention to continue using, online services (Nethananthan et al., 2018). Because of the high cost of the Internet in developing and least developed countries, online services have struggled, as overall service, set-up, and operational costs have increased (Chiu et al., 2017). While a large share of Bangladesh's population lives below the poverty line, mobile Internet is very expensive (Dutta & Smita, 2020). In a recent survey by Islam et al. (2020), out of a total of 13,525 young participants, 55.3% were classified as belonging to lower and middle-income households, while 88% reported using the Internet every day for more than two hours. The majority, 64.2% of young adults, did not attend any online classes, and surprisingly, 65.7% of them did not play any online games, which might be due to the high Internet cost. Consequently, cost perceptions have a significant detrimental impact on the intention to invest in mobile Internet-based services. Thus, we hypothesised:

H10: Mobile Internet cost has a significant influence on the intention to invest in mHealth technologies.

4. Methods

4.1. Sample and data collection

The target population consists of young people who are or were active users of mHealth apps in Bangladesh during the COVID-19 pandemic. The G*Power (ver. 3.1.9.4) (Faul et al., 2009) tool was used to determine the required sample size. A total of 543 responses was required for multiple linear regression with ten predictors, high power (1 − β = 0.95), a low probability of error (α = 0.05), and a small effect size (f2 = 0.02). Data was collected using a non-probability convenience sampling technique (Iqbal & Iqbal, 2020). The participants were not asked for any identifying information, and confidentiality was thus ensured. The survey was completely anonymous, and the data obtained was kept strictly confidential. Each session began with an overview of the research objective and a brief description of mHealth apps, after which the participants’ oral consent to take part in the study was obtained. A self-reported survey questionnaire containing 34 items, including demographic information, was distributed to target participants via online channels (Facebook/WhatsApp groups of medical colleges, nursing institutes, etc.), and data was also collected directly at physical exercise centres, healthcare centres, nursing training institutes, and certain medical college hospitals, as well as from certain young mHealth users known to the researchers. Throughout the survey period (October 2020 to February 2021), a total of 583 responses were received. Initially, the data was thoroughly reviewed for fraudulent content. A multivariate outlier test based on the Mahalanobis distance (Hair et al., 2010) was undertaken to detect potential outliers. Consequently, we retained 558 valid responses for further analysis. We implemented a two-step protocol to mitigate possible sampling bias. First, young mHealth users from across the country were selected as the source samples. Second, duplicate responses by the same participant were prohibited. We conducted a pilot study that included the first 150 responses to assess the internal reliability of the items. The findings revealed that Cronbach's alpha, which represents the reliability of a measurement, was above the recommended threshold of 0.70 (Hu & Bentler, 1998).

4.2. Measurement instruments

The survey questionnaire consists of two sections. The first section includes demographic information about the participants, while the second section includes 34 items from existing theories that investigate respondents' IINmH. The following demographic information was collected: age, gender, education, mHealth usage behaviour, and experience in using mHealth. Additionally, we inquired about participants' chronic disease conditions. The constructs and their corresponding items were taken from the relevant literature to confirm content validity. We adopted survey instruments for eleven constructs. Among them, perceived susceptibility (three items), perceived severity (three items), and perceived barriers (three items) relating to COVID-19, and perceived benefits (four items), were adopted and modified from C.C and Prathap (2020), Daragmeh et al. (2021), and Janz and Becker (1984). The health motivation (three items) measurements were derived from Becker (1974). The items of the ECM framework, namely performance value (four items), confirmation of service (three items), value-for-money (four items), and intention to invest (three items), were adapted from Hsu and Lin (2015). Additionally, the scales for Internet speed and Internet cost were derived from Chiu et al. (2017) and Islam et al. (2020), respectively. Details of the items can be found in the supplementary document. Moreover, to ensure consistent interpretation, the questionnaire was initially developed in English and then translated into Bengali using a back-translation method (Brislin, 1970). Two health informatics specialists and fifteen mHealth app users thoroughly reviewed the Bengali version of the questionnaire to ensure readability. Preliminary observations indicated that the Bengali questionnaire was correctly completed by the participants without any confusion or concerns regarding readability. The respondents were asked to rate their feelings on a 5-point Likert scale, with 1 meaning strongly disagree and 5 meaning strongly agree.

5. Results

5.1. Multivariate assumptions

Prior to validating the modelling process, we initially conducted a preliminary test of the multivariate assumptions to verify that the research results were valid and trustworthy (Leong et al., 2019).

5.1.1. Normality test

For parametric statistics, it is assumed that data is normally distributed; therefore, a normal distribution test is required. This study assessed the normality of the data using the one-sample Kolmogorov–Smirnov test (Leong et al., 2019). The results of the test, as shown in Table 1, illustrate that all p-values of the Kolmogorov–Smirnov test statistics are < 0.05, which confirms that none of the predictors in the research model is normally distributed. Therefore, we employed variance-based PLS in our study, since it is robust to non-normal distributions.

Table 1.

One-sample Kolmogorov–Smirnov test for normality assessment.

Construct N Mean Std. Deviation Absolute Positive Negative Kolmogorov–Smirnov Z Asymp. Sig. (P-Value, 2-tailed)
PSus 558 3.4438 1.03149 0.157 0.076 -0.157 0.157 0.000
PSev 558 3.2855 1.01853 0.106 0.067 -0.106 0.106 0.000
HM 558 3.7210 0.74605 0.155 0.082 -0.155 0.155 0.000
PBe 558 2.8920 0.82888 0.143 0.081 -0.143 0.143 0.000
PBa 558 4.0287 0.76632 0.189 0.102 -0.189 0.189 0.000
CS 558 3.3035 0.75076 0.102 0.086 -0.102 0.102 0.000
VfM 558 3.3616 0.72341 0.104 0.053 -0.104 0.104 0.000
PeV 558 3.4789 0.80763 0.104 0.070 -0.104 0.104 0.000
MIS 558 3.5681 0.89273 0.177 0.119 -0.177 0.177 0.000
MIC 558 3.5224 0.93118 0.158 0.100 -0.158 0.158 0.000
IINmH 558 3.4863 0.71942 0.149 0.075 -0.149 0.149 0.000

a. Test distribution is Normal.

b. Calculated from data.

c. Lilliefors Significance Correction.

Note: PSus = Perceived susceptibility, PSev = Perceived severity, HM = Health motivation, PBe = Perceived benefits, PBa = Perceived barriers, CS = Confirmation of service, VfM = Value-for-money, PeV = Performance value, MIS = Mobile Internet speed, MIC = Mobile Internet cost, IINmH = Intention to invest in mHealth technology.
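For readers who wish to reproduce this check outside SPSS, the sketch below shows one way to run a Lilliefors-corrected Kolmogorov–Smirnov test per construct in Python; the file name and column names are assumptions, not part of the original study materials.

```python
# A minimal sketch of the normality check in Table 1, assuming construct mean scores
# are stored column-wise in a CSV file (hypothetical file and column names).
import pandas as pd
from statsmodels.stats.diagnostic import lilliefors

df = pd.read_csv("mhealth_survey.csv")  # hypothetical construct-score file

for construct in ["PSus", "PSev", "HM", "PBe", "PBa", "CS",
                  "VfM", "PeV", "MIS", "MIC", "IINmH"]:
    stat, p_value = lilliefors(df[construct], dist="norm")  # K-S test with Lilliefors correction
    print(f"{construct}: D = {stat:.3f}, p = {p_value:.3f}")  # p < 0.05 -> reject normality
```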

5.1.2. Linearity test

Analysis of variance (ANOVA) was applied to test for linear relationships between the variables, using the p-values at the 0.05 significance level (Ooi et al., 2018). Table 2 reveals that both linear and non-linear relationships are present in the data.

Table 2.

Deviation from linearity test.

Pair Sum of Squares df Mean Square F P-Value Linear
IINmH * PSus 8.413 11 0.765 1.556 0.108 Yes
IINmH * PSev 3.872 11 0.352 0.719 0.721 Yes
IINmH * HM 10.281 10 1.028 2.156 0.019 No
IINmH * PBe 11.693 15 0.780 1.535 0.088 Yes
IINmH * PBa 11.853 10 1.185 2.390 0.009 No
IINmH * CS 5.942 11 0.540 1.157 0.315 Yes
IINmH * VfM 10.623 14 0.759 1.838 0.031 No
IINmH * PeV 17.433 15 1.162 2.464 0.002 No
IINmH * MIS 8.918 7 1.274 2.773 0.008 No
IINmH * MIC 8.403 7 1.200 2.946 0.005 No
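As an illustration of the deviation-from-linearity logic behind Table 2, the sketch below compares a linear fit with a fit that treats each observed scale level as its own group; this is only an approximation of the SPSS procedure, and the file and column names are assumptions.

```python
# A minimal sketch of a deviation-from-linearity F test for one predictor-outcome pair.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("mhealth_survey.csv")  # hypothetical construct-score file

linear_fit = smf.ols("IINmH ~ HM", data=df).fit()      # straight-line relationship
grouped_fit = smf.ols("IINmH ~ C(HM)", data=df).fit()  # one mean per observed HM level

# A significant F statistic for the extra terms signals deviation from linearity.
print(anova_lm(linear_fit, grouped_fit))
```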

5.1.3. Common method variance (CMV)

Since the measurement scales in our study were self-reported, it is crucial to rule out potential CMV to ensure that the findings are unbiased. First, Harman's single-factor analysis with all eleven constructs was utilised to check that the obtained data was free from CMV (Podsakoff et al., 2003). The results reveal that a single factor explained 24.92% of the variation, which is much less than the recommended threshold of 50% (Podsakoff et al., 2003), suggesting that there was no evidence of CMV in this study. Second, we conducted a collinearity test, which produced variance inflation factors (VIFs). The model is considered free from CMV if the VIF values are less than or equal to 3.3 (Kock, 2015). The VIF values in this study are all equal to or lower than 3.3 (see Appendix A), confirming that the test did not reveal any potential sources of CMV.
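The two checks described above can be approximated as sketched below; the first principal component is used here as a stand-in for Harman's unrotated single factor, and the file and column names are assumptions.

```python
# A minimal sketch of Harman's single-factor test and the VIF-based collinearity check.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

items = pd.read_csv("mhealth_items.csv")  # hypothetical: all measurement items

# Harman's single-factor test: share of variance captured by the first component.
first_factor_share = PCA(n_components=1).fit(items).explained_variance_ratio_[0]
print(f"First factor explains {first_factor_share:.1%} of the variance (threshold: 50%)")

# Collinearity check: VIF per construct score (values <= 3.3 suggest no CMV).
scores = pd.read_csv("mhealth_constructs.csv")  # hypothetical construct-level scores
X = sm.add_constant(scores)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
```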

5.1.4. Homoscedasticity

At this stage, we analysed scatter plots of the standardised residuals and dependent factors to assess whether the data was homoscedastic. Homoscedasticity is an important assumption when interpreting the results of multiple regression. Fig. 2 illustrates that we may accept our conceptual framework for SEM analysis, since all residuals are evenly scattered and uniformly distributed along a straight line. These findings provide evidence for the homoscedasticity of the distribution (Ooi et al., 2018).

Fig. 2. Homoscedasticity test on raw data.

5.1.5. Non-response bias

Non-response bias can arise in questionnaire surveys when respondents who reply early differ systematically from those who reply later or not at all. Following Ooi et al. (2018), we employed an independent t-test to evaluate potential non-response bias. The respondents were divided into two groups according to the median day of data collection, and the construct scores were then computed for each group. There were no substantial differences between early and late responses, since all the t-tests had p-values > 0.05. Thus, we did not find evidence of non-response bias in this study.
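A sketch of this early-versus-late comparison is shown below; the submission-date column and file name are assumptions made purely for illustration.

```python
# A minimal sketch of the independent t-test comparing early and late respondents.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("mhealth_survey.csv", parse_dates=["submitted_on"])  # hypothetical names
cutoff = df["submitted_on"].median()  # median day of data collection
early = df[df["submitted_on"] <= cutoff]
late = df[df["submitted_on"] > cutoff]

for construct in ["PSus", "PSev", "HM", "PBe", "PBa", "CS",
                  "VfM", "PeV", "MIS", "MIC", "IINmH"]:
    t_stat, p_value = ttest_ind(early[construct], late[construct])
    print(f"{construct}: t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no bias detected
```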

5.2. Measurement model

The model fit index should always be assessed at the beginning of the model evaluation. The standardised root mean square residual (SRMR) has been taken into account in conjunction with the criteria for an acceptable model fit index proposed by Henseler et al. (2015). To prevent model misspecification, the value of SRMR should be less than or equal to 0.08 or 0.10 (Hu & Bentler, 1998). Our model provides a good model fit, with SRMR = 0.069. To validate the measurement model in this study, we used two different validity assessments (convergent validity and discriminant validity). This study took into account the factor loadings, Cronbach's alpha, the composite reliability (CR), and the average variance extracted (AVE) to confirm convergent validity; both the Fornell-Larcker criterion and the Heterotrait-Monotrait ratio (HTMT) were applied to assess discriminant validity (Hair et al., 2020). Cronbach’s alpha values higher than 0.50 were used to establish the internal reliability of the constructs (Hu & Bentler, 1998). Additionally, we assessed construct reliability with composite reliability (CR) values > 0.70, as suggested by Fornell and Larcker (1981). Finally, we computed the average variance extracted (AVE), whose threshold value must exceed 0.50 (Fornell & Larcker, 1981) to indicate that the measurement error is less than the construct’s observed variance. The Cronbach's alpha, rho_A, composite reliability, and AVE values, as shown in Table 3, are all acceptable. The factor loadings of each item presented in Appendix A are also deemed acceptable. Moreover, the Fornell–Larcker criterion (Fornell & Larcker, 1981) and the more recently developed HTMT (Henseler et al., 2015) were investigated to measure whether the constructs could be effectively differentiated. The square root of the AVE should be greater than the inter-correlations of the constructs (Fornell & Larcker, 1981), and Table 4 shows that this research satisfies the criteria for discriminant validity. Finally, the results presented in Table 5 show the HTMT ratios, which were assessed against the 0.90 threshold (Henseler et al., 2015), supporting the discriminant validity of the constructs.
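For reference, the composite reliability and AVE figures reported in Table 3 can be derived from standardised loadings as sketched below; the example loadings are hypothetical and serve only to illustrate the formulas.

```python
# A minimal sketch of the convergent-validity measures computed from standardised loadings.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    loadings = np.asarray(loadings, dtype=float)
    errors = 1.0 - loadings ** 2
    return float(loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum()))

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    return float(np.mean(np.asarray(loadings, dtype=float) ** 2))

hypothetical_loadings = [0.82, 0.86, 0.89]  # e.g., three items of one construct
print(round(composite_reliability(hypothetical_loadings), 3))       # should exceed 0.70
print(round(average_variance_extracted(hypothetical_loadings), 3))  # should exceed 0.50
```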

Table 3.

Analysis of convergent validity.

Cronbach's Alpha rho_A Composite Reliability Average Variance Extracted (AVE)
PeV 0.770 0.773 0.853 0.593
HM 0.613 0.680 0.832 0.714
MIC 0.704 0.709 0.871 0.771
MIS 0.613 0.623 0.826 0.704
PBa 0.837 0.862 0.900 0.750
PBe 0.812 0.840 0.887 0.723
VfM 0.684 0.696 0.824 0.610
PSev 0.822 0.878 0.891 0.732
PSus 0.785 0.905 0.863 0.680
CS 0.687 0.692 0.865 0.761
IINmH 0.605 0.608 0.835 0.717

Table 4.

Inter-correlation between the constructs and the square root of AVEs. (Fornell-Larcker Criterion).

PeV HM MIC MIS PBa PBe VfM PSev PSus CS IINmH
PeV 0.770
HM 0.090 0.845
MIC 0.361 0.132 0.878
MIS 0.717 0.117 0.296 0.839
PBa 0.194 −0.009 0.060 0.034 0.866
PBe 0.085 0.043 0.080 0.068 0.049 0.850
VfM 0.133 0.188 0.247 0.260 −0.120 0.272 0.781
PSev 0.091 −0.035 −0.043 −0.067 0.199 0.297 −0.149 0.855
PSus 0.049 −0.054 −0.030 −0.069 0.157 0.348 −0.074 0.829 0.825
CS 0.226 0.122 0.236 0.269 −0.092 0.245 0.568 −0.163 −0.103 0.873
IINmH 0.347 0.212 0.559 0.369 −0.057 0.126 0.398 −0.195 −0.157 0.393 0.847

Table 5.

Heterotrait-Monotrait ratio (HTMT).

PeV HM MIC MIS PBa PBe VfM PSev PSus CS IINmH
PeV
HM 0.133
MIC 0.490 0.199
MIS 0.917 0.198 0.464
PBa 0.256 0.029 0.091 0.081
PBe 0.124 0.092 0.105 0.127 0.067
VfM 0.192 0.290 0.347 0.406 0.176 0.365
PSev 0.177 0.067 0.052 0.095 0.257 0.372 0.190
PSus 0.137 0.100 0.078 0.092 0.227 0.425 0.103 0.939
CS 0.304 0.180 0.332 0.423 0.130 0.335 0.822 0.199 0.138
IINmH 0.502 0.336 0.854 0.615 0.077 0.174 0.606 0.264 0.201 0.607

5.3. Structural model

The measurement model provided significant findings, but we examined the structural model further before drawing any conclusions. To ensure the model accurately portrays the relationships between the various paths, a bootstrapping procedure with 5,000 sub-samples was applied (Hair et al., 2010). Table 6 and Fig. 3 present the findings of the hypothesis tests. In statistical hypothesis testing, the path coefficient (β), T-statistic, and p-value were calculated to decide whether each hypothesis was supported. The results, as shown in Table 6 and Fig. 3, show that perceived severity (β = 0.123, P < 0.05), health motivation (β = 0.090, P < 0.05), performance value (β = 0.101, P < 0.05), confirmation of service (β = 0.122, P < 0.01), and value-for-money (β = 0.132, P < 0.001) have a significant positive impact on the intention to invest in mHealth technology, thus supporting H2, H3, H6, H7, and H8, respectively. Mobile Internet cost (β = −0.417, P < 0.001) has a significant negative impact on the intention to invest in mHealth technology, indicating that an increase of 1 unit in mobile Internet cost will reduce the IINmH of young adults by 0.417 units. In contrast, the results showed that perceived susceptibility (β = −0.026, P > 0.05), perceived benefits (β = 0.057, P > 0.05), perceived barriers (β = −0.051, P > 0.05), and mobile Internet speed (β = 0.083, P > 0.05) did not affect young adults’ intentions to invest in mHealth technology, indicating that hypotheses H1, H4, H5, and H9, respectively, were not supported.

Table 6.

PLS-SEM path analysis.

Hypothesis β T Statistics P Values SE 2.5% 97.5% Supported f2
H1 PSus -> IINmH −0.026 0.450 0.653 0.002 −0.140 0.087 No 0.001
H2 PSev -> IINmH 0.123 1.982 0.048 0.003 −0.240 0.004 Yes 0.008
H3 HM -> IINmH 0.090 2.327 0.020 0.002 0.010 0.162 Yes 0.014
H4 PBe -> IINmH 0.057 1.581 0.115 0.002 −0.032 0.116 No 0.005
H5 PBa -> IINmH −0.051 1.424 0.155 0.002 −0.103 0.043 No 0.004
H6 PeV -> IINmH 0.101 1.965 0.051 0.002 −0.014 0.206 Yes 0.017
H7 CS -> IINmH 0.122 2.619 0.009 0.002 0.037 0.214 Yes 0.019
H8 VfM -> IINmH 0.132 3.283 0.001 0.002 0.050 0.203 Yes 0.008
H9 MIS -> IINmH 0.083 1.600 0.110 0.002 −0.007 0.185 No 0.006
H10 MIC -> IINmH −0.417 9.807 0.000 0.002 0.320 0.493 Yes 0.262

Predictive Relevance: R-Square: 0.457, Q-Square: 0.303 (DV = IINmH).

Fig. 3. Path analysis diagram.

Nevertheless, it is worth noting that the path coefficients should not be interpreted until the predictive values of the model have been established. The model explains around 45.7% of the variance in investment intentions in mHealth technology, indicating that it captured approximately half of the total variance. The final step of model validation was to examine predictive relevance using Stone–Geisser's Q-square values. A Q2 value higher than zero indicates that the model has predictive relevance (Rehman Khan & Yu, 2020). Our model adequately predicted investment intentions in mHealth technologies, since Q2 is equal to 0.303 (Table 6). The contribution of each independent predictor to the model is expressed as the effect size (f2) of its path. f2 values of 0.02, 0.15, and 0.35 reflect small, medium, and large effects, respectively (Cohen, 1988). As illustrated in Table 6, the results indicate that mobile Internet cost has a large effect (ƒ2 = 0.262) on the intention to invest in mHealth technology.
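For clarity, the effect sizes in Table 6 follow Cohen's f² formula, illustrated below; the excluded-model R² used here is a hypothetical value chosen only to show the arithmetic.

```python
# A minimal sketch of Cohen's f-squared: the change in R² relative to unexplained variance.
def f_squared(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical illustration: full-model R² = 0.457, model without the predictor R² = 0.315.
print(round(f_squared(0.457, 0.315), 3))  # 0.262 with these illustrative inputs
```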

5.4. Post hoc analysis of PLS-SEM

We used importance-performance map analysis (IPMA) alongside the standard PLS analysis to better understand future investment intentions in mHealth technology among young adults. IPMA provides the opportunity to supplement our PLS-SEM findings and offers additional insights (Ringle and Sarstedt, 2016). The primary purpose of the IPMA is to identify predictors that have a substantial impact (importance) on the target construct but a relatively low average latent variable score (performance) (Pisitsankkhakarn & Vassanadumrongdee, 2020). We implemented the IPMA technique following Hair et al. (2017) and Pisitsankkhakarn and Vassanadumrongdee (2020) to identify the most important predictors of IINmH. Table 7 and Fig. 4 show the findings of the IPMA, which imply that mobile Internet cost has a substantial effect on investment intention in mHealth technology, with the highest total effect score of −0.417 at a performance level of 63.029. Fig. 4 shows that increasing the “mobile Internet cost” by 1 unit will reduce overall investment intention by 0.417 units. Similarly, value-for-money enhances investment intention, with a total effect of 0.132 and an overall performance level of 54.976. Accordingly, policymakers and/or governments may help increase the IINmH among young adults by lowering mobile Internet costs. These findings confirm that policymakers should strongly prioritise these predictors.

Table 7.

Importance-Performance Analysis.

Constructs Importance Performance
PeV 0.101 61.941
HM 0.090 68.462
MIC −0.417 63.029
MIS 0.083 64.242
PBa −0.051 76.522
PBe 0.057 45.425
VfM 0.132 54.976
PSev −0.123 56.905
PSus −0.026 60.473
CS 0.122 58.640
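The performance scores in Table 7 are, in essence, latent variable scores rescaled to a 0–100 range; a sketch of that rescaling is given below, with hypothetical scores used purely for illustration.

```python
# A minimal sketch of the IPMA performance metric: rescale scores to a 0-100 range.
import numpy as np

def ipma_performance(scores, scale_min=1, scale_max=5):
    """Mean of scores rescaled from the original Likert range to 0-100."""
    scores = np.asarray(scores, dtype=float)
    return float(np.mean((scores - scale_min) / (scale_max - scale_min) * 100))

hypothetical_scores = [4, 3, 5, 4, 3]  # latent/construct scores of five respondents
print(round(ipma_performance(hypothetical_scores), 2))  # 70.0 on the 0-100 scale
```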

Fig. 4. Importance-performance map analysis.

5.5. Asymmetric analysis (fsQCA)

fsQCA is an asymmetric modelling technique in which fuzzy sets are combined with fuzzy logic. The key concepts of fsQCA are fuzzy set theory and Boolean algebra, which are used to investigate how multiple predictors interact, through their presence or absence, to lead to a particular outcome (Ragin, 2009). There are several important reasons why “asymmetric modelling with complexity theory” is crucial. First, correlation coefficients, even if very reliable, cannot account for the strength of the relationship between endogenous and exogenous factors, and fuzzy sets can help resolve this problem (Kaya et al., 2020, Pappas et al., 2017). Second, the findings of symmetric analyses such as multiple regression analysis (MRA) and SEM can result in overfitting, owing to collinearity confounding related effects (Olya & Altinay, 2016). Third, in real-world problems, outcome variables often depend on multiple factors and combinations of predictors, rather than on a single predictor. While symmetric analysis implies that high coefficient values for predictor variables are necessary and sufficient to predict outcome variables, asymmetric analysis suggests that high coefficient values for predictor variables are sufficient but not necessarily essential to predict outcome variables (Kaya et al., 2020). Moreover, fsQCA assists in distinguishing between independent and dependent variables and in identifying patterns that explain an outcome; more crucially, it differs from other types of analyses, such as regression, correlation, and ANOVA, in that it provides possible alternative solutions to a single problem (Pappas et al., 2017).

5.5.1. Calibration

Before performing the fsQCA analysis, calibration of the raw data is required. The Likert scale used in this research needed rescaling: since the study constructs were assessed on a 5-point Likert scale, rescaling the uncalibrated data is crucial. The one-sample Kolmogorov–Smirnov test (Leong et al., 2019) on our uncalibrated data showed that all p-values (Table 1) are < 0.05, confirming that none of the predictors in the research model is normally distributed. We also verified that our raw data was not normally distributed by checking that the kurtosis was within ±2 and the skewness within ±1 (Appendix A), suggesting non-normal distributions (Kaya et al., 2020). Prior studies in the literature have theorised that the values must be set at 1 for full membership, 0.5 for the crossover point, and 0 for full non-membership (Fiss, 2011). The fsQCA software (version 3.1b) was used to transform the variables into calibrated sets. The average of the items was first used to compute all constructs (Pappas & Woodside, 2021), and then the original values covering 95%, 50%, and 5% of the results were used as the calibration thresholds. The quartile statistics for these transformations are shown in Table 8.

Table 8.

Quartile results for the calibration thresholds.

PSus PSev HM PBe PBa CS VfM PeV MIS MIC IINmH
Full non-membership (5%) 1.67 1.33 2.33 1.25 2.67 2.00 2.00 2.00 2.00 2.00 2.33
Crossover point (50%) 3.67 3.33 4.00 3.00 4.00 3.33 3.50 3.50 3.50 3.50 3.67
Full membership (95%) 5.00 4.67 4.67 4.25 5.00 4.33 4.50 5.00 5.00 5.00 4.33
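The direct calibration step can be illustrated as follows, using the 5%/50%/95% thresholds reported for IINmH in Table 8 and a log-odds transformation in the spirit of Ragin's direct method; the input values are hypothetical.

```python
# A minimal sketch of direct calibration: map raw scores to fuzzy memberships in [0, 1].
import numpy as np

def calibrate(x, full_non, crossover, full):
    """Logistic (log-odds) calibration around the three anchor thresholds."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full - crossover),      # upper half: full membership ~0.95
        3.0 * (x - crossover) / (crossover - full_non),  # lower half: full non-membership ~0.05
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# IINmH thresholds from Table 8: 2.33 (5%), 3.67 (50%), 4.33 (95%).
print(calibrate([2.33, 3.67, 4.33], full_non=2.33, crossover=3.67, full=4.33).round(3))
# -> approximately [0.047, 0.5, 0.953]
```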

5.5.2. Analysis of necessary conditions

Although fsQCA ultimately focuses on identifying sufficient conditions, the necessary conditions should always be evaluated first. This study examines the single endogenous variable referred to as ‘IINmH’ in the PLS-SEM model (Fig. 1), as well as the outcome conditions associated with it. The fsQCA analysis examines the conditions of the ten predictors for the outcome variable, as in the SEM model, and investigates all conditions that do and do not influence the outcome. According to Ragin (2009), the consistency range is from 0 to 1; typically, consistency should never be ≤ 0.75 and should be ≥ 0.80. In addition, a condition is “almost always necessary” or “necessary” when the associated consistency value is ≥ 0.80 or ≥ 0.90, respectively (Ragin, 2000). The specific findings are shown in Table 9, and they reveal that none of the predictors alone is necessary for ‘IINmH’, since all consistency scores are < 0.90.

Table 9.

Analysis of necessary conditions (Outcome variable: IINmH).

Conditions tested Consistency Coverage Conditions tested Consistency Coverage
PSus 0.587101 0.589008 ∼PSus 0.695186 0.674623
PSev 0.609482 0.564850 ∼PSev 0.661870 0.698008
HM 0.654241 0.687695 ∼HM 0.610788 0.567706
PBe 0.661761 0.658962 ∼PBe 0.635458 0.621174
PBa 0.645594 0.572746 ∼PBa 0.602361 0.669251
CS 0.727919 0.695066 ∼CS 0.542197 0.553275
VfM 0.746557 0.752767 ∼VfM 0.548737 0.529928
PeV 0.717057 0.666702 ∼PeV 0.547574 0.575355
MIS 0.791135 0.672514 ∼MIS 0.472624 0.555468
MIC 0.756765 0.716572 ∼MIC 0.529699 0.545434
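The consistency and coverage values in Table 9 follow the standard fuzzy-set necessity formulas, sketched below with hypothetical membership scores.

```python
# A minimal sketch of necessity consistency and coverage for a condition X and outcome Y.
import numpy as np

def necessity_consistency(x, y):
    """Consistency of X as a necessary condition for Y: sum(min(x, y)) / sum(y)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.minimum(x, y).sum() / y.sum())

def necessity_coverage(x, y):
    """Coverage (relevance) of X for Y: sum(min(x, y)) / sum(x)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.minimum(x, y).sum() / x.sum())

condition = [0.9, 0.7, 0.4, 0.8]  # hypothetical calibrated condition memberships
outcome = [0.8, 0.6, 0.5, 0.9]    # hypothetical calibrated outcome memberships
print(round(necessity_consistency(condition, outcome), 3),
      round(necessity_coverage(condition, outcome), 3))
```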

5.5.3. Analysis of sufficient conditions

Generating the truth table is the first step in implementing a sufficient condition analysis (Ragin, 2009). The truth table contains 2^k rows (where k is the number of conditions), and each row indicates one combination of causal conditions. The fsQCA technique was used to generate the truth table in this research to understand the IINmH. For large sample sizes, Fiss (2011) recommends that the frequency cut-off be set at 3 or higher (Pappas & Woodside, 2021). Consistency values above 0.74 indicate that the combined assessment has provided meaningful solutions (Elbaz et al., 2018). The frequency threshold was set at 5, since our sample size is 558, and any combinations with a lower frequency were excluded from further investigation. Table 10 summarises the findings of the fsQCA analyses for the IINmH (intermediate solution). Simplified notation has been used to enhance the readability of the results, where black circles (●) indicate the presence of a causal condition, crossed circles (⊗) indicate the absence or negation of a condition, and empty cells represent instances where the presence or absence of a condition does not affect the outcome. Table 10 also reports the raw consistency for each solution, which is a measure similar to the regression coefficient. In addition, the coverage score for each solution and condition indicates the size of the effect in hypothesis testing (Woodside & Zhang, 2012). Finally, when evaluating the overall solution coverage, which is similar to the R-square value reported in variable-based approaches (Elbaz et al., 2018), it is possible to see whether the revealed configurations influence the IINmH.

Table 10.

fsQCA analysis (intermediate solution).

Model: IINmH = f(PSus, PSev, HM, PBe, PBa, CS, VfM, PeV, MIS, MIC)
Configuration Solution 1 Solution 2 Solution 3 Solution 4 Solution 5 Solution 6
PSus
PSev
HM
PBe
PBa
CS
VfM
PeV
MIS
MIC
Raw coverage 0.315214 0.402109 0.396821 0.227046 0.25426 0.262326
Unique coverage 0.042711 0.0137572 0.0165019 0.0156987 0.0121839 0.0158997
Consistency 0.98597 0.968712 0.965784 0.943525 0.97975 0.971248
Solution coverage: 0.546477Solution consistency: 0.940925

Table 10 illustrates that no single predictor outperforms combinations of predictors. The fsQCA results identify six pathways leading to IINmH, all with high raw consistency (above 0.90), which indicates high performance in predicting IINmH. In particular, the combination of low perceived susceptibility × low perceived severity × health motivation × low perceived barriers × confirmation of service × value-for-money × performance value × mobile Internet speed × mobile Internet cost (solution 1) is the most likely to achieve high performance, with a consistency score of 0.98597; some 31.5% of the young adults are covered by this solution (raw coverage). In solution 2, all predictors except perceived barriers are equally significant, with a consistency of 0.968712 and coverage of 40.2% of respondents. Similarly, solution 3 shows that every predictor except health motivation is equally important in predicting IINmH (consistency of 0.965784 and raw coverage of 0.396821). Solution 4 has comparatively low consistency (0.943525) and combines the full importance of value-for-money and mobile Internet cost with the negation of seven other predictors, while perceived barriers remain indifferent. Alternatively, the combination of low perceived susceptibility × low perceived severity × health motivation × low perceived benefits × low perceived barriers × low confirmation of service × value-for-money × low performance value × mobile Internet speed × mobile Internet cost (solution 5) can equally be expected to deliver strong performance, with a consistency score of 0.97975 and coverage of 25.4% of young adults. Finally, solution 6 is also competitive, with full importance of perceived susceptibility, perceived severity, health motivation, perceived benefits, confirmation of service, value-for-money, and mobile Internet cost, while perceived barriers, performance value, and mobile Internet speed can be neglected. Together, the six solutions cover 54.6% of membership in the high-IINmH outcome (solution coverage). In summary, the findings suggest that the same causal predictors can lead to strong intentions to invest in mHealth, depending on how the presence or absence of each predictor is configured with the other causal factors.
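For completeness, the raw consistency and raw coverage reported for each solution in Table 10 can be reproduced from the calibrated memberships in the same way: a configuration's membership is the minimum across its conditions, with negated conditions entering as 1 − membership. The sketch below is illustrative only; the condition roles shown for a solution-1-like recipe follow the verbal description above and are not taken from the authors' scripts.

```python
import numpy as np

def recipe_membership(df, present, absent=()):
    """Fuzzy membership in a configuration: the minimum across its conditions,
    with negated conditions entering as 1 - membership."""
    parts = [df[c].to_numpy(dtype=float) for c in present]
    parts += [1.0 - df[c].to_numpy(dtype=float) for c in absent]
    return np.minimum.reduce(parts)

def sufficiency_scores(recipe, outcome):
    """Raw consistency = sum(min(R, Y)) / sum(R); raw coverage = sum(min(R, Y)) / sum(Y)."""
    overlap = np.minimum(recipe, outcome).sum()
    return overlap / recipe.sum(), overlap / outcome.sum()

# Illustrative solution-1-like recipe (condition roles assumed from the description above):
# r = recipe_membership(df,
#                       present=["HM", "CS", "VfM", "PeV", "MIS", "MIC"],
#                       absent=["PSus", "PSev", "PBa"])
# consistency, raw_coverage = sufficiency_scores(r, df["IINmH"].to_numpy(dtype=float))
```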

5.6. Machine learning for predicting IINmH

ML classification algorithms were employed to predict the relationships in the proposed integrated model. Eight ML algorithms were used to predict IINmH, each trained on labelled data in a supervised learning approach. Python 3.7 was used for modelling, training, and evaluating the prediction models, which comprised the support vector machine (SVM), logistic regression (LR), random forest, k-nearest neighbours (KNN), Naive Bayes, AdaBoost, neural network (NN), and XGBoost classifiers. For classification, the target variable was converted into a binary outcome (0 = no intention to invest in mHealth, 1 = intention to invest), using the mean score of the IINmH items as the threshold. A cross-channel normalisation technique was used to scale the input data to the range [0, 1] (Mezzatesta et al., 2019). In this study, 80% of the data was used for training the models, while the remaining 20% was used for testing. Finally, sensitivity analysis was performed on the best-performing ML model to explore the association between the input predictors and the target (IINmH), as well as the relative importance of these predictors.
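A minimal sketch of this pre-processing pipeline is given below. It assumes a survey file and column names chosen purely for illustration, uses scikit-learn's MinMaxScaler as a stand-in for the [0, 1] normalisation step, and applies the 80/20 split described above.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Survey responses, one column per measurement item (file and column names are assumptions).
df = pd.read_csv("mhealth_survey.csv")

# Binary target: 1 if a respondent's mean IINmH item score exceeds the sample mean, else 0.
iinmh_mean = df[["IINmH1", "IINmH2"]].mean(axis=1)
y = (iinmh_mean > iinmh_mean.mean()).astype(int)

# Predictor matrix scaled to [0, 1] (stand-in for the normalisation step described above).
X = df.drop(columns=["IINmH1", "IINmH2"])
X_scaled = MinMaxScaler().fit_transform(X)

# 80% of the data for training, 20% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42, stratify=y)
```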

5.6.1. Feature selection and parameter tuning

Feature selection is a widely used technique for pre-processing data, and identifying the most important features for optimal model fitting in ML is a challenging task. Retaining only the important features eliminates superfluous and redundant ones, resulting in quicker and more accurate computations. There are two common approaches to evaluating candidate features: filter and wrapper methods (Hu et al., 2015). Embedded techniques are also popular in ML because they are less computationally expensive than wrapper methods. For classification modelling, Hasan and Bao (2021) showed that the wrapper method outperforms other methods in terms of feature selection. Therefore, the wrapper method was used for feature selection in this study.
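One way to realise such a wrapper approach in scikit-learn is sequential forward selection, which repeatedly refits a classifier on candidate feature subsets and keeps the subset with the best cross-validated score. The sketch below continues the illustrative pre-processing above; the wrapped estimator and subset size are assumptions, not the authors' exact configuration.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Wrapper-style forward selection: features are added one at a time, keeping the
# subset that maximises the cross-validated accuracy of the wrapped classifier.
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=10,   # illustrative; the subset size is not reported here
    direction="forward",
    scoring="accuracy",
    cv=5)
X_train_fs = selector.fit_transform(X_train, y_train)
X_test_fs = selector.transform(X_test)
```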

Moreover, every ML algorithm has parameters that can have a significant impact on its performance. A grid search method was used to test a large number of combinations of parameter values and identify the most suitable parameters for each ML model (Syarif et al., 2016). The parameter ranges and best values used for tuning the ML models are shown in Table 11. A five-fold cross-validation technique was applied to avoid overfitting. Additionally, an early stopping criterion was employed for the neural network to halt training when the loss had not improved over a large number of epochs (Zhou et al., 2021).

Table 11.

Parameter ranges/best values.

Classifier Parameter ranges/best values
Support Vector Machine C = 10, gamma = 0.001
Logistic Regression C = 10.0, penalty = l2
Random Forest Number of trees = 200, Number of features for splitting = 8
KNN Number of neighbours = 8
Naive Bayes
AdaBoost 'n_estimators': [100,200],
'learning_rate': [0.001, 0.01, 0.1, 0.2, 0.5]
Neural Network 'alpha': array ([1.e-01, 1.e-02, 1.e-03, 1.e-04, 1.e-05, 1.e-06]),
'hidden_layer_sizes': array ([ 5, 6, 7, 8, 9, 10, 11]),
'max_iter': [500, 1000, 1500],
'random_state': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
XGBoost 'min_child_weight': [1, 5, 10],
'gamma': [0.5, 1, 1.5, 2, 5],
'subsample': [0.6, 0.8, 1.0],
'colsample_bytree': [0.6, 0.8, 1.0],
'max_depth': [3, 4, 5]
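As an illustration, a five-fold grid search over the XGBoost ranges listed in Table 11 could be run with scikit-learn's GridSearchCV and the xgboost package, continuing the sketches above; the scoring metric is an assumption.

```python
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {                          # ranges taken from Table 11
    "min_child_weight": [1, 5, 10],
    "gamma": [0.5, 1, 1.5, 2, 5],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
    "max_depth": [3, 4, 5],
}

# Exhaustive search over the grid with five-fold cross-validation, as described above.
search = GridSearchCV(
    XGBClassifier(random_state=0),
    param_grid,
    scoring="accuracy",   # assumed metric
    cv=5)
search.fit(X_train, y_train)
best_xgb = search.best_estimator_
print(search.best_params_, search.best_score_)
```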

5.6.2. Performance matrix and evaluation of findings

In predicting IINmH, the goal is to identify whether a user will invest in mHealth. Model assessment is based on the confusion matrix shown in Table 12, from which accuracy was determined (Equation (1)). The eight classifiers were assessed using several measures: accuracy, true positive rate, false positive rate, precision, recall, F-measure, and GMean. Accuracy (as a percentage) measures how well a model's predictions match actual occurrences. However, accuracy alone cannot distinguish how many samples of each class are correctly identified, particularly for the positive class in classification problems, and an otherwise reasonable classifier may mislabel the positive class as negative. Accuracy alone might therefore not provide an adequate measure of classification performance.

$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$ (1)
Table 12.

Confusion matrix for predicting the intention to invest in mHealth.

Predicted
Actual Intention to invest No intention to invest
Intention to invest True positive (TP) False Negative (FN)
No intention to invest False Positive (FP) True Negative (TN)

In addition, we incorporated five further metrics: precision, recall, F-measure, GMean, and the area under the curve (AUC), alongside the confusion matrix and accuracy percentage, as these are often applied to classification problems. These values are calculated as follows:

$\text{Precision} = \frac{TP}{TP + FP}$ (2)

$\text{Recall} = \frac{TP}{TP + FN}$ (3)

$\text{F-Measure} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$ (4)

$\text{GMean} = \sqrt{\frac{TP}{TP + FP} \times \frac{TP}{TP + FN}} = \sqrt{\text{Precision} \times \text{Recall}}$ (5)

$\text{AUC} = \frac{1}{2}\left(1 + \frac{TP}{TP + FN} - \frac{FP}{FP + TN}\right)$ (6)

The F-measure is the harmonic mean of precision and recall, where precision is the proportion of positive predictions that are correct and recall measures how well a classifier identifies positive instances. GMean assesses whether performance is balanced across these two quantities (Equation (5)); a low GMean score is obtained if the model is inherently biased towards one of the two classes. Finally, the AUC evaluates the average performance of a classification model across varying decision thresholds: the higher the AUC, the more accurate and reliable the model's classification capacity. Provost and Fawcett (1997) recommended using the area under the receiver operating characteristic (ROC) curve rather than the accuracy rate alone, and this practice has become widespread in classification research. Therefore, we used the F-measure, the GMean, and the AUC of the ROC curve to evaluate model performance in predicting IINmH.
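Equations (1)–(6) map directly onto standard library calls. The sketch below evaluates the tuned XGBoost estimator from the earlier sketches on the held-out test set; GMean is computed from precision and recall as defined in Equation (5), since scikit-learn provides no built-in for it.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_pred = best_xgb.predict(X_test)
y_prob = best_xgb.predict_proba(X_test)[:, 1]   # scores for the positive class

accuracy  = accuracy_score(y_test, y_pred)       # Equation (1)
precision = precision_score(y_test, y_pred)      # Equation (2)
recall    = recall_score(y_test, y_pred)         # Equation (3)
f_measure = f1_score(y_test, y_pred)             # Equation (4)
gmean     = np.sqrt(precision * recall)          # Equation (5)
auc       = roc_auc_score(y_test, y_prob)        # area under the ROC curve
```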

Before running the ML algorithms, we prepared two versions of the dataset: the original dataset (Ori) and the dataset after feature selection (FS). Results were then computed for four settings: the original dataset (Ori), the dataset with feature selection (FS), the feature-selected dataset with hyperparameter optimisation (FS_HPO), and the original dataset with hyperparameter optimisation (Ori_HPO). The findings obtained from the eight ML models in each setting are shown in Table 13. The most accurate models were provided by XGBoost for both Ori_HPO and FS_HPO, with accuracies of 90.0% and 88.1%, respectively, whereas the least accurate models were provided by the NN, with accuracies of 67.6% and 65.2% for Ori_HPO and FS_HPO, respectively.

Table 13.

Performance analysis of ML models.

Classifiers CC1 (%) TP2 FP3 precision recall f1-score Gmean
Ori Support Vector Machine 80.40 0.32 0.06 0.78 0.91 0.84 0.84
Logistic Regression 79.50 0.32 0.07 0.78 0.89 0.83 0.83
Random Forest 72.30 0.31 0.10 0.75 0.83 0.79 0.79
KNN 78.60 0.28 0.04 0.75 0.94 0.83 0.84
Naive Bayes 75.90 0.32 0.11 0.77 0.83 0.8 0.8
AdaBoost 75.00 0.29 0.09 0.74 0.86 0.8 0.8
Neural Network 72.00 0.28 0.12 0.73 0.81 0.77 0.77
XGBoost 68.80 0.28 0.15 0.71 0.77 0.74 0.74
FS Support Vector Machine 76.80 0.27 0.05 0.74 0.92 0.82 0.83
Logistic Regression 77.70 0.30 0.07 0.76 0.89 0.82 0.82
Random Forest 72.30 0.28 0.12 0.74 0.8 0.77 0.77
KNN 75.80 0.28 0.07 0.74 0.89 0.81 0.81
Naive Bayes 76.80 0.30 0.08 0.76 0.88 0.81 0.82
AdaBoost 72.30 0.34 0.17 0.77 0.73 0.75 0.75
Neural Network 69.60 0.24 0.10 0.69 0.84 0.76 0.76
XGBoost 71.40 0.33 0.17 0.76 0.73 0.75 0.74
FS_HPO (Grid Search) Support Vector Machine 77.80 0.30 0.07 0.76 0.89 0.82 0.82
Logistic Regression 77.80 0.30 0.07 0.76 0.89 0.82 0.82
Random Forest 71.00 0.31 0.15 0.74 0.77 0.75 0.75
KNN 73.20 0.27 0.09 0.72 0.86 0.79 0.79
Naive Bayes
AdaBoost 75.90 0.31 0.10 0.76 0.84 0.8 0.8
Neural Network 65.20 0.26 0.17 0.68 0.73 0.71 0.7
XGBoost 88.10 0.34 0.04 0.81 0.94 0.87 0.87
Ori_HPO (Grid Search) Support Vector Machine 79.5 0.31 0.06 0.77 0.91 0.83 0.84
Logistic Regression 79.5 0.32 0.07 0.78 0.89 0.83 0.83
Random Forest 74.1 0.31 0.11 0.75 0.83 0.79 0.79
KNN 75.9 0.31 0.10 0.76 0.84 0.8 0.8
Naive Bayes
AdaBoost 75.0 0.30 0.10 0.75 0.84 0.79 0.79
Neural Network 67.6 0.29 0.17 0.71 0.73 0.72 0.72
XGBoost 90.0 0.30 0.07 0.82 0.95 0.88 0.88

Ori = Original dataset, FS = feature selection, HPO = hyperparameter optimisation.

1 Correctly Classified (%); 2 TP: True Positive; 3 FP: False Positive.

When the FS dataset was used, both Random Forest and AdaBoost achieved the same accuracy of 72.30%. Similarly, SVM and LR achieved identical accuracies in both the FS_HPO (77.8%) and Ori_HPO (79.5%) settings. A noteworthy finding is that the neural network performed better when given the full feature set; when features were eliminated, its accuracy decreased (Ori: 72%, FS: 69.6%, FS_HPO: 65.2%, and Ori_HPO: 67.6%). In terms of precision, recall, F1-score, and GMean, the XGBoost model outperformed the other ML models once its hyperparameters were optimised, whereas the neural network produced the worst values in both the FS_HPO and Ori_HPO settings. Moreover, with an AUC score of 0.847 for Ori_HPO, XGBoost outperformed the other ML models, followed by LR with a value of 0.779 in both the Ori and Ori_HPO settings. Fig. 5 shows the ROC curve and AUC score of each ML model. We can conclude that feature selection is not a major concern for a low-dimensional dataset such as ours. Compared with the other seven ML models, XGBoost performed best. The XGBoost model was therefore employed to further investigate the relationship between the predictors and the target variable of the proposed integrated research framework, which determines the intention of young adults to invest in mHealth.
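ROC curves such as those in Fig. 5 can be produced per classifier on the held-out test set, for example with scikit-learn's RocCurveDisplay; the sketch below reuses the tuned XGBoost estimator from the earlier sketches, and further fitted classifiers can be added to the dictionary.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay

fig, ax = plt.subplots()
for name, model in {"XGBoost": best_xgb}.items():   # add other fitted classifiers as needed
    RocCurveDisplay.from_estimator(model, X_test, y_test, name=name, ax=ax)
ax.plot([0, 1], [0, 1], linestyle="--", color="grey")   # chance-level reference line
ax.set_title("ROC curves (AUC shown in the legend)")
plt.show()
```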

Fig. 5.

Fig. 5

ROC curves and AUC scores for each ML model.

5.6.3. Sensitivity analysis (SA)

SA is an approach for determining how the variability of a target outcome is affected by variability in its input predictors. A sensitivity analysis was therefore performed to determine the relative importance of the predictive variables for the target output. Numerous SA techniques have been described in the scientific literature; in recent work (Zhou et al., 2021), Sobol's indices (Sobol, 2001), a variance-based method applied to surrogate models, have gained prominence. Sobol's SA quantifies the contribution of each predictor variable, and of their interactions, to the overall variance of the model output (Zhang et al., 2015). It is worth noting that the purpose of Sobol's SA is not to identify the source of input variability; it simply shows the magnitude of each input's effect on the model output.

Consequently, Sobol's SA was applied to the selected XGBoost model in this study to investigate and quantify the relevance of the predictor variables influencing IINmH. The relative feature importance of each predictor variable is shown in Fig. 6. Value-for-money was the most important predictor affecting IINmH, followed by mobile Internet cost; health motivation and perceived susceptibility ranked third and fourth, respectively, while mobile Internet speed was the least important predictor. In summary, financial concerns (value-for-money and the cost of mobile Internet access) greatly influence young adults' intentions to invest in mHealth.
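The article does not name the software used for the Sobol analysis. The sketch below uses the SALib package as one possible implementation, under the assumption that the tuned XGBoost model was fitted on the full [0, 1]-scaled predictor set; the model's predicted probability serves as the output whose variance is decomposed.

```python
from SALib.sample import saltelli
from SALib.analyze import sobol

feature_names = list(X.columns)                     # predictors from the earlier sketches
problem = {
    "num_vars": len(feature_names),
    "names": feature_names,
    "bounds": [[0.0, 1.0]] * len(feature_names),    # inputs were scaled to [0, 1]
}

# Saltelli sampling of the input space; the trained classifier acts as the model function.
param_values = saltelli.sample(problem, 1024)
y_out = best_xgb.predict_proba(param_values)[:, 1]

# First-order (S1) and total-order (ST) Sobol indices quantify each predictor's contribution.
indices = sobol.analyze(problem, y_out)
ranking = sorted(zip(feature_names, indices["ST"]), key=lambda t: -t[1])
print(ranking)   # predictors ordered by total-order sensitivity
```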

Fig. 6.

Fig. 6

Relative feature importance.

6. Discussion

Understanding young adults’ IINmH is critical for the long-term viability of healthcare systems, since this digitalised group is likely to be the most prominent consumer of mHealth services. To better understand the mechanism behind investment intentions in mHealth, this study integrates the HBM and ECM and presents a novel model based on task needs and technological functionalities. Moreover, prior research on mHealth has overlooked monetary aspects (i.e., Internet cost), network factors (i.e., Internet speed), and the possible effect of users' expectations on the decision to use mHealth, all of which may affect IINmH. The conclusions of such research are therefore limited, because they cannot fully represent the asymmetrical relationships among the predictors involved. This study used an integrated research model based on the HBM and ECM to investigate the theoretical relationships between health beliefs, users' expectations, and the decision to use mHealth services, and examined the integrated model with a multi-analytical technique combining SEM, fsQCA, and ML approaches.

The SEM findings indicated that VfM, MIC, HM, and CS substantially impact young adults’ IINmH during the COVID-19 pandemic. Surprisingly, PBa and MIS are not significant predictors of IINmH. MIC is negatively related to IINmH, indicating that an increase in mobile Internet cost would decrease investment in mHealth. The fsQCA findings revealed that no single predictor is necessary on its own, but that all predictors play an important role in predicting IINmH; the fsQCA solution analysis produced six different solutions. Consistent with the post-hoc analysis of the SEM, the fsQCA and the ML-based feature importance method also showed that VfM and MIC are the most significant predictors of IINmH. Additionally, while the SEM model explained around 45.7% of the variance in IINmH, the fsQCA solutions covered 54.6% of the outcome across six configurations, and the ML-based XGBoost model predicted users’ IINmH with 90% accuracy. This study thus demonstrates the value of configurational analysis in mHealth research. It is therefore recommended to use a multi-analytical approach (SEM-fsQCA-ML) to explore the complex causation underpinning mHealth investment intention, which can overcome possible constraints and drawbacks of traditional statistical techniques. These findings have several implications for theory and practice, which are discussed in the next section.

7. Implications

7.1. Theoretical implications

The findings of this study provide a range of theoretical contributions. First, this study develops an integrated research framework applied to young adults' decisions to invest in mHealth. The integrated framework is more advanced than either the HBM or the ECM alone, demonstrating that integrating HBM and ECM concepts enhances the predictive power for IINmH. The integrated model thus provides a more comprehensive understanding of young adults' IINmH. This result addresses the gap identified by Veeramootoo et al. (2018), who argued that, to better comprehend behavioural approaches, researchers must implement integrated IS models and constructs rather than depending on conventional models.

Second, we investigated the most significant predictors affecting young consumers' investment decisions in mHealth. Unlike prior research, this study demonstrates that mobile Internet cost is a crucial antecedent of young consumers' investment in mHealth, an aspect previously overlooked in the literature. This study objectively investigates the predictors MIC and MIS to explore relevant research designs, find a strategy for enhancing the IS model, strengthen the argument constructively, and ultimately establish an approach for extending the IS model. This addresses the gap identified by Duarte and Pinho (2019), who recommended the inclusion of additional dimensions in the existing IS paradigm.

Third, most of the previous research in mHealth has examined the factors that influence either the intention to use or the actual usage of the mHealth technology. To the best of our knowledge, this study is one of the first efforts to investigate the factors influencing investment intention in mHealth among young adults and how this might affect the investment decision in mHealth during the COVID-19 pandemic.

Finally, this study applied a combination of SEM, fsQCA and ML for data analysis, which offers methodological advances in strengthening analytical approaches. SEM is an excellent method for measuring the symmetric relationships between predictor variables and the target output, while the fsQCA approach increases the ability to identify sufficient causal antecedents for outcomes. Moreover, unlike traditional analytical techniques such as SEM and fsQCA, ML algorithms can overcome the constraints associated with making assumptions about variable distributions and conducting hypothesis tests. The ML methods presented in this research can investigate non-linear connections between predictors and bridge the gap left by the use of a single hidden layer (Alam et al., 2021, Lee et al., 2020) during model training. Selecting the most relevant ML model also involves prioritising and quantifying the importance of the major contributing predictors.

7.2. Practical implications

This research offers significant practical implications by outlining several avenues for encouraging young adults to invest in mHealth during the COVID-19 outbreak. The findings indicate that governments and mHealth care providers should adopt a cost-oriented strategy when delivering mHealth services. The monetary factors (i.e., VfM and MIC) and service values (CS) associated with investing in mHealth should be taken into consideration by mobile-based healthcare service providers. Simultaneously, healthcare providers should ensure real-time service confirmation, since this is the foundation for increasing the likelihood that young adults will invest in mHealth. Furthermore, this study highlighted the relationships among crucial predictors of IINmH through the integration of the HBM and ECM, incorporating two exogenous constructs, MIS and MIC. This integrated model explains how mobile-based healthcare providers could encourage young adults to utilise and invest in mHealth services as a substitute for physical visits to healthcare centres, thereby reducing or eliminating the transmission of the COVID-19 virus. Moreover, value-for-money, mobile Internet cost, confirmation of service, performance values, and health motivation were uncovered and validated as factors affecting young adults’ intentions to invest in mHealth services. By spreading the word about mHealth and publicising it on social media, healthcare providers may encourage more people to invest in mHealth while maintaining social distancing. Finally, healthcare providers should offer inexpensive mHealth services for young adults, and mobile Internet service providers should keep mobile Internet packages as affordable as possible, which is also likely subject to government regulation.

8. Conclusions, limitations, and future research directions

This study developed a comprehensive model to explain the mechanism of intentions to invest in mHealth by young adults during the COVID-19 pandemic, using an integrative approach that incorporates the HBM and ECM with two additional exogenous constructs. Applying a multi-analytical approach, including SEM, fsQCA, and ML, we advanced the knowledge of how younger mHealth users can be motivated to invest in this service. While the SEM results revealed that value-for-money, mobile Internet costs, and confirmation of services influence younger users' mHealth investment intentions, the fsQCA results indicated that performance values and health motivations must always be combined with these variables. Furthermore, the comparison of eight distinct ML models indicated that XGBoost outperformed the other classification models regarding the accuracy, precision, recall, GMean, F1-measure, and ROC curve for AUC score. XGBoost model-based feature importance analysis showed that value-for-money and mobile Internet costs are the most prominent contributors to IINmH of young adults.

While the current study extends the existing body of knowledge on mHealth, it has certain limitations that point to potential future research directions. This research was designed to understand the motivation of young adults to invest in mHealth; future research should include a wider range of age groups as respondents, and new concepts might be applied to extend the model. In addition, this research was conducted in a least developed country, Bangladesh, where young people face financial constraints in using mHealth services; future studies should therefore consider extending this model to other countries. In line with this, cultural diversity might also be incorporated as a moderating factor in the new research framework, which could contribute meaningfully to the mHealth context.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This study was supported by the National Natural Science Foundation of China (Grant No. 71810107003). We are grateful to the Editor-in-Chief, handling editor and anonymous reviewers for their valuable comments, which improved the quality of our paper.

Biographies

Najmul Hasan is a PhD candidate at the School of Management, Huazhong University of Science and Technology (HUST), Wuhan, China. His research interests are in information systems research, mHealth, machine learning, predictive analytics, and medical informatics. He has published 29 journal articles and 4 conference papers. He received the Excellent Academic Award at HUST in 2019 and 2020. He is a regular reviewer for numerous renowned journals. During his career, he has been involved in more than 70 academic, social and market research projects.

Yukun Bao is a professor with the School of Management, Huazhong University of Science and Technology, China, and the Deputy Director of Center for Modern Information Management at the same university. His research interests are in computational intelligence-based predictive analytics, information systems and IT management. He has published more than 80 papers, and has been the principal investigator for four research projects funded by the National Science Foundation of China. He received the IBM’s Excellent Faculty Award in 2012. He is a senior member of IEEE, and an Associate Editor of Neurocomputing and the Journal of Systems and Information Technology.

Raymond Chiong is currently an associate professor with the School of Information and Physical Sciences at the University of Newcastle, Australia. He is also a guest research professor with the Center for Modern Information Management at Huazhong University of Science and Technology, China. His research interests include data analytics, machine learning and optimisation. He has published over 200 papers in these areas. He is the Editor-in-Chief of the Journal of Systems and Information Technology (Emerald), an Editor of Engineering Applications of Artificial Intelligence (Elsevier), and an Associate Editor of Engineering Reports (Wiley).

Footnotes

Appendix B

Supplementary data to this article can be found online at https://doi.org/10.1016/j.tele.2021.101765.

Appendix A. Weights and loadings of the measurement items with normality and bias testing.

Bias Corrected CI
Items Outer Loadings Skewness Kurtosis SE T Statistics P Values 2.5% 97.5% VIF
PSus1 0.846 −0.564 −0.878 0.064 13.148 0.000 0.711 0.909 1.715
PSus2 0.906 −0.395 −0.775 0.061 14.872 0.000 0.853 0.988 1.643
PSus3 0.710 −0.472 −0.509 0.113 6.293 0.000 0.362 0.803 1.574
PSev1 0.775 −0.400 −0.822 0.044 17.443 0.000 0.648 0.838 1.781
PSev2 0.915 −0.351 −0.743 0.015 59.380 0.000 0.882 0.943 2.297
PSev3 0.870 −0.115 −0.700 0.025 34.644 0.000 0.815 0.914 1.787
HM2 0.908 −0.869 0.312 0.041 21.965 0.000 0.827 0.983 1.242
HM3 0.777 −0.574 −0.107 0.081 9.586 0.000 0.522 0.874 1.242
PBe1 0.882 −0.046 −0.510 0.053 16.594 0.000 0.804 0.962 1.813
PBe2 0.833 −0.193 −0.443 0.104 7.997 0.000 0.693 0.901 1.919
PBe3 0.836 −0.170 −0.463 0.073 11.481 0.000 0.687 0.927 1.665
PBa1 0.867 −0.840 0.076 0.174 4.979 0.000 0.389 0.977 2.188
PBa2 0.871 −1.143 2.590 0.208 4.197 0.000 0.663 0.994 1.686
PBa3 0.860 −0.737 0.195 0.179 4.815 0.000 0.522 0.977 2.275
CS2 0.859 −0.534 −0.487 0.026 32.462 0.000 0.796 0.901 1.378
CS3 0.886 −0.555 0.017 0.018 48.213 0.000 0.840 0.913 1.378
VfM2 0.793 −0.142 −0.893 0.029 26.898 0.000 0.726 0.844 1.255
VfM3 0.731 −0.603 −0.343 0.041 17.706 0.000 0.634 0.801 1.357
VfM4 0.817 −0.491 −0.395 0.026 31.269 0.000 0.764 0.859 1.467
PeV1 0.717 −0.631 0.086 0.038 18.825 0.000 0.632 0.781 1.317
PeV2 0.825 −0.256 −0.719 0.026 32.013 0.000 0.767 0.866 1.744
PeV3 0.796 −0.316 −0.357 0.034 23.684 0.000 0.716 0.849 1.650
PeV4 0.738 −0.472 −0.278 0.040 18.498 0.000 0.649 0.803 1.532
MIS1 0.868 −0.691 −0.167 0.024 35.619 0.000 0.815 0.913 1.204
MIS2 0.810 −0.631 0.086 0.036 22.486 0.000 0.724 0.867 1.204
MIC1 0.865 −0.542 −0.269 0.019 45.564 0.000 0.825 0.895 1.419
MIC2 0.891 −0.525 −0.183 0.015 60.522 0.000 0.859 0.915 1.419
IINmH1 0.832 −0.778 0.216 0.024 34.074 0.000 0.772 0.869 1.232
IINmH2 0.861 −0.588 −0.051 0.016 52.608 0.000 0.822 0.886 1.232

Appendix B. Supplementary data

The following are the Supplementary data to this article:

Supplementary data 1
mmc1.docx (52KB, docx)

References

  1. Ahadzadeh A.S., Pahlevan Sharif S., Ong F.S., Khong K.W. Integrating health belief model and technology acceptance model: An investigation of health-related internet use. J. Med. Internet Res. 2015;17(2):e45. doi: 10.2196/jmir.3564. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Akter S., Ray P., D'Ambra J. Continuance of mHealth services at the bottom of the pyramid: The roles of service quality and trust. Electronic Markets. 2013;23(1):29–47. doi: 10.1007/s12525-012-0091-5. [DOI] [Google Scholar]
  3. Alam M.M.D., Alam M.Z., Rahman S.A., Taghizadeh S.K. Factors influencing mHealth adoption and its impact on mental well-being during COVID-19 pandemic: A SEM-ANN approach. J. Biomed. Inform. 2021;116 doi: 10.1016/j.jbi.2021.103722. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Alhalaseh L., Fayoumi H., Khalil B. The Health Belief Model in predicting healthcare workers' intention for influenza vaccine uptake in Jordan. Vaccine. 2020;38(46):7372–7378. doi: 10.1016/j.vaccine.2020.09.002. [DOI] [PubMed] [Google Scholar]
  5. Alsswey A., Al-Samarraie H. Elderly users’ acceptance of mHealth user interface (UI) design-based culture: The moderator role of age. J. Multimodal User Interfaces. 2020;14(1):49–59. doi: 10.1007/s12193-019-00307-w. [DOI] [Google Scholar]
  6. Altmann V., Gries M. Factors influencing the usage intention of mHealth apps : An Empirical Study on the example of Sweden [Student thesis. DiVA. 2017 [Google Scholar]
  7. Asadzadeh A., Kalankesh L.R. A scope of mobile health solutions in COVID-19 pandemics. Inf. Med. Unlocked. 2021;23 doi: 10.1016/j.imu.2021.100558. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Ataei P., Gholamrezai S., Movahedi R., Aliabadi V. An analysis of farmers’ intention to use green pesticides: The application of the extended theory of planned behavior and health belief model. J. Rural Studies. 2021;81:374–384. doi: 10.1016/j.jrurstud.2020.11.003. [DOI] [Google Scholar]
  9. Becker M.H. The health belief model and personal health behavior. Health Educ. Monogr. 1974;2:324–473. [Google Scholar]
  10. Bhattacherjee A. Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly. 2001;25(3):351–370. doi: 10.2307/3250921. [DOI] [Google Scholar]
  11. Brislin R.W. Back-translation for cross-cultural research. J. Cross Cult. Psychol. 1970;1(3):185–216. doi: 10.1177/135910457000100301. [DOI] [Google Scholar]
  12. C.C S., Prathap S.K. Continuance adoption of mobile-based payments in Covid-19 context: An integrated framework of health belief model and expectation confirmation model. Int. J. Pervasive Comput. Commun. 2020;16(4):351–369. doi: 10.1108/IJPCC-06-2020-0069. [DOI] [Google Scholar]
  13. Chen C.-F. Investigating structural relationships between service quality, perceived value, satisfaction, and behavioral intentions for air passengers: Evidence from Taiwan. Transport. Res. Part A: Policy Practice. 2008;42(4):709–717. [Google Scholar]
  14. Chiu J.L., Bool N.C., Chiu C.L. Challenges and factors influencing initial trust and behavioral intention to use mobile banking services in the Philippines. Asia Pacific J. Innov. Entrepren. 2017;11(2):246–278. doi: 10.1108/APJIE-08-2017-029. [DOI] [Google Scholar]
  15. Chiu W., Cho H., Chi C.G. Consumers’ continuance intention to use fitness and health apps: An integration of the expectation–confirmation model and investment model. Inform. Technol. People. 2020;34(3):978–998. doi: 10.1108/ITP-09-2019-0463. [DOI] [Google Scholar]
  16. Cohen J. Lawrence Erlbaum; Mahwah, NJ: 1988. Statistical Power Analysis for the Behavioral Sciences. Hillsdle. [Google Scholar]
  17. Daragmeh A., Sági J., Zéman Z. Continuous intention to use E-wallet in the context of the COVID-19 pandemic: Integrating the health belief model (HBM) and technology continuous theory (TCT) J. Open Innov. Technol. Market Complex. 2021;7(2):132. [Google Scholar]
  18. Dou K., Yu P., Deng N., Liu F., Guan YingPing, Li Z., Ji Y., Du N., Lu X., Duan H. Patients’ acceptance of smartphone health technology for chronic disease management: A theoretical model and empirical test. JMIR Mhealth Uhealth. 2017;5(12):e177. doi: 10.2196/mhealth.7886. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Duarte P., Pinho J.C. A mixed methods UTAUT2-based approach to assess mobile health adoption. J. Business Res. 2019;102:140–150. doi: 10.1016/j.jbusres.2019.05.022. [DOI] [Google Scholar]
  20. Dutta S., Smita M.K. The impact of COVID-19 pandemic on tertiary education in Bangladesh: Students’ perspectives. Open J. Soc. Sci. 2020;8(09):53–68. doi: 10.4236/jss.2020.89004. [DOI] [Google Scholar]
  21. Elbaz A.M., Haddoud M.Y., Shehawy Y.M. Nepotism, employees’ competencies and firm performance in the tourism sector: A dual multivariate and Qualitative Comparative Analysis approach. Tourism Manage. 2018;67:3–16. doi: 10.1016/j.tourman.2018.01.002. [DOI] [Google Scholar]
  22. Fathian-Dastgerdi Z., khoshgoftar M., Tavakoli B., Jaleh M. Factors associated with preventive behaviors of COVID-19 among adolescents: Applying the health belief model. Res. Soc. Administr. Pharm. 2021;17(10):1786–1790. doi: 10.1016/j.sapharm.2021.01.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Faul F., Erdfelder E., Buchner A., Lang A.-G. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav. Res. Methods. 2009;41(4):1149–1160. doi: 10.3758/BRM.41.4.1149. [DOI] [PubMed] [Google Scholar]
  24. Fiss P.C. Building better causal theories: A fuzzy set approach to typologies in organization research. Acad. Manag. J. 2011;54(2):393–420. [Google Scholar]
  25. Fornell C., Larcker D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981;18(1):39–50. doi: 10.2307/3151312. [DOI] [Google Scholar]
  26. Fujibayashi K., Takahashi H., Tanei M., Uehara Y., Yokokawa H., Naito T. A new influenza-tracking smartphone app (Flu-Report) based on a self-administered questionnaire: Cross-sectional study. JMIR Mhealth Uhealth. 2018;6(6) doi: 10.2196/mhealth.9834. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. García L.Y., Cerda A.A. Contingent assessment of the COVID-19 vaccine. Vaccine. 2020;38(34):5424–5429. doi: 10.1016/j.vaccine.2020.06.068. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Green E.C., Murphy E.M., Gryboski K. In: The Wiley Encyclopedia of Health Psychology. Paul R.H., Salminen L.E., Heaps J., Cohen L.M., editors. Wiley; 2020. The Health Belief Model; pp. 211–214. [DOI] [Google Scholar]
  29. Guo J., Liu N., Wu Y., Zhang C. Why do citizens participate on government social media accounts during crises? A civic voluntarism perspective. Information & Management. 2021;58(1) doi: 10.1016/j.im.2020.103286. [DOI] [Google Scholar]
  30. Hair J.F., Anderson R.E., Babin B.J., Black W.C. Pearson; Upper Saddle River, NJ: 2010. Multivariate data analysis: A global perspective. [Google Scholar]
  31. Hair J., Sarstedt M., Ringle C., Gudergan S. Thousand Oaks, CA 91320; SAGE Publications Inc.; 2017. Advanced Issues in Partial Least Squares Structural Equation Modeling. [Google Scholar]
  32. Hair J.F., Howard M.C., Nitzl C. Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. J. Business Res. 2020;109:101–110. doi: 10.1016/j.jbusres.2019.11.069. [DOI] [Google Scholar]
  33. Hampshire K., Porter G., Owusu S.A., Mariwah S., Abane A., Robson E., Munthali A., DeLannoy A., Bango A., Gunguluza N., Milner J. Informal m-health: How are young people using mobile phones to bridge healthcare gaps in Sub-Saharan Africa? Soc. Sci. Med. 2015;142:90–99. doi: 10.1016/j.socscimed.2015.07.033. [DOI] [PubMed] [Google Scholar]
  34. Hasan N. A Methodological Approach for Predicting COVID-19 Epidemic Using EEMD-ANN Hybrid Model. Internet of Things. 2020;11 doi: 10.1016/j.iot.2020.100228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Hasan N., Bao Y. Comparing different feature selection algorithms for cardiovascular disease prediction. Health Technol. 2021;11(1):49–62. doi: 10.1007/s12553-020-00499-2. [DOI] [Google Scholar]
  36. Henseler J., Ringle C.M., Sarstedt M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015;43(1):115–135. doi: 10.1007/s11747-014-0403-8. [DOI] [Google Scholar]
  37. Hoque R., Sorwar G. Understanding factors influencing the adoption of mHealth by the elderly: An extension of the UTAUT model. Int. J. Med. Inf. 2017;101:75–84. doi: 10.1016/j.ijmedinf.2017.02.002. [DOI] [PubMed] [Google Scholar]
  38. Hsu C.-L., Lin J.-C.-C. What drives purchase intention for paid mobile apps? – An expectation confirmation model with perceived value. Electron. Commer. Res. Appl. 2015;14(1):46–57. doi: 10.1016/j.elerap.2014.11.003. [DOI] [Google Scholar]
  39. Hu L.-T., Bentler P.M. Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychol. Methods. 1998;3(4):424–453. [Google Scholar]
  40. Hu Z., Bao Y., Xiong T., Chiong R. Hybrid filter-wrapper feature selection for short-term load forecasting. Eng. Appl. Artif. Intell. 2015;40:17–27. [Google Scholar]
  41. Huang X., Dai S., Xu H. Predicting tourists' health risk preventative behaviour and travelling satisfaction in Tibet: Combining the theory of planned behaviour and health belief model. Tourism Management Perspectives. 2020;33 doi: 10.1016/j.tmp.2019.100589. [DOI] [Google Scholar]
  42. Index, S.G., 2021. Speedtest Global Index. Retrieved May 26 from https://www.speedtest.net/global-index.
  43. Iqbal M.S., Khan S.-U.-D., Iqbal M.Z. University Students’ Perception of Ebola Virus Disease. Journal of Pharmaceutical Research International. 2020;32(34):132–140. doi: 10.9734/jpri/2020/v32i3430989. [DOI] [Google Scholar]
  44. Islam M.S., Sujan M.S.H., Tasnim R., Ferdous M.Z., Masud J.H.B., Kundu S., Mosaddek A.S.M., Choudhuri M.S.K., Kircaburun K., Griffiths M.D. Problematic internet use among young and adult population in Bangladesh: Correlates with lifestyle and online activities during the COVID-19 pandemic. Addict. Behav. Rep. 2020;12:100311. doi: 10.1016/j.abrep.2020.100311. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Janz N.K., Becker M.H. The Health Belief Model: A Decade Later. Health Educ. Q. 1984;11(1):1–47. doi: 10.1177/109019818401100101. [DOI] [PubMed] [Google Scholar]
  46. Joo S., Choi N. Understanding users’ continuance intention to use online library resources based on an extended expectation-confirmation model. The Electronic Library. 2016;34(4):554–571. doi: 10.1108/EL-02-2015-0033. [DOI] [Google Scholar]
  47. Julinawati S., Cawley D., Domegan C., Brenner M., Rowan N.J. A review of the perceived barriers within the health belief model on PAP smear screening as a cervical cancer prevention measure. Journal of Asian Scientific Research. 2013;3(6):677–692. [Google Scholar]
  48. Kathuria-Prakash N., Moser D.K., Alshurafa N., Watson K., Eastwood J.A. Young African American women’s participation in an m-Health study in cardiovascular risk reduction: Feasibility, benefits, and barriers. Eur. J. Cardiovasc. Nurs. 2019;18(7):569–576. doi: 10.1177/1474515119850009. [DOI] [PubMed] [Google Scholar]
  49. Kaya B., Abubakar A.M., Behravesh E., Yildiz H., Mert I.S. Antecedents of innovative performance: Findings from PLS-SEM and fuzzy sets (fsQCA) Journal of Business Research. 2020;114:278–289. doi: 10.1016/j.jbusres.2020.04.016. [DOI] [Google Scholar]
  50. Kock N. Common method bias in PLS-SEM: A full collinearity assessment approach. Internat. J. e-Collab. (ijec) 2015;11(4):1–10. [Google Scholar]
  51. Lee V.-H., Hew J.-J., Leong L.-Y., Tan G.-W.-H., Ooi K.-B. Wearable payment: A deep learning-based dual-stage SEM-ANN analysis. Expert Syst. Appl. 2020;157 doi: 10.1016/j.eswa.2020.113477. [DOI] [Google Scholar]
  52. Leong L.-Y., Hew T.-S., Ooi K.-B., Lee V.-H., Hew J.-J. A hybrid SEM-neural network analysis of social media addiction. Expert Syst. Appl. 2019;133:296–316. doi: 10.1016/j.eswa.2019.05.024. [DOI] [Google Scholar]
  53. Leung L., Chen C. E-health/m-health adoption and lifestyle improvements: Exploring the roles of technology readiness, the expectation-confirmation model, and health-related information activities. Telecommunications Policy. 2019;43(6):563–575. doi: 10.1016/j.telpol.2019.01.005. [DOI] [Google Scholar]
  54. Mao K., Zhang H., Yang Z. An integrated biosensor system with mobile health and wastewater-based epidemiology (iBMW) for COVID-19 pandemic. Biosens. Bioelectron. 2020;169 doi: 10.1016/j.bios.2020.112617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. McKellar K., Sillence E. In: Teenagers, Sexual Health Information and the Digital Age. McKellar K., Sillence E., editors. Academic Press; 2020. Chapter 2 - Current Research on Sexual Health and Teenagers; pp. 5–23. [Google Scholar]
  56. Mezzatesta S., Torino C., Meo P.D., Fiumara G., Vilasi A. A machine learning-based approach for predicting the outbreak of cardiovascular diseases in patients on dialysis. Comput. Methods Programs Biomed. 2019;177:9–15. doi: 10.1016/j.cmpb.2019.05.005. [DOI] [PubMed] [Google Scholar]
  57. Mou J., Shin D.-H., Cohen J. Health beliefs and the valence framework in health information seeking behaviors. Information Technology & People. 2016;29(4):876–900. doi: 10.1108/ITP-06-2015-0140. [DOI] [Google Scholar]
  58. Murnane E.L., Huffaker D., Kossinets G. Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan. 2015. Mobile health apps: adoption, adherence, and abandonment Adjunct. [DOI] [Google Scholar]
  59. Nachega J.B., Leisegang R., Kallay O., Mills E.J., Zumla A., Lester R.T. Mobile health technology for enhancing the COVID-19 response in Africa: A potential game changer? Am. J. Trop. Med. Hygiene. 2020;103(1):3–5. doi: 10.4269/ajtmh.20-0506. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Nembrini S., Ceretti E., Gelatti U., Castaldi S., Schulz P.J., Levaggi R., Auxilia F., Covolo L. Willingness to pay for risky lifestyles: Results from the Pay for Others (PAY4O) study, Italy. Public Health. 2020;182:179–184. doi: 10.1016/j.puhe.2020.01.022. [DOI] [PubMed] [Google Scholar]
  61. Nethananthan S., Shanmugathas D., Shivany M. Exploring the Factors influencing Adoption of Internet Banking in Jaffna District. Internat. J. Recent Sci. Res. 2018;9(4):26404–26415. [Google Scholar]
  62. Nikolaou C.K., Tay Z., Leu J., Rebello S.A., Te Morenga L., Van Dam R.M., Lean M.E.J. Young People’s Attitudes and Motivations Toward Social Media and Mobile Apps for Weight Control: Mixed Methods Study. JMIR Mhealth Uhealth. 2019;7(10) doi: 10.2196/11205. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Oliver R.L. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Mark. Res. 1980;17(4):460–469. doi: 10.1177/002224378001700405. [DOI] [Google Scholar]
  64. Olya H.G.T., Altinay L. Asymmetric modeling of intention to purchase tourism weather insurance and loyalty. J. Business Res. 2016;69(8):2791–2800. doi: 10.1016/j.jbusres.2015.11.015. [DOI] [Google Scholar]
  65. Ooi K.-B., Lee V.-H., Tan G.-W.-H., Hew T.-S., Hew J.-J. Cloud computing in manufacturing: The next industrial revolution in Malaysia? Expert Syst. Appl. 2018;93:376–394. doi: 10.1016/j.eswa.2017.10.009. [DOI] [Google Scholar]
  66. Pappas I.O., Kourouthanassis P.E., Giannakos M.N., Lekakos G. The interplay of online shopping motivations and experiential factors on personalized e-commerce: A complexity theory approach. Telematics Inform. 2017;34(5):730–742. doi: 10.1016/j.tele.2016.08.021. [DOI] [Google Scholar]
  67. Pappas I.O., Woodside A.G. Fuzzy-set Qualitative Comparative Analysis (fsQCA): Guidelines for research practice in Information Systems and marketing. Int. J. Inf. Manage. 2021;58 doi: 10.1016/j.ijinfomgt.2021.102310. [DOI] [Google Scholar]
  68. Park E. User acceptance of smart wearable devices: An expectation-confirmation model approach. Telematics Inform. 2020;47 doi: 10.1016/j.tele.2019.101318. [DOI] [Google Scholar]
  69. Pee L.G., Jiang J., Klein G. Signaling effect of website usability on repurchase intention. Int. J. Inf. Manage. 2018;39:228–241. [Google Scholar]
  70. Pisitsankkhakarn R., Vassanadumrongdee S. Enhancing purchase intention in circular economy: An empirical evidence of remanufactured automotive product in Thailand. Resour. Conserv. Recy. 2020;156 doi: 10.1016/j.resconrec.2020.104702. [DOI] [Google Scholar]
  71. Podsakoff P.M., MacKenzie S.B., Lee J.-Y., Podsakoff N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003;88(5):879–903. doi: 10.1037/0021-9010.88.5.879. [DOI] [PubMed] [Google Scholar]
  72. Provost F., Fawcett T. In: Proc of the 3rd International Conference on Knowledge Discovery and Data Mining. 1997. Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions. [Google Scholar]
  73. Puspita R.C., Tamtomo D., Indarto D. Health belief model for the analysis of factors affecting hypertension preventive behavior among adolescents in Surakarta. Journal of Health Promotion and Behavior. 2017;02(02):183–196. [Google Scholar]
  74. Ragin C.C. Fuzzy-set social science. University of Chicago Press. 2000 doi: 10.1017/S0022381607080309. [DOI] [Google Scholar]
  75. Ragin C.C. University of Chicago Press; 2009. Redesigning social inquiry: Fuzzy sets and beyond. [Google Scholar]
  76. Rajaguru R. Role of value for money and service quality on behavioural intention: A study of full service and low cost airlines. J. Air Transport Manage. 2016;53:114–122. doi: 10.1016/j.jairtraman.2016.02.008. [DOI] [Google Scholar]
  77. Rehman Khan S.A., Yu Z. Assessing the eco-environmental performance: An PLS-SEM approach with practice-based view. Internat. J. Log. Res. Appl. 2020:1–19. [Google Scholar]
  78. Reychav I., Arora A., Sabherwal R., Polyak K., Sun J., Azuri J. Reporting health data in waiting rooms with mobile technology: Patient expectation and confirmation. Int. J. Med. Inf. 2021;148 doi: 10.1016/j.ijmedinf.2021.104376. [DOI] [PubMed] [Google Scholar]
  79. Ringle C.M., Sarstedt M. Gain more insight from your PLS-SEM results. Ind. Manage. Data Sys. 2016;116(9):1865–1886. doi: 10.1108/IMDS-10-2015-0449. [DOI] [Google Scholar]
  80. Rodriguez-Valero N., Luengo Oroz M., Cuadrado Sanchez D., Vladimirov A., Espriu M., Vera I., Sanz S., Gonzalez Moreno J.L., Muñoz J., Ledesma Carbayo M.J., Lau E.HY. Mobile based surveillance platform for detecting Zika virus among Spanish Delegates attending the Rio de Janeiro Olympic Games. PLoS ONE. 2018;13(8):e0201943. doi: 10.1371/journal.pone.0201943. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Roig-Tierno N., Gonzalez-Cruz T.F., Llopis-Martinez J. An overview of qualitative comparative analysis: A bibliometric analysis. J. Innovaion Knowl. 2017;2(1):15–23. doi: 10.1016/j.jik.2016.12.002. [DOI] [Google Scholar]
  82. Rosenstock I.M., Strecher V.J., Becker M.H. Social Learning Theory and the Health Belief Model. Health Educ. Q. 1988;15(2):175–183. doi: 10.1177/109019818801500203. [DOI] [PubMed] [Google Scholar]
  83. Santos A.J., Kislaya I., Machado A., Nunes B. Beliefs and attitudes towards the influenza vaccine in high-risk individuals. Epidemiol. Infect. 2017;145(9):1786–1796. doi: 10.1017/S0950268817000814. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Shammi M., Bodrud-Doza M., Towfiqul Islam A.R.M., Rahman M.M. COVID-19 pandemic, socioeconomic crisis and human stress in resource-limited settings: A case from Bangladesh. Heliyon. 2020;6(5) doi: 10.1016/j.heliyon.2020.e04063. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Shang D., Wu W. Understanding mobile shopping consumers’ continuance intention. Industrial Management & Data Systems. 2017;117(1):213–227. doi: 10.1108/IMDS-02-2016-0052. [DOI] [Google Scholar]
  86. Shang L., Zhou J., Zuo M. Understanding older adults' intention to share health information on social media: the role of health belief and information processing. Internet Res. 2021;31(1):100–122. doi: 10.1108/INTR-12-2019-0512. [DOI] [Google Scholar]
  87. Sharpe E.E., Karasouli E., Meyer C. Examining Factors of Engagement With Digital Interventions for Weight Management: Rapid Review. JMIR Res Protoc. 2017;6(10) doi: 10.2196/resprot.6059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Shirato K., Nao N., Matsuyama S., Kageyama T. Ultra-rapid real-time RT-PCR method for detecting middle east respiratory syndrome coronavirus using a mobile PCR Device, PCR1100. Japan. J. Infect. Dis. 2020;73(3):181–186. doi: 10.7883/yoken.JJID.2019.400. [DOI] [PubMed] [Google Scholar]
  89. Sittig S., Hauff C., Graves R.J., Williams S.G., McDermott R.C., Fruh S., Hall H., Campbell M., Swanzy D., Wright T., Hudson G.M. Characteristics of and factors influencing college nursing students' willingness to Utilize mHealth for health promotion. Computers, informatics, nursing : CIN. 2020;38(5):246–255. doi: 10.1097/CIN.0000000000000600. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Slater H., Campbell J.M., Stinson J.N., Burley M.M., Briggs A.M. End user and implementer experiences of mHealth technologies for noncommunicable chronic disease management in young adults: systematic review. J Med Internet Res. 2017;19(12) doi: 10.2196/jmir.8888. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Sobol I.M. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math. Comput. Simul. 2001;55(1):271–280. doi: 10.1016/S0378-4754(00)00270-6. [DOI] [Google Scholar]
  92. Somers C., Grieve E., Lennon M., Bouamrane M.-M., Mair F.S., McIntosh E. Valuing mobile health: an openended contingent valuation survey of a national digital health program. JMIR mHealth uHealth. 2019;7(1) doi: 10.2196/mhealth.9990. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Sweeney J.C., Soutar G.N. Consumer perceived value: The development of a multiple item scale. J. Retail. 2001;77(2):203–220. doi: 10.1016/S0022-4359(01)00041-0. [DOI] [Google Scholar]
  94. Syarif I., Prugel-Bennett A., Wills G. SVM parameter optimization using grid search and genetic algorithm to improve classification performance. Telkomnika. 2016;14(4):1502. [Google Scholar]
  95. Talukder M.S., Chiong R., Bao Y., Malik B.H. Acceptance and use predictors of fitness wearable technology and intention to recommend: An empirical study. Ind. Manage. Data Syst. 2019;119(1):170–188. doi: 10.1108/IMDS-01-2018-0009. [DOI] [Google Scholar]
  96. Talukder M.S., Sorwar G., Bao Y., Ahmed J.U., Palash M.A.S. Predicting antecedents of wearable healthcare technology acceptance by elderly: A combined SEM-Neural Network approach. Technol. Forecast. Soc. Chang. 2020;150 doi: 10.1016/j.techfore.2019.119793. [DOI] [Google Scholar]
  97. Tam C., Santos D., Oliveira T. Exploring the influential factors of continuance intention to use mobile Apps: Extending the expectation confirmation model. Information Systems Frontiers. 2020;22(1):243–257. [Google Scholar]
  98. To W.-M., Lee P.K.C., Lu J., Wang J., Yang Y., Yu Q. What Motivates Chinese Young Adults to Use mHealth? Healthcare. 2019;7(4):156. doi: 10.3390/healthcare7040156. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Veeramootoo N., Nunkoo R., Dwivedi Y.K. What determines success of an e-government service? Validation of an integrative model of e-filing continuance usage. Gov. Inform. Q. 2018;35(2):161–174. doi: 10.1016/j.giq.2018.03.004. [DOI] [Google Scholar]
  100. Venkatesh V., Goyal S. Expectation disconfirmation and technology adoption: Polynomial modeling and response surface analysis. MIS Quarterly. 2010;34(2):281–303. doi: 10.2307/20721428. [DOI] [Google Scholar]
  101. Venkatesh V., Thong J.Y.L., Chan F.K.Y., Hu P.J.H., Brown S.A. Extending the two-stage information systems continuance model: Incorporating UTAUT predictors and the role of context. Information Systems Journal. 2011;21(6):527–555. doi: 10.1111/j.1365-2575.2011.00373.x. [DOI] [Google Scholar]
  102. Wang M., Huang L., Pan C., Bai L. Adopt proper food-handling intention: An application of the health belief model. Food Control. 2021;127 doi: 10.1016/j.foodcont.2021.108169. [DOI] [Google Scholar]
  103. Webster P. Virtual health care in the era of COVID-19. The Lancet. 2020;395(10231):1180–1181. doi: 10.1016/S0140-6736(20)30818-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Wei J., Vinnikova A., Lu L., Xu J. Understanding and Predicting the Adoption of Fitness Mobile Apps: Evidence from China. Health Communication. 2020;36(8):950–961. doi: 10.1080/10410236.2020.1724637. [DOI] [PubMed] [Google Scholar]
  105. Wong L.P., Alias H., Wong P.-F., Lee H.Y., AbuBakar S. The use of the health belief model to assess predictors of intent to receive the COVID-19 vaccine and willingness to pay. Human Vaccines & Immunotherapeutics. 2020;16(9):2204–2214. doi: 10.1080/21645515.2020.1790279. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Wong M.C.S., Wong E.L.Y., Huang J., Cheung A.W.L., Law K., Chong M.K.C., Ng R.W.Y., Lai C.K.C., Boon S.S., Lau J.T.F., Chen Z., Chan P.K.S. Acceptance of the COVID-19 vaccine based on the health belief model: A population-based survey in Hong Kong. Vaccine. 2021;39(7):1148–1156. doi: 10.1016/j.vaccine.2020.12.083. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Woodside A.G., Zhang M. Identifying X-Consumers Using Causal Recipes: “Whales” and “Jumbo Shrimps” Casino Gamblers [Article] J. Gambl. Stud. 2012;28(1):13–26. doi: 10.1007/s10899-011-9241-5. [DOI] [PubMed] [Google Scholar]
  108. Wu I.-L., Chiu M.-L., Chen K.-W. Defining the determinants of online impulse buying through a shopping process of integrating perceived risk, expectation-confirmation model, and flow theory issues. Int. J. Inf. Manage. 2020;52 doi: 10.1016/j.ijinfomgt.2020.102099. [DOI] [Google Scholar]
  109. Zhang X.Y., Trame M.N., Lesko L.J., Schmidt S. Sobol Sensitivity Analysis: A Tool to Guide the Development and Evaluation of Systems Pharmacology Models [https://doi.org/10.1002/psp4.6]. CPT: Pharmacometrics & Systems. Pharmacology. 2015;4(2):69-79. doi: 10.1002/psp4.6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  110. Zhou T., Law K.M.Y., Yung K.L. An empirical analysis of intention of use for bike-sharing system in China through machine learning techniques. Enterprise Inform. Syst. 2021;15(6):829–850. doi: 10.1080/17517575.2020.1758796. [DOI] [Google Scholar]
