ABSTRACT
Objective
The aim of the study was to develop the “Digital Amnesia Scale Adolescent Form (DAADF)” to assess digital amnesia in adolescents aged 12–18 years and to examine its psychometric characteristics.
Methods
The study reached 625 adolescents aged 12–18 who use digital devices such as smartphones, smartwatches, tablets, computers, and televisions. The development process of the scale consisted of content validity, construct validity, convergent and discriminant validity, reliability studies, and validation of the obtained structure. Exploratory and confirmatory factor analyses were conducted to determine construct validity, and a 27% lower‐upper group comparison was conducted to determine internal validity. The linguistically and psychometrically validated scale consists of 9 items and uses a 5‐point Likert‐type response format.
Results
The developed scale consists of 2 sub‐dimensions and 9 items and explains 63.1% of the total variance. The total score is obtained by summing the sub‐dimension scores and ranges from 0 to 36; the higher the score, the higher the level of digital amnesia. The Cronbach α reliability coefficient of the scale was calculated as 0.84, indicating high reliability.
Conclusion
The developed “Digital Amnesia Scale Adolescent Form (DAADF)” is valid in terms of scope and content and has high reliability.
Keywords: digital amnesia, Digital Amnesia Scale Adolescent Form, psychometric characteristics, reliability and validity, Turkish
1. Introduction
Digital technologies refer to technological systems that are used to process, store, transmit or present digital data. The frequent use of digital technologies such as smartphones, online applications and social media in daily life has led to increased attention in the literature on the concept of “Digital Amnesia,” which describes attention and memory problems in cognitive areas (Domingues‐Montanari 2017; Hermawati et al. 2018; Kanbay et al. 2025; Laconi et al. 2017; Swaminathan 2020). The term “Digital Amnesia” was coined by the cybersecurity firm Kaspersky and defined as “the experience of forgetting information stored on a digital device that you trust to remember” (Kaspersky Lab 2015). Digital amnesia is a concept that describes memory problems that occur as a result of the intensive use of digital devices and individuals relying on digital devices instead of remembering information (Kanbay et al. 2025; Sparrow et al. 2011; Sweller 1988; Yadav 2019).
It is known that digital technologies such as smartphones and social media are widely used by adolescents between the ages of 12 and 18. Adolescence is a period of significant identity development, the search for independence and the establishment of social relationships. The use of digital technology has been shown to have both positive and negative effects on adolescents. While the advantages of digital technology in terms of easy access to information, self‐directed learning and creativity development are well‐documented, there are also potential negative effects on the quality of life of adolescents in areas such as time management, self‐perception, cognitive processes and social skill development (Cao and Su 2007; Daniyal et al. 2022; Li et al. 2021; Musa and Ishak 2020; Weinstein et al. 2015).
Adolescents are currently in a critical period of cognitive development when memory, attention and learning skills are developing rapidly. Frequent use of digital technologies such as smartphones in daily life has been shown to cause problems with cognitive functioning in adolescents, particularly in the areas of attention and memory. Research indicates that students may be storing less information in long‐term memory, possibly because they are using digital devices to store course materials and relying on these devices instead of taking notes (Baron et al. 2017; Mueller and Oppenheimer 2014; Uysal and Balci 2018). The use of digital technology devices has been shown to increase reliance on short‐term memory by reducing the cognitive load associated with information recall and processing. This has the effect of causing individuals to rely on digital devices instead of actively recalling information, which prevents the effective use of long‐term memory (Sparrow et al. 2011; Storm and Stone 2015). This has a negative impact on active information processing and repetition, which are critical to the learning process. Research has shown that handwritten note‐taking behaviour supports students’ information processing and memory storage processes, highlighting that the use of digital technology devices can negatively affect these processes (Mueller and Oppenheimer 2014). A multinational study revealed that 92% of students had better attention spans when reading printed materials, which were also more effective in learning processes due to the students’ tendency to reread these materials more often than digital materials. The same study also found that materials read on digital devices were less effective in attention, memory and learning than printed materials because more multitasking was done on the screen at the same time (Baron et al. 2017).
Adolescents’ frequent use of digital technology has been shown to have a negative impact not only on their cognitive functioning, but also on their life skills, such as academic performance, mood regulation, and social relationships (Carr 2010; Greenfield 2009; Prensky 2012). These changes can potentially lead to the development of comorbid psychiatric disorders, such as attention deficit and hyperactivity, depression, and anxiety, over time (Cao and Su 2007; Domingues‐Montanari 2017; Girela‐Serrano et al. 2022; Hermawati et al. 2018; Laconi et al. 2017; Li et al. 2021; Weinstein et al. 2015). These findings highlight the importance of studying the impact of digital amnesia on adolescents. Consequently, there is a necessity to increase the limited number of scientific studies on this topic and to develop measurement tools that define digital amnesia (Lodha 2019; Neelima and Sunder 2019). To address this need in the literature, this study will create the Digital Amnesia Scale Adolescent Form (DAADF) and examine its psychometric properties.
The DAADF has been developed to assess the symptoms of digital amnesia in adolescents between the ages of 12 and 18. The scale assesses the impact of adolescents’ memory use and storage habits on their cognitive performance. The DAADF can be used as an important tool for assessing cognitive functioning in the information age and can help adolescents maintain their cognitive health by identifying digital amnesia status. The scale does not measure digital addiction, but rather identifies the level of cognitive load that may occur in adolescents as a result of digital technology use. Protecting adolescents’ cognitive health has important implications for their more effective learning, healthy mood regulation, and social relationships. Measurements taken with the DAADF can be used to plan effective interventions to protect adolescents’ cognitive health, and the effectiveness of these interventions can be evaluated. Given this situation, it is believed that the DAADF measurement tool will provide a basis for protecting and improving cognitive health by assessing digital amnesia in adolescents.
2. Methods
The aim of the present study was to develop the Digital Amnesia Scale Adolescent Form‐DAADF that can be applied to adolescents between the ages of 12 and 18 and to examine its psychometric characteristics. The study is a scale development study. The study had a methodological design because scale development studies require factor analytic techniques and a certain methodology (DeVellis 2014). To develop the DAADF scale in Turkish culture, ethics committee permission was obtained from the Uskudar University Non‐Interventional Research Ethics Committee (29.08.2024/61351342/020‐311). Written informed consent was obtained from the adolescents and their parents who agreed to participate in the study.
The development process of the scale consisted of two stages: in the first stage, the conceptual structure was addressed, and in the second stage, the psychometric evaluation of the scale was conducted.
2.1. Creating the Conceptual Structure
The conceptual structure of the scale was formed by researching the relevant literature and conducting interviews, creating the item pool, content validity, preparing a draft form and pilot implementation.
Literature Review and Interviews: To address the structure of the concept, studies that investigated concepts such as adolescents’ use of digital devices, access to information, and habits of remembering and storing information were reviewed. In the evaluations, it was found that the digital amnesia concept consisted of elements such as personal information and habits, information and remembering, use of digital devices, addiction to digital devices, daily life and technology, easy access to digital tools, digital storage, continuity of digital stimuli, memory weakness, concentration problems, and learning difficulties.
In addition, while creating the item pool in accordance with the aim of the study, and after reviewing the relevant literature and obtaining parental permission, interviews were conducted with 8 adolescents from the 12–18 age group. These interviews aimed to characterize the adolescents’ digital device usage habits, information recall and new information learning behaviors, digital device use for information storage, screen time, and experiences and thoughts about physical and social interactions. The results obtained from the interviews were discussed by the researchers in light of the relevant literature, and the item pool was formed.
Creation of the Item Pool: Information describing the cognitive, psychological and social dimensions of digital amnesia was collected by utilizing studies on digital amnesia. The opinions of experts in the fields of cognitive development, educational psychology and digital technology played a critical role in the creation of the item pool. Interviews with adolescents aged 12–18 were conducted to understand their digital device usage habits and the effects of this usage on their daily lives. The combination of these processes resulted in an item pool of 19 statements for the digital amnesia scale.
Content Validity: The Lawshe Technique was used in the present study for the content validity of the measurement tool planned to be developed (Gilbert and Prion 2016). To this end, the item pool created was presented to the evaluation of a total of 9 experts and academicians in the fields of psychiatry, psychology, and psychiatric nursing. Finally, the Turkish Language and Literature expert opinion was also added to the content validity evaluation.
Pilot Implementation: When conducting a pilot implementation, it is important to select a representative sample. The size of the sample for piloting is usually determined by the complexity of the scale and the diversity of the target population (Hair et al. 2010). The sample size for this step is usually between 30 and 50 participants, but this number may vary depending on the characteristics of the scale and the nature of the issue being tested. Therefore, in this study, a sample of 30 participants was selected to represent the target population. During the piloting process, participants were asked to provide feedback on any difficulties they had in completing the scale or any items they did not understand. After the piloting process, the feedback provided on the content, comprehensibility, item appropriateness, and overall usability of the scale was evaluated (DeVellis 2014). In addition, statistical measures such as consistency among the items of the scale, item‐total correlations, and the relationship between the items and the total score were also examined (Hair et al. 2010). As a result of the evaluations, a 19‐item “Digital Amnesia Scale Adolescent Draft Form (DAADF)” was created.
2.2. Examining the Psychometric Characteristics of the Scale
Examining the psychometric characteristics of the DAADF draft form created consisted of the stages of determining the population and sample, collecting data with the determined sample, and statistical analysis of the data.
2.2.1. Population and Sample
This cross‐sectional study was conducted on 625 participants in Turkey from September 1, 2024, to January 1, 2025. The participants were adolescents between the ages of 12 and 18, recruited via social media groups (e.g., Facebook and Instagram) and WhatsApp messaging groups using the snowball sampling method. Because this was a scale development study, factor analysis techniques were required: Exploratory Factor Analysis (EFA) was first conducted to uncover the structure, and Confirmatory Factor Analysis (CFA) was then used to verify the resulting structure. For this reason, two separate samples with similar characteristics were needed. Reaching a sufficient sample size is important in scale development studies. Generally, the sample size for factor analysis must be at least 5 times the number of items, although a larger sample may be needed in some cases; for example, Comrey and Lee (1992) reported that a sample of at least 300 people would be appropriate for a good factor analysis. In the present study, 625 adolescents aged 12–18 were reached: 321 for EFA and 304 for CFA. In the EFA sample, 66% were female, the mean age was 16.9 ± 1.5 years, and the average daily digital device usage time was 4.7 ± 2.3 h. In the CFA sample, 65.8% were female, the mean age was 16.8 ± 1.6 years, and the average daily digital device usage time was 4.5 ± 2.2 h.
To reach adolescents from all geographical regions of Turkey, the snowball method was used to recruit participants residing not only in metropolitan cities (62%) but also in towns (38%). Education in Turkey is provided by both public schools, which are free, and private schools, which usually charge tuition fees. To ensure socioeconomic and sociocultural diversity among adolescents, participants attending both public (71%) and private (29%) schools were reached.
Inclusion Criteria
- Using digital devices (e.g., smartphones, tablets, computers, televisions)
- Being between the ages of 12 and 18
- Agreeing to participate in the study
- Providing written parental consent for participation in the study
- Not being medically diagnosed with a psychiatric disorder
Exclusion Criteria
- Not using digital devices (e.g., smartphones, tablets, computers, televisions)
- Being under 12 years old or over 18 years old
- Refusing to participate in the study
- Parental disapproval of participation in the study
- Having a medical diagnosis of a psychiatric disorder
2.2.2. Data Collection
In the study, data were collected via Google Forms, an online tool that facilitates reaching a wide audience and allows participants to fill out the survey in their own time. In this way, participants from all geographical regions of Turkey were reached. Data collection was performed over Google Forms in two stages. In the first stage, the link created through Google Forms was sent to the participants via online and social media accounts, and participation was voluntary and subject to parental permission. The survey link was directed to 681 adolescents; 56 refused to participate, and 625 completed the online survey. The survey could be opened only once per account, which prevented repeated entries from the same account. Before answering the questions in the data collection forms, adolescents were asked about the inclusion criteria; if an adolescent did not meet these criteria, the system did not allow the form to be filled out. In the second stage, data were collected from participants who were not included in the first stage to verify the structure of the scale. Participants completed the questionnaire at times and in environments of their own choosing, which offers advantages such as eliminating the observer effect (Reynolds et al. 2007). A mandatory response was required for each question: when an adolescent skipped a question, the system gave a warning, and the questionnaire could be saved only after all questions were answered. In this way, all questionnaires were fully completed. Adolescents who did not want to continue could withdraw by selecting the end‐survey option; however, no adolescents chose to withdraw from the study.
2.2.3. Data Collection Tools
To determine the descriptive characteristics of the adolescents, the “Descriptive Information Form” and the “Digital Amnesia Scale Adolescent Form (DAADF) Draft” were used in the study. The Descriptive Information Form included questions on the participants’ age, gender, education, location, and duration of digital device use. The DAADF Draft consisted of 19 items, used a 5‐point Likert‐type format, and was scored as 0 = Never, 1 = Rarely, 2 = Sometimes, 3 = Often, 4 = Always.
2.2.4. Evaluation of Data
The SPSS 26 and AMOS 23 package programs were used in the data analysis. The construct validity of the scale was examined with Exploratory Factor Analysis (EFA), and the verification of the structure was done with Confirmatory Factor Analysis (CFA). Average Variance Extracted (AVE) and Composite Reliability (CR) values were calculated for convergent and discriminant validity. For the reliability of the scale, the Cronbach α coefficient and split‐half reliability were used.
For EFA, “Principal Components Analysis” was used, and the “Direct Oblimin Technique” from the “Oblique Rotation” techniques was used as the factor rotation technique based on the assumption that the factors were associated with each other. To verify the structure, CFA was performed and goodness‐of‐fit indices were examined. For the goodness‐of‐fit indices, 0.05 < RMSEA < 0.10; 0.90 ≤ CFI ≤ 0.95; 0.90 ≤ GFI ≤ 0.95; AGFI > 0.90; and 0.90 ≤ NFI ≤ 0.95 were taken as acceptable limits (Meydan and Şeşen 2011; Schermelleh‐Engel and Moosbrugger 2003; Wang and Wang 2012). The AVE values of the items must be 0.50 and above to ensure convergent validity (Bagozzi and Yi 1988), and the CR value must be 0.70 and above (Hair et al. 2010). It is also recommended that the Cronbach α reliability coefficient be 0.70 and above (DeVellis 2014). For this reason, it was planned to accept that the scale had sufficient reliability if Cronbach α was > 0.70.
3. Results
The results regarding the validity, reliability, verification of the structure, and convergent and discriminant validity of the scale are included in this section.
1. Validity: Validity expresses the extent to which a measurement tool accurately and completely measures the concept it aims to measure in line with its purpose. In the present study, the validity of the scale was evaluated with content validity, construct validity, and internal validity.
Content Validity: The Lawshe Technique was used for content validity, which provides a systematic approach to evaluate the content validity of scale items in scale development and validity studies and offers an objective evaluation based on expert opinions (Büyüköztürk 2010; Lawshe 1975; Polit and Beck 2006). In line with the Lawshe Technique, the Content Validity Ratio (CVR) and Content Validity Index (CVI) were calculated. Based on the results, the CVR values of items I4, I5, I9, and I10 were below the 0.80 critical value calculated for 10 experts, and these items were removed from the study. The CVR values of the remaining items varied between 0.80 and 1. The CVI value calculated for the remaining 15 items in the scale was 0.92. It was concluded that the form provided content validity as a whole because the obtained CVI value was greater than the critical CVR value (CVI > CVR).
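The Lawshe computation above follows a simple rule: for a panel of N experts of whom n_e rate an item “essential,” CVR = (n_e − N/2)/(N/2), and the CVI is the mean CVR of the retained items. A minimal sketch in Python (the function names and the example panel ratings are illustrative, not taken from the study's data):

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: proportion of experts rating an item 'essential',
    rescaled to the range [-1, 1]."""
    half = n_experts / 2
    return (n_essential - half) / half

def content_validity_index(retained_cvrs) -> float:
    """CVI: mean CVR over the items retained in the scale."""
    return sum(retained_cvrs) / len(retained_cvrs)

# Hypothetical ratings from a 10-expert panel:
# all 10 experts call the item essential -> CVR = 1.0;
# 9 of 10 call it essential -> CVR = 0.8.
print(content_validity_ratio(10, 10))  # 1.0
print(content_validity_ratio(9, 10))   # 0.8
```

An item whose CVR falls below the critical value for the panel size is dropped before the CVI is averaged over the surviving items.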
Construct Validity: Construct validity determines how well a measurement tool measures a theoretical construct (Strauss and Smith 2009). The construct validity of the scale was tested with EFA. The main purpose of EFA is to determine how many factors the items in the scale will be collected (Costello and Osborne 2005). It is also recommended to examine the item‐total correlations of the items before EFA. If the item‐total score correlation coefficient is below 0.30, it must be considered that there is a problem with the item and it must either be changed or removed from the scale (Çokluk et al. 2014; Şencan 2005).
The item‐total correlation of item I11 (‘I care about where I find information in digital environments’) was below 0.30, and the item was removed from the study. The item‐total correlations of the other items ranged from 0.399 to 0.684 (see Table 1). For this reason, it was decided that these items were sufficiently reliable and must remain in the scale.
Table 1.
Item means, Cronbach α values, and item total correlations.
No | x | sd | α | ITC | No | x | sd | α | ITC |
---|---|---|---|---|---|---|---|---|---|
I1 | 2.33 | 1.18 | 0.871 | 0.472 | I13 | 1.27 | 1.12 | 0.861 | 0.684 |
I2 | 2.13 | 1.21 | 0.871 | 0.479 | I14 | 2.81 | 1.09 | 0.874 | 0.399 |
I3 | 1.83 | 1.21 | 0.870 | 0.485 | I15 | 1.36 | 1.23 | 0.866 | 0.578 |
I6 | 1.42 | 1.17 | 0.867 | 0.559 | I16 | 1.56 | 1.29 | 0.872 | 0.453 |
I7 | 2.19 | 1.14 | 0.866 | 0.575 | I17 | 1.29 | 1.21 | 0.862 | 0.649 |
I8 | 1.90 | 1.10 | 0.863 | 0.643 | I18 | 1.13 | 1.13 | 0.864 | 0.610 |
*I11 | 2.55 | 1.12 | 0.882 | 0.212 | I19 | 1.09 | 1.08 | 0.868 | 0.524 |
I12 | 1.52 | 1.10 | 0.864 | 0.631 | — | — | — | — | — |
Abbreviations: α, Cronbach alpha; ITC, item‐total correlation; sd, standard deviation; x, mean.
EFA Findings: To perform EFA, the suitability of the data matrix for factor analysis must be investigated. To do this, the KMO value and Bartlett's Sphericity test must be examined.
Based on the findings, the KMO value was found to be 0.866, and Bartlett's Sphericity test gave a significant result (χ² = 1101.858; p < 0.001). The values obtained are within the limits recommended in the literature (Büyüköztürk 2010). In light of these findings, it was decided that the data matrix was suitable for factor analysis (see Table 2).
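Bartlett's sphericity statistic can be computed directly from the correlation matrix R of an n × p data matrix as χ² = −((n − 1) − (2p + 5)/6)·ln|R|, with p(p − 1)/2 degrees of freedom. A minimal numpy sketch with simulated data (the data below are hypothetical and are not the study's responses):

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an n x p data matrix:
    chi2 = -((n - 1) - (2p + 5) / 6) * ln|R|, df = p(p - 1) / 2,
    where R is the item correlation matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Hypothetical responses: two correlated items and one independent item
rng = np.random.default_rng(0)
base = rng.normal(size=(300, 1))
data = np.hstack([base,
                  base + 0.5 * rng.normal(size=(300, 1)),
                  rng.normal(size=(300, 1))])
chi2, df = bartlett_sphericity(data)
print(chi2 > 0, df)  # a significant chi2 indicates the matrix is factorable
```

A significant χ² means the correlation matrix differs from an identity matrix, i.e., the items share enough common variance for factor analysis.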
Table 2.
EFA findings regarding the factor structure of DAADF.
No | Items | F1 | F2 |
---|---|---|---|
I1 | I store the information I need to remember on my digital devices. | 0.864 | |
I2 | I rely on my digital devices to help me remember information | 0.836 | |
I3 | I use digital reminders for personal tasks or activities. | 0.774 | |
I12 | I remember where the information is in the digital environment rather than remembering the information itself. | 0.750 | |
I13 | I forget information quickly because I rely on my digital devices | 0.822 | |
I15 | I do not feel the need to remember the information I store on my digital device | 0.787 | |
I17 | I feel that the memory of my digital device is my own memory. | 0.713 | |
I18 | When I have difficulty accessing information without my digital device I feel helpless | 0.785 | |
I19 | I cannot remember information (name, date, number, etc.) without my digital device. | 0.764 | |
Variance Explained: | 17.9 | 45.2 | |
Total Variance: | 63.1 | ||
KMO: 0.866; Bartlett's χ²: 1101.858; df: 36; p < 0.001 |
Note: F1, Recalling Information Factor; F2, Accessing and Storing Information Factor.
The construct validity of the scale was determined by EFA. “Principal Components Analysis” was used for extraction, and the “direct oblimin technique” from the “oblique rotation” techniques was used for factor rotation based on the assumption that the factors are related to each other; rotation converged in three iterations. To determine the number of factors, factors with eigenvalues greater than 1 were considered and the scree plot was evaluated. It was required that the variance explained by each factor be at least 5% and that the total variance explained by the scale be at least 50%. For the item elimination process, a minimum item factor loading of 0.50 was taken as the criterion (Finch et al. 2016); when an item loaded on two factors simultaneously with a difference of less than 0.10, it was considered an overlapping item and removed from the study (Büyüköztürk 2010).
Based on the analysis, a 2‐factor structure with eigenvalues greater than 1 was obtained. Within this structure, item I7 (‘Instead of trying to recall information, I would like to be able to access it quickly’) was evaluated as an overlapping item because it loaded highly on different factors, and some items (I6, I8, I14, and I16) were removed from the scale because their factor loadings were below 0.50 (e.g., I16, ‘I cannot recall my passwords (social media etc.) without my digital devices’). As a result of item elimination, a 9‐item structure was determined. The total variance explained by the scale was 63.1%, which is sufficient for a multi‐factor scale. Because Factor 1 relates to adolescents’ characteristics in recalling information, it was named the “Recalling Information” factor; the variance explained by this factor is 17.9%. Because Factor 2 covers the individual's habits of accessing and storing information, it was named the “Accessing and Storing Information” factor; the variance explained by this factor was 45.2% (see Table 2).
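The eigenvalue‐greater‐than‐1 (Kaiser) criterion used above retains as many factors as there are eigenvalues of the item correlation matrix exceeding 1. A numpy sketch with simulated two‐block data (purely illustrative, not the study's data):

```python
import numpy as np

def kaiser_retained_factors(data):
    """Eigenvalues of the correlation matrix; the Kaiser criterion retains
    components with eigenvalue > 1 (principal components extraction)."""
    R = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending
    n_retained = int(np.sum(eigvals > 1))
    explained = eigvals / eigvals.sum() * 100  # % variance per component
    return eigvals, n_retained, explained

# Hypothetical data: two blocks of three items, each driven by one factor
rng = np.random.default_rng(1)
f1 = rng.normal(size=(300, 1))
f2 = rng.normal(size=(300, 1))
noise = lambda: 0.6 * rng.normal(size=(300, 1))
data = np.hstack([f1 + noise(), f1 + noise(), f1 + noise(),
                  f2 + noise(), f2 + noise(), f2 + noise()])
eigvals, k, explained = kaiser_retained_factors(data)
print(k)  # 2
```

With two underlying factors, exactly two eigenvalues exceed 1, matching the two‐factor decision; in practice the scree plot and explained‐variance thresholds are checked alongside this rule, as the study did.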
Internal Validity: The internal validity of the items constituting the structure of the scale was tested with a 27% lower‐upper group comparison (see Table 3).
Table 3.
Comparison of 27% lower‐upper groups.
Factor | Groups | n | Mean | Standard deviation | t | p |
---|---|---|---|---|---|---|
F1. Recalling Information | Upper group | 86 | 2.44 | 1.21 | −42.144 | <0.001 |
F1. Recalling Information | Lower group | 86 | 9.93 | 1.11 | | |
F2. Accessing and Storing Information | Upper group | 86 | 1.72 | 1.33 | −34.232 | <0.001 |
F2. Accessing and Storing Information | Lower group | 86 | 14.73 | 3.27 | | |
Digital Amnesia | Upper group | 86 | 5.70 | 2.60 | −35.093 | <0.001 |
Digital Amnesia | Lower group | 86 | 22.86 | 3.72 | | |
A 27% lower‐upper group comparison was performed for the internal validity of the scale. The mean scores of the upper group (high scorers) and the lower group (low scorers) differed significantly from each other, both in the sub‐dimensions and in the total scale (p < 0.001). This showed that the scale correctly distinguishes individuals with high and low levels of digital amnesia, that is, it has internal validity (see Table 3).
2. Reliability: Reliability is the degree to which a measurement tool demonstrates consistency and stability. Reliability aims to minimize the error margin of the measurement tool and ensures that the measurements are stable (Crocker and Algina 2008). In the present study, internal consistency (Cronbach α) and split‐half test consistency were used for the reliability of the scale.
The Cronbach α of the obtained structure was calculated for the factors and the overall scale. The Cronbach α reliability coefficient was calculated as 0.77 for the “Recalling Information” factor, 0.87 for the “Accessing and Storing Information” factor, and 0.84 for the overall scale (see Table 4). These values are within the limits recommended in the literature (DeVellis 2014), showing that the scale has sufficient reliability. The Spearman–Brown correlation coefficient was evaluated for the split‐half consistency of the scale. The correlation values were high in both the sub‐dimensions and the total scale. Based on these findings, it was concluded that the reliability of the scale was sufficient (see Table 4).
Table 4.
Split‐half test consistency and Cronbach α.
Factor | Half | x | sd | r | Guttmann | Cronbach α |
---|---|---|---|---|---|---|
F1. Recalling Information | 1st Half | 4.46 | 2.11 | 0.732 | 0.66 | 0.77 |
F1. Recalling Information | 2nd Half | 3.94 | 2.10 | | | |
F2. Accessing and Storing Information | 1st Half | 4.15 | 2.89 | 0.841 | 0.84 | 0.87 |
F2. Accessing and Storing Information | 2nd Half | 3.62 | 2.81 | | | |
Digital Amnesia | 1st Half | 9.08 | 4.08 | 0.759 | 0.75 | 0.84 |
Digital Amnesia | 2nd Half | 4.98 | 3.66 | | | |
Abbreviation: r, Spearman–Brown coefficient.
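The internal‐consistency statistics in Table 4 follow standard formulas: Cronbach's α = k/(k − 1) · (1 − Σs²ᵢ/s²ₜ), where s²ᵢ are the item variances and s²ₜ the variance of the total score, and the Spearman–Brown coefficient predicts full‐test reliability from the half‐test correlation r as 2r/(1 + r). A sketch with simulated Likert responses (the data matrix is hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total),
    items: respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def spearman_brown(r_halves):
    """Full-test reliability predicted from the correlation between halves."""
    return 2 * r_halves / (1 + r_halves)

# Hypothetical 0-4 Likert responses for a 9-item scale, 300 respondents
rng = np.random.default_rng(3)
trait = rng.normal(size=(300, 1))
items = np.clip(np.round(2 + trait + 0.8 * rng.normal(size=(300, 9))), 0, 4)
print(round(cronbach_alpha(items), 2))
print(round(spearman_brown(0.732), 2))  # 0.85, from a half-correlation of 0.732
```

Note that the Spearman–Brown value is a prediction for the full test; the Guttmann coefficient reported in Table 4 is a related but more conservative split‐half estimate.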
3. Confirmation of the Structure: The structure of the scale was validated with CFA. CFA is used to test the accuracy of a previously determined factor structure and allows the evaluation of how well the proposed model fits the data based on the EFA results (Kline 2015). When evaluating the CFA, goodness‐of‐fit indices such as RMSEA (Root Mean Square Error of Approximation), CFI (Comparative Fit Index), GFI (Goodness‐of‐Fit Index), AGFI (Adjusted Goodness‐of‐Fit Index), and NFI (Normed Fit Index) were used.
The two‐factor structure of the digital amnesia scale was tested with first‐level multifactor CFA. When the paths regarding the structure were examined, it was found that both sub‐dimensions made a significant contribution to the scale (see Figure 1). The items also loaded significantly on the factors they belonged to (p < 0.001). The factor loadings varied between 0.69 and 0.71 in the “Recalling Information” factor and between 0.69 and 0.82 in the “Accessing and Storing Information” factor. The goodness‐of‐fit values of the structure were χ²/df: 1.513, RMSEA: 0.045, CFI: 0.98, GFI: 0.97, AGFI: 0.95, and NFI: 0.96, all at the levels suggested in the literature (Meydan and Şeşen 2011; Schermelleh‐Engel and Moosbrugger 2003; Wang and Wang 2012). Based on these findings, it was concluded that the two‐factor structure of the scale was confirmed.
Figure 1.
CFA findings regarding the factor structure of the scale.
4. Convergent and Discriminant Validity: AVE and CR values were examined for the convergent and discriminant validity of the scale.
When the convergent validity of the two‐factor structure of the scale was evaluated, the AVE value of the “Recalling Information” factor was calculated as 0.51 and its CR value as 0.76, and the AVE value of the “Accessing and Storing Information” factor was calculated as 0.52 and its CR value as 0.87. Since the AVE values of both factors were greater than 0.50, it was concluded that the average explained variances of the items were significant. When the composite reliability of the structure was evaluated, it was determined to be significant since the CR values were above 0.70. When the AVE and CR values were evaluated together, the conditions required for convergent validity, AVE ≥ 0.50, CR ≥ 0.70, and CR > AVE, were met, and for this reason the convergent validity of the scale was ensured. The square roots of the AVE values calculated for discriminant validity are shown in parentheses and in bold in Table 5. Since these values are higher than the correlation values in the same row and column, it can be said that the discriminant validity of the scale is achieved (see Table 5).
Table 5.
Convergent and discriminant validity findings.
Convergent Validity

| Factor | Item | λ | λ² | 1 − λ² | n | AVE | CR |
|---|---|---|---|---|---|---|---|
| F1 Recalling Information | I1 | 0.705 | 0.497025 | 0.502975 | 3 | 0.51 | 0.76 |
| | I22 | 0.749 | 0.561001 | 0.438999 | | | |
| | I3 | 0.693 | 0.480249 | 0.519751 | | | |
| F2 Accessing and Storing Information | I12 | 0.690 | 0.476100 | 0.523900 | 6 | 0.52 | 0.87 |
| | I13 | 0.819 | 0.670761 | 0.329239 | | | |
| | I15 | 0.706 | 0.498436 | 0.501564 | | | |
| | I17 | 0.719 | 0.516961 | 0.483039 | | | |
| | I18 | 0.702 | 0.492804 | 0.507196 | | | |
| | I19 | 0.693 | 0.480249 | 0.519751 | | | |

Discriminant Validity
DAADF is a measurement tool consisting of 2 sub‐dimensions and 9 items. The scale is rated on a 5‐point Likert‐type scale (Never: '0', Rarely: '1', Sometimes: '2', Frequently: '3', Always: '4'). There are no reverse‐scored items. The total variance explained by the scale is 63.1% and the Cronbach α reliability coefficient calculated for the entire scale is 0.84. The total score is obtained by summing the scores of the sub‐dimensions and ranges between 0 and 36; a higher score indicates a higher level of digital amnesia (see Appendix 1).
DAADF consists of the sub‐dimensions "Recalling Information" and "Accessing and Storing Information". The "Recalling Information" sub‐dimension includes statements about individuals' habits of using digital tools to remember information and the level of their trust in digital tools for remembering purposes, and consists of items 1–3. The variance explained by this factor is 17.9% and its Cronbach's α reliability coefficient is 0.77. The score that can be obtained from this factor ranges between 0 and 12. The "Accessing and Storing Information" sub‐dimension includes statements about the use of digital tools for accessing information and habits of storing information, and consists of items 4–9. The variance explained by this factor is 45.2% and its Cronbach's α reliability coefficient is 0.87. The score that can be obtained from this factor ranges between 0 and 24.
Standardization: To allow easy and consistent comparisons between samples in studies using the DAADF and in future adaptation studies, the scoring of the scale has been standardized. The raw score must be converted to a 0–100 scale (rounding any decimal result to the nearest integer), and analyses in future studies must be conducted on the standardized scores.
The raw total score is standardized to a 0–100 scale with the following formula:

Standardized score = (raw score / 36) × 100

For example, a respondent who scores 18 on the scale obtains a standardized score of (18 / 36) × 100 = 50. The standardized form of the scores obtained from the scale must be used in studies conducted with it.
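As a minimal sketch, this standardization rule can be expressed in code; the `standardize` helper below is illustrative only and not part of the published scale materials:

```python
def standardize(raw_score: int, max_score: int = 36) -> int:
    """Convert a raw DAADF total score (0-36) to the 0-100 scale,
    rounded to the nearest integer as the scoring guideline requires."""
    if not 0 <= raw_score <= max_score:
        raise ValueError("raw score must lie between 0 and the maximum scale score")
    return round(raw_score / max_score * 100)

print(standardize(18))  # → 50, matching the worked example in the text
```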
4. Discussion
The DAADF was developed to examine the effects of using digital devices for accessing, remembering, and storing information on memory functions in adolescents between the ages of 12 and 18. For the validity and reliability studies of the scale, 625 adolescents aged 12 to 18 were reached; a sample of 330 was used for the EFA study and 304 for the CFA study. Of the EFA sample, 66% were female, the mean age was 16.9 ± 1.5 years, and the average daily digital device use time was 4.7 ± 2.3 h. Of the CFA sample, 65.8% were female, the mean age was 16.8 ± 1.6 years, and the average daily digital device use time was 4.5 ± 2.2 h.
CVR and CVI values were calculated in the present study for the content validity of the scale. While CVR is used to decide whether individual items are retained, CVI reflects the content validity of the scale's items as a whole; a high CVI value indicates high content validity (Lawshe 1975; Yurdugül 2005). In this study, CVR values ranged from 0.80 to 1 and the CVI value was 0.92, indicating that the content validity of the items was at a sufficient level. Since the CVI value was greater than the CVR value (CVI > CVR), it was concluded that the form as a whole provided content validity.
Before EFA, the KMO coefficient and Bartlett's sphericity test were evaluated. KMO indicates whether the data matrix is suitable for factor analysis and for factor extraction, and is expected to be higher than 0.60. Bartlett's sphericity test examines whether there are relationships between variables based on partial correlations; a significant chi‐square statistic indicates that there are sufficient relationships among the variables for factor analysis (Büyüköztürk 2010). In this study, the KMO coefficient was 0.866 and Bartlett's sphericity test was significant (χ² = 1101.858; p < 0.001). These values are within the limits recommended in the literature (Büyüköztürk 2010), so the data matrix was judged suitable for factor analysis (see Table 2). EFA yielded a two‐factor structure with eigenvalues greater than 1. The factor loadings of the items were above 0.50, and the variance explained by the scale was 63.1%, which is sufficiently large for a multidimensional scale (Büyüköztürk 2010; Finch et al. 2016).
Cronbach's α coefficient and split‐half consistency were also examined for the reliability of the scale. Cronbach's α measures the correlation between the items of a measurement tool and shows how consistent the items are with each other. According to the literature, a Cronbach's α value of 0.60 and below is "unacceptable," 0.60–0.65 "undesirable," 0.65–0.70 "minimally acceptable," 0.70–0.80 "respectable," 0.80–0.90 "very good," and a value well above 0.90 suggests the scale should be shortened. A high Cronbach's α coefficient (usually 0.70 and above) indicates that the internal consistency of the scale is strong (DeVellis 2014).
Split‐half test consistency measures the correlation between two equal parts of the scale's items and evaluates the internal consistency of the scale. A high split‐half test consistency (usually 0.70 and above) indicates that the scale's items make consistent measurements (DeVellis 2014; Nunnally and Bernstein 1994). In this study, Cronbach's α coefficient was above 0.70 in the sub‐dimensions of the scale and the total scale, and there was a high correlation between the two halves. These findings showed that the scale had sufficient reliability.
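As an illustration of the internal‐consistency statistic discussed above, the following sketch computes Cronbach's α from its standard formula, α = (k / (k − 1)) · (1 − Σσ²ᵢ / σ²ₜ). The Likert responses used are hypothetical, not data from this study:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / variance(total score)).

    `scores` is an (n_respondents, n_items) array of item responses.
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 0-4 Likert responses from 5 respondents to 3 items (not study data):
data = np.array([
    [4, 3, 4],
    [2, 2, 1],
    [3, 3, 3],
    [0, 1, 0],
    [4, 4, 3],
])
print(round(cronbach_alpha(data), 2))  # → 0.96
```

Because the three made-up items rank respondents almost identically, the resulting α is high; less consistent items would pull it down.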
The two‐factor structure of the Digital Amnesia Scale was tested with first‐order multifactor CFA. Both factors made a significant contribution to the scale (see Figure 1) and the items showed significant relationships with their factors (p < 0.001). The goodness‐of‐fit values of the two‐factor structure were χ²/df = 1.513, RMSEA = 0.045, CFI = 0.98, GFI = 0.97, AGFI = 0.95 and NFI = 0.96, all at the levels recommended by the literature (Meydan and Şeşen 2011; Schermelleh‐Engel and Moosbrugger 2003; Wang and Wang 2012). Based on these findings, the two‐factor structure of the scale was confirmed.
The AVE (Average Variance Extracted) value is calculated for each factor structure (Hair et al. 2010; Yaşlıoğlu 2017), and to ensure the convergent validity of the scale, the AVE values are expected to be 0.50 and above (Bagozzi and Yi 1988). The CR (composite reliability) value is calculated from the factor loadings and error variances obtained in CFA, and CR values are expected to be 0.70 and above (Hair et al. 2010). When the convergent validity of the two‐factor structure of the scale was evaluated, the AVE value calculated for the "Recalling Information" factor was 0.51 with a CR value of 0.76, and the AVE value for the "Accessing and Storing Information" factor was 0.52 with a CR value of 0.87. According to Fornell and Larcker (1981), even an AVE value below 0.50 is acceptable if the CR value is above 0.70. Since the AVE values of the factors were greater than 0.50 and the CR values were greater than 0.70, it was concluded that the convergent validity of the scale was ensured. Since the square roots of the AVE values calculated for discriminant validity were higher than the correlation values in the same row and column, the discriminant validity of the scale was also ensured (see Table 5).
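The AVE and CR figures reported in Table 5 can be reproduced from the published factor loadings. The sketch below applies the standard formulas (AVE as the mean of squared standardized loadings; CR as the squared sum of loadings over that quantity plus the summed error variances 1 − λ²); the helper name is illustrative:

```python
def ave_cr(loadings):
    """AVE = mean of squared standardized loadings;
    CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    squared = [l * l for l in loadings]
    ave = sum(squared) / len(loadings)
    total = sum(loadings)
    cr = total * total / (total * total + sum(1 - q for q in squared))
    return round(ave, 2), round(cr, 2)

# Factor loadings as reported in Table 5:
f1 = [0.705, 0.749, 0.693]                        # F1 Recalling Information
f2 = [0.690, 0.819, 0.706, 0.719, 0.702, 0.693]  # F2 Accessing and Storing Information
print(ave_cr(f1))  # → (0.51, 0.76)
print(ave_cr(f2))  # → (0.52, 0.87)
```

Both outputs match the AVE and CR values reported for the scale, confirming the arithmetic behind Table 5.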
5. Conclusion
In this study, the validity and reliability of the Digital Amnesia Scale Adolescent Form‐DAADF, developed for adolescents between the ages of 12 and 18, were examined in Turkish culture. DAADF was developed to examine the effects of using digital devices for accessing, remembering, and storing information on memory functions. Through content validity, construct validity, convergent and discriminant validity, and reliability analyses, it was determined that the DAADF has sufficient reliability and validity and can effectively measure digital amnesia in adolescents between the ages of 12 and 18.
It is important to acknowledge the limitations of this study. The sample is predominantly female and comprises adolescents living in a metropolitan area and attending a public school; future studies would benefit from a sample with a more balanced gender and socioeconomic distribution. In addition, this methodological study, with its cross‐sectional design, was constrained to a specific time and place, which is a significant limitation. It is recommended that the assessment of digital amnesia in adolescents be developed further in parallel with rapidly changing technological developments, such as artificial intelligence.
In this study, cultural characteristics could not be taken into account, and psychometric properties were analysed only in a single culture. In future studies, it is recommended to examine the psychometric properties in different cultures. Additionally, longitudinal studies are recommended to assess the digital amnesia levels of adolescents. It is anticipated that the findings from these assessments will inform crucial decisions regarding the educational journeys of adolescents. From a clinical perspective, the results obtained from the DAADF measurement are expected to be useful in preventing psychiatric diseases by detecting changes in cognitive functioning at an early stage.
Author Contributions
Elçin Babaoğlu: corresponding author, writing – review and editing, software, methodology, formal analysis, read, and approved the final manuscript. Yalçın Kanbay: writing – review and editing, writing – original draft, visualization, validation, project administration, software, methodology, conceptualization, read, and approved the final manuscript. Aydan Akkurt Yalçıntürk: writing – review and editing, methodology, data curation, conceptualization, read, and approved the final manuscript. Aysun Akçam: data curation, conceptualization, editing, read, and approved the final manuscript.
Ethics Statement
This study was approved by Uskudar University Non‐Interventional Clinical Research Ethics Committee with the date 29.08.2024 and decision number 61351342/020‐311.
Consent
Ethical approval was obtained from our institutional review board (No: 61351342‐312). Verbal and written consent statement was obtained from the participants or their legal guardians before their assignment to the study groups.
Conflicts of Interest
The authors declare no conflicts of interest.
Permission to Reproduce Material From Other Sources
Data can be obtained by contacting the corresponding author when necessary.
Acknowledgments
This study received no specific grant from any funding agency in the public, commercial, or not‐for‐profit sectors. We declare that this article has not been published or evaluated, in whole or in part, in any other journal or proceedings book at the same time. This study was supported by the Scientific and Technological Research Council of Türkiye (TUBITAK).
Appendix 1.
Table A1.
Digital Amnesia Scale Adolescent Form – DAADF.
DIGITAL AMNESIA SCALE ‐ ADOLESCENT FORM

Read the statements below and mark the box that you think is most appropriate for you.

| # | Item | Never (0) | Rarely (1) | Sometimes (2) | Frequently (3) | Always (4) |
|---|---|---|---|---|---|---|
| 1 | I save information I need to recall on my digital devices. | | | | | |
| 2 | I rely on my digital devices to recall information. | | | | | |
| 3 | I use digital reminders for my tasks or events. | | | | | |
| 4 | Instead of remembering the information itself, I recall its location in the digital environment. | | | | | |
| 5 | I forget information quickly because I rely on my digital devices. | | | | | |
| 6 | I do not feel the need to keep in my mind the information I save on my digital devices. | | | | | |
| 7 | I feel like the memory of my digital devices is my memory. | | | | | |
| 8 | I feel helpless when I have difficulty accessing information without my digital devices. | | | | | |
| 9 | I cannot recall information (i.e., name, date, number, etc.) without my digital devices. | | | | | |
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
- Bagozzi, R. P., and Yi, Y. 1988. "On the Evaluation of Structural Equation Models." Journal of the Academy of Marketing Science 16: 74–94.
- Baron, N. S., Calixte, R. M., and Havewala, M. 2017. "The Persistence of Print Among University Students: An Exploratory Study." Telematics and Informatics 34, no. 5: 590–604.
- Büyüköztürk, Ş. 2010. Handbook of Data Analysis for Social Sciences. Pegem Academia.
- Cao, F., and Su, L. 2007. "Internet Addiction Among Chinese Adolescents: Prevalence and Psychological Features." Child: Care, Health and Development 33: 275–281.
- Carr, N. 2010. The Shallows: What the Internet Is Doing to Our Brains, 92–124. W. W. Norton & Company.
- Çokluk, Ö., Şekercioğlu, G., and Büyüköztürk, Ş. 2014. Multivariate Statistics for Social Sciences: SPSS and LISREL Applications. Pegem Academia.
- Comrey, A. L., and Lee, H. B. 1992. A First Course in Factor Analysis. Lawrence Erlbaum Associates.
- Costello, A. B., and Osborne, J. W. 2005. "Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most From Your Analysis." Practical Assessment, Research & Evaluation 10, no. 7: 1–9.
- Crocker, L., and Algina, J. 2008. Introduction to Classical and Modern Test Theory. Cengage Learning.
- Daniyal, M., Javaid, S. F., Hassan, A., and Khan, M. A. B. 2022. "The Relationship Between Cellphone Usage on the Physical and Mental Wellbeing of University Students: A Cross‐Sectional Study." International Journal of Environmental Research and Public Health 19: 9352.
- DeVellis, R. F. 2014. Scale Development. Nobel Academic Publishing.
- Domingues‐Montanari, S. 2017. "Clinical and Psychological Effects of Excessive Screen Time on Children." Journal of Paediatrics and Child Health 53: 333–338.
- Finch, H., French, B. F., and Immekus, J. C. 2016. Applied Psychometrics Using SPSS and AMOS. Information Age Publishing.
- Fornell, C., and Larcker, D. F. 1981. "Evaluating Structural Equation Models With Unobservable Variables and Measurement Error." Journal of Marketing Research 18, no. 1: 39–50.
- Gilbert, G. E., and Prion, S. 2016. "Making Sense of Methods and Measurement: Lawshe's Content Validity Index." Clinical Simulation in Nursing 12, no. 12: 530–531.
- Girela‐Serrano, B. M., Spiers, A. D. V., Ruotong, L., Gangadia, S., Toledano, M. B., and Di Simplicio, M. 2022. "Impact of Mobile Phones and Wireless Devices Use on Children and Adolescents' Mental Health: A Systematic Review." European Child and Adolescent Psychiatry. Springer.
- Greenfield, P. M. 2009. "Technology and Informal Education: What Is Taught, What Is Learned." Science 323, no. 5910: 69–71.
- Hair, J. F. J., Black, W. C., Babin, B. J., and Anderson, R. E. 2010. Multivariate Data Analysis. 7th ed. Prentice Hall.
- Hermawati, D., Rahmadi, F. A., Sumekar, T. A., and Winarni, T. I. 2018. "Early Electronic Screen Exposure and Autistic‐Like Symptoms." Intractable & Rare Diseases Research 7: 69–71.
- Kanbay, Y., Babaoğlu, E., Akkurt Yalçıntürk, A., and Akçam, A. 2025. "Digital Amnesia: The Erosion of Memory." Psikiyatride Güncel Yaklaşımlar 17, no. 3: 544–553.
- Kaspersky Lab. 2015. Kaspersky Lab Survey in India: Smartphone Addiction Causes Digital Amnesia. Accessed June 9, 2024. https://www.itvoice.in/index.php/it-voice-news/kaspersky-lab-survey-in-india-smartphoneaddiction-cause-digital-amnesia.
- Kline, R. B. 2015. Principles and Practice of Structural Equation Modeling. Guilford Press.
- Laconi, S., Pirès, S., and Chabrol, H. 2017. "Internet Gaming Disorder, Motives, Game Genres and Psychopathology." Computers in Human Behavior 75: 652–659.
- Lawshe, C. H. 1975. "A Quantitative Approach to Content Validity." Personnel Psychology 28, no. 4: 563–575.
- Li, H., Wu, D., Yang, J., Luo, J., Xie, S., and Chang, C. 2021. "Tablet Use Affects Preschoolers' Executive Function: fNIRS Evidence From the Dimensional Change Card Sort Task." Brain Sciences 11, no. 5: 567.
- Lodha, P. 2019. "Digital Amnesia: Are We Headed Towards Another Amnesia." Indian Journal of Mental Health 6, no. 1: 18–22.
- Meydan, H., and Şeşen, H. 2011. Structural Equation Modeling AMOS Applications. Detay Publishing.
- Mueller, P. A., and Oppenheimer, D. M. 2014. "The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking." Psychological Science 25, no. 6: 1159–1168.
- Musa, N., and Ishak, M. S. 2020. "The Identification of Student's Behaviours of Digital Amnesia Syndromes and Google Effect in the Department of Library Sciences in State Islamic University of Ar Raniry–Indonesia." International Journal of Information Technology and Library Science 9: 1–8.
- Neelima, P., and Sunder, R. R. 2019. "Effect of Smartphone Over Usage on the Short‐Term Memory of Medical Students." Indian Journal of Basic and Applied Medical Research 8, no. 2: 309–313.
- Nunnally, J. C., and Bernstein, I. H. 1994. Psychometric Theory. 3rd ed. McGraw‐Hill.
- Polit, D. F., and Beck, C. T. 2006. "The Content Validity Index: Are You Sure You Know What's Being Reported? Critique and Recommendations." Research in Nursing & Health 29, no. 5: 489–497.
- Prensky, M. 2012. From Digital Natives to Digital Wisdom: Hopeful Essays for 21st Century Learning. Corwin Press.
- Reynolds, R. A., Woods, R., and Roberts, J. D. 2007. Handbook of Research on Electronic Surveys and Measurements. Idea Group Reference.
- Schermelleh‐Engel, K., and Moosbrugger, H. 2003. "Evaluating the Fit of Structural Equation Models: Tests of Significance and Descriptive Goodness‐of‐Fit Measures." Methods of Psychological Research Online 8, no. 2: 23–74.
- Şencan, H. 2005. Reliability and Validity in Social and Behavioral Measurement. Seçkin Publishing.
- Sparrow, B., Liu, J., and Wegner, D. M. 2011. "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips." Science 333, no. 6043: 776–778. 10.1126/science.1207745.
- Storm, B. C., and Stone, S. M. 2015. "Saving‐Enhanced Memory: The Benefits of Saving on the Learning and Remembering of New Information." Psychological Science 26, no. 2: 182–188.
- Strauss, M. E., and Smith, G. T. 2009. "Construct Validity: Advances in Theory and Methodology." Annual Review of Clinical Psychology 5: 1–25.
- Swaminathan, S. 2020. "Digital Amnesia: The Smart Phone and the Modern Indian Student." Journal of Humanities and Social Sciences Studies 2, no. 3: 23–31.
- Sweller, J. 1988. "Cognitive Load During Problem Solving: Effects on Learning." Cognitive Science 12, no. 2: 257–285.
- Uysal, G., and Balci, S. 2018. "Evaluation of a School‐Based Program for Internet Addiction of Adolescents in Turkey." Journal of Addictions Nursing 29: 43–49.
- Wang, J., and Wang, X. 2012. Structural Equation Modeling: Applications Using Mplus. John Wiley & Sons.
- Weinstein, A., Dorani, D., Elhadif, R., Bukovza, Y., Yarmulnik, A., and Dannon, P. 2015. "Internet Addiction Is Associated With Social Anxiety in Young Adults." Annals of Clinical Psychiatry 27: 4–9.
- Yadav, A. 2019. "'Digital Amnesia' on the Rise as We Outsource Our Memory to the Web." Journal of Critical Reviews 6: 234–241.
- Yaşlıoğlu, M. M. 2017. "Factor Analysis and Validity in the Social Sciences: Using Exploratory and Confirmatory Factor Analysis." Istanbul University Journal of Business Administration Faculty 46: 74–85.
- Yurdugül, H. 2005. "A Method for Content Validity in Scale Development Studies: The Lawshe Technique." Journal of Measurement and Evaluation in Education and Psychology 1, no. 1: 47–60.