PLoS One. 2020 Apr 16;15(4):e0231785. doi: 10.1371/journal.pone.0231785

Time series analyses with psychometric data

Tatjana Stadnitski 1,*
Editor: Stephan Doering2
PMCID: PMC7162519  PMID: 32298372

Abstract

Understanding the interactional dynamics between several processes is one of the most important challenges in psychology and psychosomatic medicine. Researchers exploring behavior or other psychological phenomena mostly deal with ordinal or interval data. Missing values and the resulting non-equidistant measurements represent a general problem of longitudinal studies in this field. The majority of process-oriented methodologies were originally designed for equidistant data measured on ratio scales. Therefore, the goal of this article is to clarify the conditions for satisfactory performance of longitudinal methods with data typical of psychological and psychosomatic research. This study examines the performance of the Johansen test, a procedure incorporating a set of sophisticated time series techniques, in reference to data quality utilizing a Monte Carlo method. The main results of the conducted simulation studies are: (1) Time series analyses require samples of at least 70 observations for accurate estimation and inference. (2) Discrete data and failing equidistance of measurements due to irregular missing values appear unproblematic. (3) Relevant characteristics of stationary processes can be adequately captured using 5- or 7-point ordinal scales. (4) For trending processes, at least 10-point scales are necessary to ensure an acceptable quality of estimation and inference.

Introduction

Process-oriented methodologies have become increasingly popular in different fields of empirical research. Multivariate time series techniques that treat variables as an interacting dynamic system, revealing their internal dynamics, represent relevant tools for understanding behavior or other psychological phenomena. In the last decade, a growing number of empirical studies in psychology and psychosomatic medicine have explored intra-individual variability by means of sophisticated time series techniques like vector autoregressions, cointegration, or vector error correction [1–5]. The problem is that these methods originate from physics, engineering, or econometrics and are therefore primarily designed for equidistant data measured on ratio scales. In psychology or psychosomatic medicine, however, researchers mostly deal with ordinal or interval data. Furthermore, the measurements are often non-equidistant and contain missing values. Thus, the goal of this article is to clarify the conditions for satisfactory performance of time series analyses with psychometric data. We approach this issue by evaluating the performance of the Johansen test [6, 7] in reference to data quality using a Monte Carlo method. We chose the Johansen test for our simulation studies because its algorithm comprises various sophisticated time series techniques: it can handle autocorrelations; it is applicable to both stable and unstable data; it allows distinguishing between different types of dynamic systems; and it provides maximum likelihood estimates of the time series parameters. Moreover, the Johansen test demonstrated good performance in Monte Carlo evaluations with continuous data [e.g., 8].

Theoretical background

This part of the article briefly introduces basic concepts of uni- and multivariate time series analyses and presents the Johansen test as a method for distinguishing between different types of dynamic systems. Stadnitski and Wild (2019) provide a detailed description of the methods within psychosomatic research and demonstrate their implementation with the R software [9]. For comprehensive examples of process-oriented modeling with empirical data from psychological research, consult Stadnitski (2014) [10].

Univariate time series models

To explain basic concepts of the time series approach, we start with a first order autoregressive model

y_t = β y_{t−1} + u_t (1)

which is a subtype of an autoregressive model of order m, abbreviated as AR(m): y_t = β_1 y_{t−1} + β_2 y_{t−2} + … + β_m y_{t−m} + u_t. The autoregressive model incorporates the past or lagged values of the dependent variable as predictors (i.e., dependent and independent variables are both endogenous, that is, determined and interrelated inside the organism or system). External influences can enter the autoregressive system exclusively through the term u_t. The residuals u_t are not autocorrelated; this time-independent process is termed white noise. To determine external influences in the autoregression, the model from Eq 1 must be represented alternatively in the so-called moving-average form

y_t = Σ_{i=0}^{∞} β^i u_{t−i} = u_t + β u_{t−1} + β² u_{t−2} + … (2)

Since all independent variables in the autoregression are endogenous (lagged values of the dependent variable), Eq 2 points to the fact that autoregression actually models the impact of external random influences on the long-term development of the system. Therefore, the regression coefficient β in Eqs 1 and 2 can be interpreted as a memory parameter: β = 0 implies that the series y_t is uncorrelated white noise with no memory; |β| < 1 means that the effect of a random shock dissolves quickly without significant long-term impact on time series characteristics such as level or variance (short memory). Consequently, such processes are stable and their autocorrelations decline exponentially. When β = 1, the effect of a particular impulse does not dissipate over time and the series remembers the shock forever (long memory); as a result, observations remain strongly correlated even if they are far apart in time. The autoregressive process with β = 1 is called integrated of order 1, abbreviated I(1). Accordingly, short memory processes with |β| < 1 are termed I(0). The I(1) process exhibits unstable trending behavior due to stochastic drift; its first difference, Δy_t = y_t − y_{t−1},

Δy_t = −(1 − β) y_{t−1} + u_t = ρ y_{t−1} + u_t (3)

is white noise (for β = 1, ρ = −(1 − β) = 0, so that Δy_t = u_t).
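To make the memory distinction concrete, the following R sketch (an illustration with arbitrarily chosen parameter values and seed, not code from the article's supplementary files) simulates a white-noise series (β = 0), a stable AR(1) series (|β| < 1), and a random walk (β = 1), compares their sample autocorrelations, and checks that the first difference of the integrated series behaves like white noise:

```r
set.seed(1)
T <- 100
wn <- rnorm(T)                                   # white noise: beta = 0, no memory
ar <- arima.sim(model = list(ar = 0.5), n = T)   # stable AR(1): |beta| < 1, short memory
rw <- cumsum(rnorm(T))                           # random walk: beta = 1, i.e. I(1), long memory

op <- par(mfrow = c(1, 3))
acf(wn, main = "white noise")                    # no significant autocorrelations
acf(ar, main = "AR(1), beta = 0.5")              # autocorrelations decline exponentially
acf(rw, main = "random walk")                    # autocorrelations stay high across many lags
par(op)

acf(diff(rw), plot = FALSE)                      # first difference of the I(1) series: white noise
```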

Stability or instability as well as memory characteristics of time series can be inferred from their autocorrelation functions (ACF): non-zero autocorrelations at only a few lags are typical for stable short-memory processes, whereas significant autocorrelations at many lags indicate long memory or instability. Stationarity tests like the Augmented Dickey-Fuller (ADF) algorithm provide further possibilities to explore this issue. Stationarity means that the statistical characteristics of a process under study do not change over time (e.g., it exhibits no trends or distinct fluctuations of mean or variance). Most time series are non-stationary due to a deterministic time trend or stochastic drift. The most general testing equation of the ADF test incorporates a constant term c for modelling drift and a coefficient τ for modelling a time trend (T), and allows autocorrelated residuals a_t:

Δy_t = c + τT + ρ y_{t−1} + a_t. (4)

The null hypothesis of the ADF test is ρ = 0. Note that if both c and τ equal zero and the residuals are white noise, we obtain Eq 3. Because the dependent variable is differenced while the predictor is not, the parameter ρ cannot be interpreted like the autocorrelation coefficient β in Eq 1: ρ = 0 means β = 1, which implies that the series is integrated (y_t = y_{t−1} + u_t), whereas ρ < 0 corresponds to |β| < 1, that is, y_t is stationary. For an applied description of stationarity tests, see Stadnytska (2010) [11].
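As an illustration of how Eq 4 is used in practice, the following sketch applies adf.test from the tseries package and ur.df from the urca package; the series, seed, and lag choice are my own, and ur.df is brought in here only because its type argument makes the deterministic terms c and τT of the testing equation explicit:

```r
library(tseries)   # adf.test
library(urca)      # ur.df

set.seed(2)
y_stationary <- arima.sim(model = list(ar = 0.5), n = 100)  # |beta| < 1
y_integrated <- cumsum(rnorm(100))                          # beta = 1

adf.test(y_stationary)   # small p-value: H0 (rho = 0, i.e. beta = 1) is rejected
adf.test(y_integrated)   # large p-value: H0 is maintained

# ur.df spells out the deterministic terms of Eq 4:
# type = "none" (no c, no tau*T), "drift" (c only), or "trend" (c and tau*T)
summary(ur.df(y_integrated, type = "drift", lags = 1))
```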

Multivariate time series models

The simplest vector autoregressive VAR(m) model incorporates two processes. In contrast to the univariate autoregression, the current values of each variable are predicted not only from their own m lagged values; m extraneous lags (from the other variable) also serve as predictors. For instance, the bivariate VAR(1) model consists of two equations, each of them including two predictors (y_{1,t−1} and y_{2,t−1}): y_{1,t} = β_{11} y_{1,t−1} + β_{12} y_{2,t−1} + u_{1,t} and y_{2,t} = β_{21} y_{1,t−1} + β_{22} y_{2,t−1} + u_{2,t}, or in matrix notation

(y_{1,t}, y_{2,t})′ = B (y_{1,t−1}, y_{2,t−1})′ + (u_{1,t}, u_{2,t})′ with B = [β_{11} β_{12}; β_{21} β_{22}], i.e., y_t = B y_{t−1} + u_t. (5)
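For illustration, a minimal R sketch of Eq 5 with arbitrarily chosen coefficients (selected so that the system is stationary; this is not the article's simulation code):

```r
set.seed(3)
T <- 100
B <- matrix(c(0.5, 0.2,      # first column:  beta11, beta21
              0.1, 0.4),     # second column: beta12, beta22
            nrow = 2)
y <- matrix(0, nrow = T, ncol = 2)         # y[t, ] holds (y1_t, y2_t)
for (t in 2:T) {
  y[t, ] <- B %*% y[t - 1, ] + rnorm(2)    # Eq 5: y_t = B y_{t-1} + u_t
}
head(y)
```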

There are three types of dynamic systems with quite different characteristics: stationary, integrated, and cointegrated. Stationary systems consist of stable processes without trends [e.g., y_{1,t} and y_{2,t} are I(0)]. Integrated systems incorporate integrated processes without common stochastic components [e.g., y_{1,t} and y_{2,t} are I(1)]. Systems of integrated time series with common trends, so that they move together to some extent, are called cointegrated.

Suppose that two processes share the same I(1) element xt

y_{1,t} = b_1 x_t + u_{1,t},  y_{2,t} = b_2 x_t + u_{2,t},

where u1,t and u2,t are stationary or I(0), then the following linear combination

y_{1,t} − (b_1/b_2) y_{2,t} = b_1 x_t + u_{1,t} − (b_1/b_2)(b_2 x_t + u_{2,t}) = u_{1,t} − (b_1/b_2) u_{2,t}

is the weighted sum of stationary variables and therefore also I(0). Although the series y_{1,t} and y_{2,t} are individually integrated (that is, they have stochastic trends), there exists a stationary linear combination, suggesting that the two variables have a long-term equilibrium relationship between them. The bivariate process of the previous example is called cointegrated of order 1, or CI(1). The relation y_{1,t} = (b_1/b_2) y_{2,t} ⇔ y_{1,t} = β_0 y_{2,t} ⇔ y_{1,t} − β_0 y_{2,t} = 0 characterizes the long-run equilibrium between the two processes. As noted earlier, time series can be represented alternatively in a differenced form (e.g., Eqs 3 and 4). Processes of cointegrated bivariate VAR(1) systems have the following differenced representation, which is termed the Vector Error Correction Model (VECM)

Δy_{1,t} = α_1 (y_{1,t−1} − β_0 y_{2,t−1}) + γ_{11,1} Δy_{1,t−1} + γ_{12,1} Δy_{2,t−1} + u_{1,t}
Δy_{2,t} = α_2 (y_{1,t−1} − β_0 y_{2,t−1}) + γ_{21,1} Δy_{1,t−1} + γ_{22,1} Δy_{2,t−1} + u_{2,t}
Δy_t = αβ′ y_{t−1} + Γ_1 Δy_{t−1} + u_t = Π y_{t−1} + Γ_1 Δy_{t−1} + u_t with Π = αβ′ = [α_1  −β_0 α_1; α_2  −β_0 α_2] (6)

where the cointegrating vector β = (1, −β_0)′ models the long-run equilibrium relation between the processes: the change in both variables depends on the deviation from the equilibrium, y_{1,t−1} − β_0 y_{2,t−1}, in period t−1. The absolute values of α_1 and α_2 reflect how quickly the variables restore the equilibrium. The γ coefficients of the lagged differenced predictors capture the short-term autocorrelations of the cointegrated system. For an elaborated discussion of cointegration in the context of psychological research, consult Stroe-Kunold et al. (2012) [12].
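The construction of a cointegrated pair translates directly into a few lines of R. The sketch below is my own illustration using the b_1 and b_2 values of the later Fig 1 example; the urca helper cajorls, which extracts normalized VECM estimates from a ca.jo object, is my addition and is not mentioned in the article:

```r
library(urca)      # ca.jo, cajorls
library(tseries)   # adf.test

set.seed(4)
T  <- 100
xt <- cumsum(rnorm(T))          # common I(1) component
y1 <- 0.4 * xt + rnorm(T)       # b1 = 0.4
y2 <- 0.6 * xt + rnorm(T)       # b2 = 0.6

adf.test(y1)                    # individually integrated: H0 typically maintained
adf.test(y1 - (0.4 / 0.6) * y2) # the linear combination is stationary: H0 typically rejected

# VECM estimates (alpha, beta, gamma of Eq 6) under one cointegrating relation (r = 1)
jo <- ca.jo(cbind(y1, y2), type = "trace", ecdet = "none", K = 2)
cajorls(jo, r = 1)$beta         # normalized cointegrating vector, approximately (1, -0.67)'
cajorls(jo, r = 1)$rlm          # loadings (alpha) and short-run (gamma) coefficients
```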

Johansen test

Note that every VAR(m) process has the representation

Δy_t = Π y_{t−1} + Γ_1 Δy_{t−1} + … + Γ_{m−1} Δy_{t−m+1} + u_t,

whereas the relation Π = αβ′ of the VECM representation holds for cointegrated processes only. The rank of the matrix Π can therefore disclose properties of the system under study. For instance, in a stationary system all k series are stationary, so linear combinations of them remain stationary. Hence, we can construct k independent stationary linear combinations, which implies that Π has full rank: rk(Π) = k. In the absence of common trends, no linear combination of the integrated series becomes stationary; therefore rk(Π) = 0. The decomposition of Π as the product of two k × r matrices, Π = αβ′, is only possible for cointegrated data. The number of independent cointegrating relations (r) must be smaller than k and depends on the number of common stochastic trends in the system (m): rk(Π) = r = k − m. For example, the bivariate VAR(1) system consists of two processes (k = 2), hence at most one equilibrium relation (r = 1) is possible: rk(Π) = 2 means the system is I(0), rk(Π) = 1 stands for CI(1), and rk(Π) = 0 implies that both series are I(1) without a common trend. The basic objective of the Johansen test is to distinguish between the different types of dynamic systems by estimating the rank of the matrix Π. In addition, the Johansen procedure provides the maximum likelihood estimates of the parameters α and β from Eq 6 for cointegrated systems.

Fig 1 demonstrates the performance of the Johansen test for simulated bivariate I(0), I(1), and CI(1) systems. We generated data using the R statistical environment and conducted the Johansen test with the command ca.jo of the R package urca [13]. All R commands of the discussed examples are provided in the Appendix in S1 File. Both time series of the I(0) example are stationary: y_{1,t} = −0.5y_{1,t−1} + u_{1,t}, y_{2,t} = −0.2y_{2,t−1} + u_{2,t} (|β| < 1). The lag-1 autocorrelations of the ACF provide estimates of the βs. The ADF test performed with the command adf.test of the R package tseries rejects H0: ρ = 0 (β = 1) in both cases (pADF1 = .01, pADF2 = .02). The Johansen test rejects the null hypotheses rk(Π) = 0 and rk(Π) = 1 because the test statistics exceed the 1% critical values (97.67 > 23.52; 27.54 > 11.65). The estimated rank of the matrix Π is therefore 2. In the I(1) example the integrated series do not share a common trend. The ADF test maintains H0: ρ = 0 in both cases (pADF1 = .79, pADF2 = .21). As expected, the Johansen test does not reject the null hypothesis rk(Π) = 0: the test statistic (13.12) is smaller than the 1% critical value (23.52). The CI(1) series y_{1,t} = 0.4x_t + u_{1,t} and y_{2,t} = 0.6x_t + u_{2,t} share the same I(1) element x_t; the cointegrating vector is β = (1, −0.4/0.6 = −0.67)´. The ADF test identifies both series as integrated (pADF1 = .43, pADF2 = .70). The estimated rank of the matrix Π in the Johansen test is 1, since H0: rk(Π) = 0 is rejected (59.96 > 23.52) and H0: rk(Π) = 1 is maintained (3.97 < 11.65). The estimated cointegrating vector is (1, −0.61)´. Thus, the linear combination y_{1,t} − 0.61y_{2,t} of the two integrated series is stationary.
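The exact commands for these examples are in the Appendix in S1 File; as a rough stand-in, the following sketch regenerates a stationary system like the I(0) example above (series length and seed are my own choices) and reads the rank decision off the ca.jo output. The trace test with ecdet = "none" appears to match the 1% critical values quoted in the text (23.52 and 11.65):

```r
library(urca)

set.seed(5)
T  <- 100
y1 <- arima.sim(model = list(ar = -0.5), n = T)   # y1,t = -0.5 y1,t-1 + u1,t
y2 <- arima.sim(model = list(ar = -0.2), n = T)   # y2,t = -0.2 y2,t-1 + u2,t

jo <- ca.jo(cbind(y1, y2), type = "trace", ecdet = "none", K = 2)
summary(jo)      # trace statistics vs. 10%/5%/1% critical values for H0: r = 0 and H0: r <= 1

# Decision rule: reject H0 whenever the statistic exceeds the chosen critical value.
# For a stationary system both hypotheses should be rejected, so the estimated rank is 2.
jo@teststat      # the trace statistics
jo@cval          # the corresponding critical values
```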

Fig 1. Simulated bivariate I (0), I (1), CI (1) models and a bivariate empirical system with their autocorrelation functions and R outputs of the Johansen test.


The data of the empirical example in Fig 1 originate from the study by Kupfer, Brosig, and Brähler (2005), in which the authors analyzed the marital interaction of a married couple under clinical conditions over a period of 144 days [14]. The time series represent the mood of the couple measured on Likert scales; the measurements are standardized. According to the Johansen test, the series are cointegrated, since the estimated rank of the matrix Π is 1; the estimate of β0 is -0.93. In contrast to the simulated data, however, the correctness of this decision remains uncertain. The following Monte Carlo simulations aim, among other things, at providing information about the probability of obtaining a correct test decision in such cases.

Empirical versus simulated data

As stated above, the goal of this study is to clarify the conditions for satisfactory performance of the Johansen test with non-continuous data typical of psychological research. This goal can be achieved only with simulated data. To understand this statement, one should first know the differences among the following statistical concepts: "parameter", "estimator", and "estimate". A parameter is a quantity that defines a particular system, such as the mean of the normal distribution. Strictly speaking, to obtain a population parameter one must measure an entire population, which is mostly infeasible, so instead one generally uses estimators (rules or formulae) to infer population parameters from observed samples. For any parameter, there are usually multiple estimators with diverse statistical properties. As an example, suppose we have n observations of some phenomenon X; then we can estimate the population mean (μ) using two well-known estimators, the sample mean, μ̂_1 = (1/n) Σ_{i=1}^{n} X_i, and the sample median, μ̂_2 = X_{0.5}. In contrast to parameters, estimators are not numbers but functions characterized by their distributions, expected values, and variances. An estimate is a particular numerical value obtained by applying an estimator. Good estimators are unbiased, i.e., their means equal the true parameter value, and have small variability, i.e., their estimates do not differ strongly. Considering that just one estimate per method is available in a typical research situation, an estimator with a narrow range is usually better than one with a broad range. For instance, both estimators of μ are normally distributed and unbiased in large samples; but μ̂_1 is a better estimator of μ than μ̂_2 because its variance is considerably smaller.

The Johansen test is the estimator of the present study. Among other things, it estimates the cointegrating vector β = (1, β0)´. Fig 1 shows that the estimate of β0 for the simulated CI(1) system is -0.61. Since the actual parameter is known in this case (β0 = -0.4/0.6 = -0.67), the estimated value can be compared with it to obtain the estimation error: -0.67 - (-0.61) = -0.06. Working with simulated data therefore allows one to quantify the quality of the estimation. In contrast, the parameter value of the empirical system in Fig 1 is unknown; hence the difference between the estimate of β0 (-0.93) and the true parameter is incalculable, which makes quantification of the estimation error impossible. The examples of Fig 1 thus illustrate that an evaluation of the performance of the Johansen test is only possible with simulated data.

Generally, the quality of an estimator can be determined empirically by means of Monte Carlo simulations. For instance, computational algorithms generate a population with a known parameter value, and repeated samples of the same size are drawn from this population, e.g., 1000 CI(1) systems with β0 = -1 and T = 100. An estimator (e.g., the Johansen test) is then applied to the data, yielding 1000 estimates of the parameter β0. For a good estimator, the variability of the estimates must be low, with the mean or median near the true parameter value.
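A bare-bones version of such a Monte Carlo experiment might look as follows; this is a sketch, not the study's generating code from S2–S4 Files. The data-generating values follow the example in the text (β0 = −1 via b1 = b2 = 1, T = 100), and the "% OUT" fences are computed as median ± 1.5·IQR, which is how the Summary section appears to use them:

```r
library(urca)

set.seed(6)
n_rep <- 1000
T <- 100

beta0_hat <- replicate(n_rep, {
  xt <- cumsum(rnorm(T))            # common I(1) trend
  y1 <- xt + rnorm(T)               # b1 = 1
  y2 <- xt + rnorm(T)               # b2 = 1, hence beta0 = -b1/b2 = -1
  jo <- ca.jo(cbind(y1, y2), type = "trace", ecdet = "none", K = 2)
  jo@V[2, 1] / jo@V[1, 1]           # cointegrating vector normalized to (1, beta0_hat)'
})

# For a good estimator, mean and median should lie close to -1 with a small IQR
c(mean = mean(beta0_hat), median = median(beta0_hat), IQR = IQR(beta0_hat))

# Percentage of estimates outside median +/- 1.5 * IQR (the "% OUT" criterion)
lo <- median(beta0_hat) - 1.5 * IQR(beta0_hat)
hi <- median(beta0_hat) + 1.5 * IQR(beta0_hat)
100 * mean(beta0_hat < lo | beta0_hat > hi)
```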

Method

The following Monte Carlo experiments evaluate the performance of the Johansen test for bivariate time series systems measured on different scales, with or without missing values. The percentage of false model identifications and the precision of the estimate of the cointegrating parameter β0 from Eq 6 serve as quality indicators. The mean, median, interquartile range (IQR), and percentage of estimates outside the 1.5 interquartile range (% OUT), obtained from 1000 replications, are used as accuracy indicators of the parameter estimation. All computations were performed with the R software.

For clarity, we first evaluated the performance of the Johansen test for continuous data as a function of system type (I(0), I(1), CI(1)), significance level (10%, 5%, 1%), time series length (T = 10 to T = 500), and parameterization (e.g., β0 = −b1/b2 for cointegrated systems; in stationary and integrated systems β0 = 0, and b1 ≠ b2 means that a bivariate system consists of time series with unequal variances). This first experiment with N = 1000 replications delivered the following outcome: the Johansen test achieved an acceptable discriminating performance, with less than 5% misclassification at the 1% level of significance, in samples of at least 70 observations. The effect of the system type on classification accuracy was stronger than the influence of the parameterization; the best results were obtained for stationary systems. Parameter estimates were unbiased (mean ≈ median), and the accuracy of estimation improved distinctly with growing sample size. Based on these results for continuous data, the following presentations are confined to T = 100, α = 1%, and the parameterization β0 = −1.

Incomplete data sets represent a widespread problem in psychological research; missing values are particularly prevalent in longitudinal studies [15]. In time series, missing observations usually imply not only shortened samples but also distorted equidistance. To investigate the impact of failing equidistance due to missing values on the performance of the Johansen test, we made the data discrete (only whole numbers were used as time series values) and manipulated the percentage of missing values (from 10% to 30%) as well as the nature of the omissions (regular vs. random).
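A sketch of how such omission patterns can be imposed on a simulated discrete series (my own illustration; the study's generating code is in S2–S4 Files). The deleted time points shorten the series and destroy equidistance; as I read the design, the remaining observations are then analyzed as if they were equally spaced:

```r
set.seed(7)
T <- 100
y <- round(cumsum(rnorm(T, sd = 3)))              # a discrete (whole-number) series

# Regular omissions: every k-th measurement is missing
drop_regular <- seq(from = 5, to = T, by = 5)     # every 5th value
y_regular    <- y[-drop_regular]

# Random omissions: a fixed percentage of measurements is missing at random positions
drop_random <- sample(T, size = round(0.30 * T))  # 30% random missing values
y_random    <- y[-drop_random]

length(y_regular); length(y_random)               # shortened, non-equidistant series
```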

To investigate the effect of scaling on the performance of the Johansen test, we compared test decisions and the quality of estimation as a function of the level of measurement. We created various interval and ordinal scales as described in detail by Baker, Hardyck, and Petrinovich (1966) [16]. For both scale types, we varied the number of points from 3 through 10. In the ordinal case, we also manipulated the interval size between levels: the interval size varied randomly, increased from the median, decreased from the median, or increased monotonically.
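As an illustration of such discretizations (a sketch in the spirit of the Baker, Hardyck, and Petrinovich transformations; the specific cut points below are my own simple choices, not the exact scales used in the study):

```r
set.seed(8)
y <- arima.sim(model = list(ar = 0.5), n = 100)   # continuous series

# Interval scale: k equally wide categories
k <- 5
y_interval <- cut(y, breaks = k, labels = FALSE)

# Ordinal scale: k categories of unequal width, here with the
# interval size increasing monotonically from the lowest to the highest category
cuts <- min(y) + (max(y) - min(y)) * cumsum((1:(k - 1))^2) / sum((1:k)^2)
y_ordinal <- cut(y, breaks = c(-Inf, cuts, Inf), labels = FALSE)

table(y_interval)
table(y_ordinal)
```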

Additionally, we examined the impact of score limitations at the top or the bottom of a scale on the performance of the Johansen test. In longitudinal analyses, ceiling effects can cause incorrect model selections and biased parameter estimations [17]. To investigate the impact of ceiling or floor effects, we created different data sets by merging a portion of the marginal values of a 10-point scale. The merged proportion varied from 30% through 50%, with the scales bounded above, below, or on both sides. For instance, in a scale bounded above, the values 7, 8, 9, and 10 were collapsed into 7, limiting the range of the scale.
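A minimal sketch of this merging step (illustrative only; the study's generating code is in S2–S4 Files). Collapsing the top categories of a 10-point scale produces a ceiling, collapsing the bottom categories a floor:

```r
set.seed(9)
y10 <- sample(1:10, size = 100, replace = TRUE)   # stand-in for a series on a 10-point scale

y_ceiling <- pmin(y10, 7)              # bounded above: values 7, 8, 9, 10 collapsed to 7
y_floor   <- pmax(y10, 4)              # bounded below: values 1, 2, 3, 4 collapsed to 4
y_both    <- pmax(pmin(y10, 8), 3)     # bounded on both sides

table(y_ceiling)
```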

R codes for generating the data are provided in the files Dataset I(0), Dataset I(1), and Dataset CI in S2–S4 Files. For more examples and elaborated explanations of how to simulate interval or ordinal data with missing values or with floor and ceiling effects, consult Gruber (2011) [18].

In sum, the described Monte Carlo studies generated the following types of psychometric data: continuous data, interval data with different numbers of points, ordinal data with different numbers of points, ordinal data with various distances between levels, interval and ordinal data with varying percentages of missing values and with equidistant and non-equidistant omissions, and interval and ordinal data with scales bounded above, below, or on both sides. There are two main goals of time series analysis: identifying the nature of the phenomenon represented by the sequence of observations (model identification) and predicting future values of the time series variable, which requires accurate parameter estimation. The present study examines how well these goals can be attained for each data type.

Results

Fig 2 shows that the Johansen test achieved an acceptable discriminating performance, with less than 5% misclassification, in samples of at least 70 observations. The estimates of β0 proved to be unbiased; Fig 2 therefore visualizes their variability as a function of sample size and demonstrates a distinct improvement in the accuracy of the parameter estimation with increasing T. The results suggest T = 100 as a parsimonious optimal time series length.

Fig 2. Performance of the Johansen test as a function of sample size.


Results presented in Table 1 show that in samples with 100 intended observations the Johansen test can cope with up to 30% missing values: less than 5% incorrect model identifications were observed, and the estimates of β0 were unbiased and precise. The test demonstrated similar discriminating and estimating quality in the regular and random cases. Therefore, the Johansen test does not necessarily require equidistant measurements for correct inference.

Table 1. Performance of the Johansen test in time series with missing values.

Columns: % of misclassifications for I(1), CI(1), and I(0) systems; accuracy of the estimate of β0 = −1 (MEDIAN, IQR, % OUT).

Condition | I(1) | CI(1) | I(0) | MEDIAN | IQR | % OUT
complete data | 3.6 | 1.9 | 0 | −1.002 | 0.058 | 3.7
every 10th value is missing | 3.2 | 1.6 | 0 | −1.003 | 0.060 | 3.6
every 7th value is missing | 3.3 | 1.7 | 0 | −1.033 | 0.062 | 5.2
every 5th value is missing | 2.6 | 1.6 | 0 | −1.035 | 0.058 | 4.0
every 3rd value is missing | 2.5 | 4.2 | 0.4 | −1.147 | 0.071 | 4.8
10% random missing values | 3.2 | 1.8 | 0 | −1.005 | 0.056 | 4.3
14% random missing values | 3.2 | 1.9 | 0 | −1.004 | 0.061 | 3.8
20% random missing values | 1.9 | 1.5 | 0 | −1.002 | 0.063 | 4.6
30% random missing values | 1.9 | 4.3 | 0.2 | −1.002 | 0.072 | 4.2

Table 2 summarizes the most important results concerning the levels of measurement. A 10-point scale seems to be necessary for satisfactory discriminating accuracy with less than 10% of model misclassifications. For the ordinal scales, the number of points appeared to be more important than the cause of the non-equidistant intervals; Table 2 therefore provides summary statistics pooled over the data with different interval sizes between levels. The most common error was misidentification of integrated systems as cointegrated. Scales with more than 5 points were necessary for satisfactory performance of the Johansen test when dealing with processes containing stochastic trends. A 7-point ordinal scale was sufficient for accurate parameter estimation in the stationary case. Recall that the estimation of β0 in cointegrated systems is based on a stationary linear combination.

Table 2. Performance of the Johansen test in time series measured on different scales.

Columns: % of misclassifications for I(1), CI(1), and I(0) systems; accuracy of the estimate of β0 = −1 (MEDIAN, IQR, % OUT).

Condition | I(1) | CI(1) | I(0) | MEDIAN | IQR | % OUT
continuous data | 3.6 | 1.9 | 0 | −1.002 | 0.058 | 3.7
10-point interval scale | 5.6 | 2.3 | 0 | −1.003 | 0.060 | 4.0
7-point interval scale | 8.9 | 2.2 | 0 | −1.001 | 0.067 | 3.7
5-point interval scale | 14.0 | 2.5 | 0 | −0.999 | 0.068 | 4.0
3-point interval scale | 29.1 | 6.8 | 0 | −1.000 | 0.082 | 4.5
7-point ordinal scale | 16.6 | 4.4 | 0 | −1.002 | 0.097 | 4.0
10-point ordinal scale | 8.2 | 3.3 | 0 | −1.003 | 0.076 | 3.5

Table 3 presents simulation results for data with floor and ceiling effects. It demonstrates that even strong aggregation (50%) was not problematic for stationary processes. For merged proportions of no more than 30%, the test performance was even comparable to that based on untransformed continuous data. In systems with trending processes, however, the scale limitations were clearly disadvantageous for the performance of the Johansen test, because ceiling and floor effects distinctly increased the number of incorrect model selections. Limited scales obviously fail to capture the fluctuations of trending series adequately.

Table 3. Performance of the Johansen test in time series with floor and ceiling effects.

Columns: % of misclassifications for I(1), CI(1), and I(0) systems; accuracy of the estimate of β0 = −1 (MEDIAN, IQR, % OUT).

Condition | I(1) | CI(1) | I(0) | MEDIAN | IQR | % OUT
untransformed data | 3.6 | 1.9 | 0 | −1.002 | 0.058 | 3.7
scale bounded above (30%) | 15.6 | 4.0 | 0 | −1.001 | 0.063 | 4.2
scale bounded below (30%) | 20.1 | 2.4 | 0 | −1.001 | 0.064 | 4.3
scale bounded above (50%) | 29.2 | 11.3 | 0 | −1.002 | 0.114 | 5.3
scale bounded on both sides (30%) | 16.4 | 1.8 | 0 | −1.000 | 0.046 | 5.1

Summary and conclusions

Psychometric data are often "imperfect": measurements usually originate from ordinal scales like Likert questionnaires, and missing observations are quite common. Consequently, psychological data from longitudinal designs are normally discrete values of a limited range with failing equidistance between them. The goal of the present study was to determine under which conditions the sophisticated time series techniques implemented in the Johansen test work properly with data from empirical psychology.

The main results of the conducted Monte Carlo simulations are: (1) Time series analyses require samples of at least 70 observations for accurate estimation and inference. (2) Discrete data and failing equidistance of measurements due to irregular missing values appear unproblematic. Thus, interruptions on weekends when investigating daily phenomena are harmless, and supplementary data collection in order to obtain an appropriate sample size is reasonable in longitudinal studies. (3) Relevant characteristics of stationary processes can be adequately captured using 5- or 7-point ordinal scales. (4) For trending processes, at least 10-point scales are necessary to ensure an acceptable quality of estimation and inference. Moreover, it is essential to consider a possible growth of the processes during scale construction, since ceiling or floor effects are especially consequential for series containing trends.

The results of the present study, among other things, allow for a better assessment of empirical findings from longitudinal research. For instance, in the marital interaction study reported above, the Johansen test indicated that the mood time series of the married couple obtained on 144 successive days were cointegrated, with an estimate of β0 = -0.93. The mood measures originated from a questionnaire with 58 items like "Right now I feel good". The momentary intensity of emotions was rated with the answers 1 = definitely not, 2 = not, 3 = not really, 4 = a little, 5 = very much, 6 = extremely. In summary, the data show the following characteristics: 6-point ordinal scale, T = 144, 0% missing values, standardized time series (i.e., with equal means and variances). In the Monte Carlo experiments, 6-point ordinal data with T ≈ 150 and 7-point ordinal data with T = 100 provided similar results. Thus, it follows from Table 2 that, under these conditions, the estimated probability of a correct identification for CI(1) systems is about 96%. On the other hand, up to 16.6% of I(1) systems can be misclassified as CI(1). The present study demonstrated that distinct ceiling or floor effects are to be expected for 6-point ordinal data. Such scale limitations impede the identification of cointegrated time series, i.e., they promote misclassifications of CI(1) systems. Moreover, the estimate of β0 = -0.93 from the empirical example indicates the following 1.5 interquartile range for β0: [-0.93 - 1.5·0.097; -0.93 + 1.5·0.097] = [-1.076; -0.785]. The findings from the present study suggest that, under the conditions described, CI(1) systems with β0 = -1 are associated with the following 1.5 interquartile range for the estimates of β0: [-1.002 - 1.5·0.097; -1.002 + 1.5·0.097] = [-1.148; -0.857]. The estimate from the empirical example (-0.93) lies inside this interval and is therefore plausible for CI(1) data with β0 = -1 (see % OUT in Table 2). Consequently, the analysis suggests that the evidence for cointegration is rather strong in this case.

The following examples demonstrate how the findings of the present Monte Carlo experiments can be implemented in applied research. For this purpose, two clinical studies are used. The first study aimed at analyzing temporal relationships between awakening cortisol and psychosocial variables in inpatients with anorexia nervosa [4]. The items assessing psychosocial variables captured anticipations, depressive feelings, nervousness, anxiety, or stress (e.g., "Today, I am starting the day with positive anticipations", "At the moment, I feel nervous"). The goal of the second clinical study was to investigate the interaction between emotional intolerance and core symptoms of anorexia nervosa over the course of inpatient treatment [3]. Items such as "Today, I could not tolerate unpleasant emotions" assessed emotional intolerance. The essential symptoms of anorexia nervosa were rated with items monitoring restraint over eating, weight concern, fear of losing control over eating, and preoccupation with food, e.g., "Today, I had a definite fear of losing control over eating." Multivariate time series analyses require similar scaling for all variables within a system. Therefore, in the first study, the psychosocial variables needed a wide numeric scale to associate them appropriately with the continuous cortisol measurements. Both studies deal with potentially trending phenomena, since changes in mood or clinical symptoms during treatment are probable. The present Monte Carlo simulations demonstrated that limited scales fail to capture the fluctuations of trending series adequately. Moreover, they showed that at least 10-point scales are necessary to ensure an acceptable quality of estimation and inference for trending processes. Thus, to mimic a metric scale and to account for a possible growth of the processes during scale construction, measurements of psychosocial variables, emotional intolerance, and symptoms of anorexia nervosa were obtained as follows: patients rated each item on a visual analogue scale with bipolar labels. The marked points were converted by a computer program to a numeric scale from 0 to 100, visible to the patient while completing the questionnaire. In both studies, data were collected daily. The present Monte Carlo experiments suggested that time series analyses require samples of at least 70 observations for accurate estimation and inference. Furthermore, it follows from the simulations that supplementary data collection to guarantee an appropriate sample size is reasonable in the presence of missing values. Therefore, the clinical samples of both studies should primarily include patients with an intended inpatient stay of at least three months.

Supporting information

S1 File. Appendix.

(R)

S2 File. Dataset CI.

(R)

S3 File. Dataset I (0).

(R)

S4 File. Dataset I (1).

(R)

S5 File. Empirical example.

(CSV)

Acknowledgments

The author would like to thank Antje Heinle (Gruber) for her support in developing R codes for Monte Carlo simulations.

Data Availability

The data are generated in Monte Carlo simulations. R codes for generating the data are provided in the Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work.

References

1. Tschacher W, Ramseyer F. Modeling psychotherapy process by time-series panel analysis (TSPA). Psychother Res. 2009;19: 469–481. doi: 10.1080/10503300802654496
2. Rosmalen JGM, Wenting AMG, Roest AM, de Jonge P, Bos EH. Revealing causal heterogeneity using time series analysis of ambulatory assessments: Application to the association between depression and physical activity after myocardial infarction. Psychosom Med. 2012;74: 377–386. doi: 10.1097/PSY.0b013e3182545d47
3. Stroe-Kunold E, Friederich HC, Stadnitski T, Wesche D, Herzog W, Schwab M, et al. Emotional intolerance and core features of anorexia nervosa: A dynamic interaction during inpatient treatment? Results from a longitudinal diary study. PLoS One. 2016;11: e0154701. doi: 10.1371/journal.pone.0154701
4. Wild B, Stadnitski T, Wesche D, Stroe-Kunold E, Schultz JH, Rudofsky G, et al. Temporal relationships between awakening cortisol and psychosocial variables in inpatients with anorexia nervosa – A time series approach. Int J Psychophysiol. 2016;102: 25–32. doi: 10.1016/j.ijpsycho.2016.03.002
5. Keller F, Stadnitski T, Nützel J, Schepker R. Process analysis of weekly self- and external assessments of adolescents with substance abuse disorder during long-term psychotherapy. Z Kinder Jugendpsychiatr Psychother. 2019;47: 126–137. doi: 10.1024/1422-4917/a000594
6. Johansen S. Statistical analysis of cointegration vectors. J Econ Dyn Control. 1988;12: 231–254. doi: 10.1016/0165-1889(88)90041-3
7. Johansen S. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica. 1991;59: 1551–1580. doi: 10.2307/2938278
8. Stroe-Kunold E, Werner J. Modeling human dynamics by means of cointegration methodology. Methodology. 2008;4: 113–131. doi: 10.1027/1614-2241.4.3.113
9. Stadnitski T, Wild B. How to deal with temporal relationships between biopsychosocial variables: A practical guide to time series analysis. Psychosom Med. 2019;81: 289–304. doi: 10.1097/PSY.0000000000000680
10. Stadnitski T. Multivariate time series analyses for psychological research. Hamburg: Verlag Dr. Kovac; 2014.
11. Stadnytska T. Deterministic or stochastic trend: Decision on the basis of the augmented Dickey-Fuller test. Methodology. 2010;6: 83–92. doi: 10.1027/1614-2241/a000009
12. Stroe-Kunold E, Gruber A, Stadnytska T, Werner J, Brosig B. Cointegration methodology for psychological researchers: An introduction to the analysis of dynamic process systems. Br J Math Stat Psychol. 2012;65: 511–539. doi: 10.1111/j.2044-8317.2011.02033.x
13. Pfaff B. VAR, SVAR and SVEC models: Implementation within R package vars. J Stat Softw. 2008;27: 1–32. doi: 10.18637/jss.v027.i04
14. Kupfer J, Brosig B, Brähler E. A multivariate time-series approach to marital interaction. Psychosoc Med. 2005;2: Doc08.
15. Graham JW. Missing data analysis: Making it work in the real world. Annu Rev Psychol. 2009;60: 549–576. doi: 10.1146/annurev.psych.58.110405.085530
16. Baker BO, Hardyck CD, Petrinovich LF. Weak measurements vs. strong statistics: An empirical critique of S. S. Stevens' proscriptions on statistics. Educ Psychol Meas. 1966;26: 291–309. doi: 10.1177/001316446602600204
17. Wang L, Zhang Z, McArdle JJ, Salthouse TA. Investigating ceiling effects in longitudinal data analysis. Multivariate Behav Res. 2008;43: 476–496. doi: 10.1080/00273170802285941
18. Gruber A. Kointegration in Theorie und Praxis: Statistische Analyse gemeinsamer Entwicklungstrends in psychologischen Zeitreihensystemen. Dissertation, Heidelberg University; 2011. Available from: http://archiv.ub.uni-heidelberg.de/volltextserver/12019/1/DissertationGruberAntje.pdf

Decision Letter 0

Stephan Doering

9 Jan 2020

PONE-D-19-32949

Time series analyses with psychometric data

PLOS ONE

Dear Dr. Stadnitski,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by February 9, 2020. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Stephan Doering, M.D.

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors present a study in which they evaluate the performance of the Johansen test as a function of the data quality (using a Monte Carlo approach). They apply different types of transformations on the data (e.g., missing values, changing the scale and floor/ceiling effects) and evaluate the effect of these. There is no technical novelty in the proposed approach, this is a sole experimental study. My main concern is the lack of more experiments to support their claims: the authors use only one dataset for the conducted experiments and expect that the conclusions will generalize on any type of psychometric data. Experiments with at least two additional (and diverse) datasets are expected to support the claims of the authors. Furthermore, there are several typos in the manuscript (e.g., "maxim likelihood estimation", l.49, "autregrssive", l.60). Finally, the actual contribution of the study is unclear: the authors should better highlight the contribution of the paper and demonstrate that this study will be indeed important for the scientists of the relevant field.

Reviewer #2: In this paper, authors present Time series analyses with psychometric data. This reviewer’s comments are as follows:

[1] Equation (2) seems incorrect or there is typo of term ut-1 as it should be ut-i

[2] It would be better if author can provide two-three real-life applications of the proposed study by considering details of actual problems and solutions of same in Results section.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Pushpendra Singh, NIT Hamirpur, HP, India

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 16;15(4):e0231785. doi: 10.1371/journal.pone.0231785.r002

Author response to Decision Letter 0


1 Feb 2020

Reviewer #1 (1): Explanations that dozens of different data types were generated in the Monte Carlo simulations of the present study are provided (241-249).

Reviewer #1 (2): Typos are corrected.

Reviewer #1 (3): Contribution of the study is explained (309-360).

Reviewer #2 (1): Equation 2 is improved.

Reviewer #2 (2): Real-life applications are provided (309-360).

Editor (1): PLOS ONE's style requirements are checked.

Editor (2): A study's minimal data set is provided as supplementary files S2_File.R, S3_File.R, S4_File.R.

Editor (3): Figures are uploaded to the PACE.

Attachment

Submitted filename: Response to Reviewers.doc

Decision Letter 1

Stephan Doering

6 Mar 2020

PONE-D-19-32949R1

Time series analyses with psychometric data

PLOS ONE

Dear Dr. Stadnitski,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by April 5, 2020. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Stephan Doering, M.D.

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: As I have already explained in my original review, authors should perform additional experiments using *real* data to demonstrate their point. Using simulations to generate data is not enough, since in most domains, the actual distribution differs from the one used in simulations. Further evidence should be provided if this is not the case for psychometric data.

Reviewer #2: Author has addressed all the issues raised by reviewers. Now, paper may be accepted for publication.

Reviewer #3: Given the time series applications and the Monte-Carlo evaluations, it appears that the investigators have met the goal of clarifying the conditions for satisfactory performance of the methods with data typical in psychological and psychosomatic research. The missing and non-equidistant issues certainly could use a fresh look from past procedures.

The Johansen test, in this context, does appear to distinguish between the different types of dynamic systems by estimating the rank of the matrix Π. In addition, the Johansen procedure provides the maximum likelihood statistical estimation of the parameters from the equations related to the cointegrated systems. Figure 2 is of interest and the lag patterns make sense for the different systems.

Examining the supplemental material, particularly the data sets for S2 to S4, these are really not data sets but simulated examples. The real life types of examples to which these procedures could be applied are descriptively outlined by the authors on lines 309 to 360. It would have been more helpful to have actual data associated with these types of examples to demonstrate the procedures numerically.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Pushpendra Singh, PhD, Department of ECE, NIT Hamirpur (HP) India

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 16;15(4):e0231785. doi: 10.1371/journal.pone.0231785.r004

Author response to Decision Letter 1


18 Mar 2020

• 197-232: Explanations that the aim of the study can be achieved only with simulated data.

• The data generated in the study are typical for psychology: non-metric, with missing values... The evaluation method used here is a standard statistical procedure to answer the questions of the study adequately.

• 65-68: Please note that numerous empirical examples with psychometric data are already provided elsewhere.

• The data sets in S2 to S4 provide all simulated data of the study. For instance, in S2 the commands “X=ordinal_10_m(ls$X),Y=ordinal_10_m(ls$Y)” at the end of the code generate 10 cointegrated ordinal measured bivariate systems with length 100 and Beta0=-1, because the set parameters are N=10, T=100, b1=1, b2=1. Changing the parameters to, for example, N=1000 and T=500 generates 1000 cointegrated systems with length 500. With “X1_t=X[,1], Y1_t=Y[,1], CI_1=cbind(X1_t, Y1_t)“ one just gets the first system or with “X1_t=X[,10], Y1_t=Y[,10], CI_1=cbind(X1_t, Y1_t)“ the 10th one. I included the commands print(X) and print(Y) in the codes to demonstrate this (the simulated data can be viewed). Employing a for-loop one can access all N systems successively.

• Please note that the simulated examples in Fig 1 (S1) are metric, S2 to S4 provide simulations codes for generating non-metric (interval, ordinal…) data of the study.

• The data of the empirical example in Fig 1 are provided as S5_File.csv (S5). The R codes for analyzing the data are included in the appendix (S1: S1_File.R) to show that the Johansen test handles simulated and non-simulated data equally. There are no fundamental differences between these data and other time series from studies outlined on lines 309 to 360.

• Please note that numerous empirical examples with psychometric data are already provided elsewhere (see lines 65-68) and that the aim of the study can be achieved only with simulated data (lines 197-232).
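To make the loop idea above concrete, here is a minimal R sketch, not the authors' own script: it assumes that X and Y are the matrices produced by the S2 code, with column i holding the i-th simulated series of length T. The Johansen test itself is run with ca.jo from the urca package, the standard R implementation; the helper name run_johansen_on_all and the lag order K = 2 are illustrative choices, not taken from the supplement.

library(urca)  # provides ca.jo, the Johansen cointegration test

# Run the Johansen trace test on every simulated bivariate system.
# X and Y are assumed to be T x N matrices: column i is the i-th series.
run_johansen_on_all <- function(X, Y, lags = 2) {
  stopifnot(ncol(X) == ncol(Y))
  results <- vector("list", ncol(X))
  for (i in seq_len(ncol(X))) {
    system_i <- cbind(X_t = X[, i], Y_t = Y[, i])  # i-th bivariate system
    results[[i]] <- ca.jo(system_i, type = "trace", ecdet = "const", K = lags)
  }
  results
}

# Example: inspect the trace statistics of the first simulated system
# all_tests <- run_johansen_on_all(X, Y)
# summary(all_tests[[1]])

# The same call would apply to the empirical series from S5, assuming the
# CSV holds the two series as columns:
# emp <- read.csv("S5_File.csv")
# summary(ca.jo(as.matrix(emp), type = "trace", ecdet = "const", K = 2))

The lag order K used in the actual analyses should follow the manuscript's specification; K = 2 here is only a placeholder.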

Attachment

Submitted filename: Response to Reviewers.doc

Decision Letter 2

Stephan Doering

1 Apr 2020

Time series analyses with psychometric data

PONE-D-19-32949R2

Dear Dr. Stadnitski,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Stephan Doering, M.D.

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: In this paper, the author presents time series analyses with psychometric data. The author has addressed this reviewer's comments, and the paper may be accepted for publication.

Reviewer #3: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Pushpendra Singh, PhD, Department of ECE, NIT Hamirpur India

Reviewer #3: No

Acceptance letter

Stephan Doering

6 Apr 2020

PONE-D-19-32949R2

Time series analyses with psychometric data

Dear Dr. Stadnitski:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Stephan Doering

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Appendix. (R)

    S2 File. Dataset CI. (R)

    S3 File. Dataset I(0). (R)

    S4 File. Dataset I(1). (R)

    S5 File. Empirical example. (CSV)

    Attachment

    Submitted filename: Response to Reviewers.doc

    Attachment

    Submitted filename: Response to Reviewers.doc

    Data Availability Statement

    The data are generated in Monte Carlo simulations. R codes for generating the data are provided in the Supporting Information files.

