PLOS One. 2020 Sep 18;15(9):e0239494. doi: 10.1371/journal.pone.0239494

Segmentation of time series in up- and down-trends using the epsilon-tau procedure with application to USD/JPY foreign exchange market data

Arthur Matsuo Yamashita Rios de Sousa 1, Hideki Takayasu 1,2, Misako Takayasu 1,3,*
Editor: J E Trinidad Segovia
PMCID: PMC7500655  PMID: 32946503

Abstract

We propose the epsilon-tau procedure to determine up- and down-trends in a time series, working as a tool for its segmentation. The method denomination reflects the use of a tolerance level ε for the series values and a patience level τ in the time axis to delimit the trends. We first illustrate the procedure in discrete random walks, deriving the exact probability distributions of trend lengths and trend amplitudes, and then apply it to segment and analyze the trends of U.S. dollar (USD)/Japanese yen (JPY) market time series from 2015 to 2018. Besides studying the statistics of trend lengths and amplitudes, we investigate the internal structure of the trends by grouping trends with similar shapes and selecting clusters of shapes that rarely occur in the randomized data. Particularly, we identify a set of down-trends presenting similar sharp appreciation of the yen that are associated with exceptional events such as the Brexit Referendum in 2016.

Introduction

Time series segmentation consists of dividing the original time series into segments with similar behavior according to some criteria; it can be either a preprocessing step to represent the time series more efficiently or a data mining technique in its own right, able to extract information about the dynamics of the underlying phenomenon [1, 2]. Segmentation methods have been utilized to analyze time series of diverse backgrounds, including biological, climate, remote sensing and crime-related data [3–9].

Especially in the context of finance, various techniques have been developed to segment time series by identifying periods with similar behavior or by finding switching points [10–17]. A particularly relevant category of such procedures is the segmentation of time series into up- and down-trends, since the identification of periods with a general tendency of increase or decrease is fundamental for risk management and for recognizing investment opportunities. Often referred to as drawdowns and drawups, there are several methods to determine trends in financial time series [18–22]. A strict definition of drawdown (drawup) is the continuous decrease (increase) of the series values, terminated by any movement in the opposite direction. Such a definition, however, may not be adequate to correctly assess market risks because it is too sensitive to noise. Addressing this limitation, the epsilon-drawdown method was proposed; its improvement is based on the introduction of a tolerance ε within which fluctuations are ignored and the trend is not interrupted [23–25]. Another way to reduce the noise sensitivity is to ignore movements within a time horizon τ, a method suggested but not discussed in [23]. In this work, we unify and extend those previous ideas and propose the epsilon-tau procedure, which simultaneously makes use of a tolerance level ε and a patience level τ to determine up- and down-trends in time series.

As for the paper outline, in the following section we present the epsilon-tau procedure as a method to define the up- or down-trend associated with a given reference point in a time series. We then illustrate its application in simple random walks, deriving exact expressions for the marginal probability distributions of trend lengths and trend amplitudes, and explain how to employ the epsilon-tau procedure to segment time series. Finally, we apply the segmentation to analyze foreign exchange data consisting of U.S. dollar/Japanese yen market time series from 2015 to 2018. We pay special attention to the internal structure of the trends, performing a systematic investigation of the trend shapes that occur in the market time series and introducing an approach based on Fisher's exact test to select abnormal shapes that are rarely produced when the data is randomized.

Epsilon-tau procedure

Consider a time series {x_t} and a reference point t = m with value x_m. We say that the trend associated with the reference point is an up-trend if x_{m+1} − x_m > 0 and a down-trend if x_{m+1} − x_m < 0; if x_{m+1} − x_m = 0, the trend is not determined for that reference point.

For the up-trend case (analogous for the down-trend case), the epsilon-tau procedure with tolerance level ε > 0 and patience level τ ≥ 1 (both possibly time-dependent) consists in comparing the values x_t, t ≥ m + 1, with the previous rightmost maximum value max_{m+1 ≤ t′ ≤ t}{x_{t′}}. The procedure stops at t = t* when one of the following conditions is met:

  • (a) the value of the time series reaches the tolerance level ε (Fig 1a):
    \max_{m+1 \leq t \leq t^*} \{x_t\} - x_{t^*} \geq \varepsilon; (1)
  • (b) the time between consecutive maximum values reaches the patience level τ (Fig 1b):
    t^* - \operatorname{argmax}_{m+1 \leq t \leq t^*} \{x_t\} \geq \tau. (2)

Fig 1. Stop conditions of the epsilon-tau procedure for the up-trend case (analogous for the down-trend case).


Procedure stops when: (a) the value of the time series reaches the tolerance level ε; or (b) the time between consecutive maximum values reaches the patience level τ. It defines the up-trend (red) of length ℓ ≥ 1 and amplitude a = x_{m+ℓ} − x_m > 0.

The trend determined from this procedure is the up-trend [m + 1, m + ℓ] of length ℓ = argmax_{m+1 ≤ t ≤ t*}{x_t} − m, ℓ ≥ 1, and amplitude a = x_{m+ℓ} − x_m, a > 0, where x_{m+ℓ} = max_{m+1 ≤ t ≤ t*}{x_t}. Observe that the trend ends at m + ℓ and not at t*; the point t* indicates the stop of the procedure, with at least one point and at most τ points beyond the end of the trend needing to be checked in order to determine it.

We remark that the epsilon-tau procedure does not require a predetermined functional form by which trends are approximated, as happens, for instance, in piecewise linear methods, where trends are approximated by straight lines [2]; this is an important feature that allows us to explore the diversity of possible trend shapes.

The epsilon-drawdown method used in [23–25] can be regarded as a particular instance of the epsilon-tau procedure with infinite patience level τ → ∞. Developed in a financial context, the cited works selected a time-dependent tolerance level ε proportional to the volatility (a measure of price variation over time) estimated over a preceding time window, being more permissive when the market presents high volatility and stricter during calmer periods. Instead, we use throughout this paper the time-dependent tolerance level ε = max_{m+1 ≤ t′ ≤ t}{x_{t′}} − x_m for the up-trend case (analogous for the down-trend case). With such a tolerance level, stop condition (a) translates to x_{t*} ≤ x_m, i.e., the procedure stops if the reference value x_m is reached, and then all points in the up-trend [m + 1, m + ℓ] have values always between the reference value x_m and the maximum value x_{m+ℓ}: x_m < x_t ≤ x_{m+ℓ}, ∀t ∈ [m + 1, m + ℓ]. Such a choice of tolerance level is related to trading psychology, setting the initial price level as the tolerance for keeping the belief that an up- or down-trend will recover and continue (naturally, this tolerance can be set higher for more aggressive traders or lower for risk-averse ones). As for the patience level τ, we use it here as a constant parameter, interpreted as the interval of time that the observer is willing to wait to confirm that an up- or down-trend has ended. The choice of its value is then connected to the characteristics of the observer and her/his intentions; for example, if the aim is the development of real-time applications, a small τ is more suitable, whereas for historical analysis, larger τ values can provide valuable information.
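The procedure with this choice of tolerance (stop when the series returns to the reference value x_m) and a constant patience level τ can be sketched as follows; `epsilon_tau_trend` and its return convention are our own illustrative names, and ties at the maximum are resolved by taking the rightmost index, matching the "previous rightmost maximum" described above.

```python
def epsilon_tau_trend(x, m, tau):
    """Sketch of the epsilon-tau procedure from reference point m, with the
    paper's tolerance choice (trend ends if the series returns to x[m]) and
    constant patience tau.  Returns (kind, start, end, length, amplitude)
    or None if the trend is not determined (tie or end of series)."""
    if m + 1 >= len(x) or x[m + 1] == x[m]:
        return None                       # trend not determined
    sign = 1 if x[m + 1] > x[m] else -1   # +1: up-trend, -1: down-trend
    y = [sign * v for v in x]             # mirror so both cases read as up-trends
    best, argbest = y[m + 1], m + 1       # rightmost running maximum
    t = m + 1
    while True:
        if y[t] >= best:                  # '>=' keeps the rightmost maximum
            best, argbest = y[t], t
        # stop (a): tolerance -- the series returned to the reference value
        # stop (b): patience  -- tau steps elapsed since the rightmost maximum
        if y[t] <= y[m] or t - argbest >= tau:
            break
        t += 1
        if t >= len(x):                   # finite series: trend undetermined
            return None
    length = argbest - m                  # trend occupies [m + 1, m + length]
    amplitude = x[argbest] - x[m]
    return ('up' if sign == 1 else 'down', m + 1, argbest, length, amplitude)
```

For instance, for x = [0, 1, 2, 1, 1, 1, 3] with τ = 2 the procedure stops two steps after the maximum at t = 2, returning an up-trend of length 2 and amplitude 2.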

In the next section, we apply the epsilon-tau procedure with the described tolerance and patience levels to simple random walks, illustrating the theoretical study of the trend length and trend amplitude probability distributions for different values of τ, and introduce the time series segmentation using the procedure.

Up- and down-trends in random walks

Take the random walk:

x_t = x_{t-1} + \xi_t, (3)

where the independent and identically distributed increments ξ_t can take value +1 with probability p, −1 with probability q, or 0 with probability r = 1 − p − q.

For the strict definition of drawdown, which corresponds to tolerance level ε approaching zero or patience level τ = 1 in the epsilon-tau procedure, it was shown in [22] that the trend length and trend amplitude marginal probability distributions are asymptotically exponential when the time series increments are independent with a non-heavy-tailed distribution. Such asymptotic exponential behavior also occurs when applying the epsilon-tau procedure to the considered random walk, as we show next by presenting the exact expressions for the probability distributions and numerical simulations. The detailed derivation of the distributions can be found in S1 Appendix.

Trend length marginal probability distribution

The up-trend length probability distribution for patience level τ = 1 is given by (analogous for down-trend):

P(\text{up}, \ell; \tau = 1) = p\,(p + r)^{\ell - 1}\,q. (4)

For patience level τ = 2, the distribution is:

P(\text{up}, \ell; \tau = 2) = p\,r^{\ell-1}\,q + \frac{p^2 q (r+q)}{\sqrt{(p+q)^2 + 4pq}} \times \left\{ -\frac{p + r - \sqrt{(p+q)^2 + 4pq}}{p - r - \sqrt{(p+q)^2 + 4pq}} \left[ \left( \frac{p + r - \sqrt{(p+q)^2 + 4pq}}{2} \right)^{\ell-1} - r^{\ell-1} \right] + \frac{p + r + \sqrt{(p+q)^2 + 4pq}}{p - r + \sqrt{(p+q)^2 + 4pq}} \left[ \left( \frac{p + r + \sqrt{(p+q)^2 + 4pq}}{2} \right)^{\ell-1} - r^{\ell-1} \right] \right\}. (5)

For patience level τ ≥ 3, the computation becomes involved and we do not derive those distributions here. However, in Fig 2 we show length distributions from numerical simulations for different values of τ and different random walk parameters. We observe agreement with the theoretical distributions for the cases τ = 1 and τ = 2 and the presence of exponential tails in all cases, with the value of τ controlling the decay rate.
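A quick numerical sanity check of Eq (4): summing the distribution over all lengths ℓ should recover p, the probability that the step following the reference point is +1 (so that an up-trend exists at all; with q > 0 the procedure then terminates with a finite ℓ almost surely). A minimal sketch:

```python
p, q = 0.4, 0.4
r = 1.0 - p - q

# Eq (4): P(up, l; tau = 1) = p * (p + r)**(l - 1) * q
mass = sum(p * (p + r) ** (l - 1) * q for l in range(1, 5000))

# Geometric series: sum_{l>=1} p (p+r)^{l-1} q = p q / (1 - p - r) = p,
# so 'mass' should be numerically indistinguishable from p.
```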

Fig 2. Up-trend length probability distributions for random walks.


Distributions for patience levels τ = 1 (black), τ = 2 (red), τ = 3 (blue), τ = 4 (orange), τ = 5 (green), τ = 10 (gray) and for random walk parameters: (a) p = 0.4, q = 0.4; (b) p = 0.5, q = 0.4; (c) p = 0.4, q = 0.5. Symbols refer to results from numerical simulations and lines represent theoretical values. Insets detail distributions for small .

Trend amplitude marginal probability distribution

The up-trend amplitude a probability distribution for arbitrary patience level τ is given by the following expression:

P(\text{up}, a; \tau) = \left[ \prod_{k=0}^{a-1} \frac{p}{1 - \sum_{j=1}^{\tau} P(z_j^k)} \right] \left[ \sum_{j=1}^{\tau} P(s_j^{a(\varepsilon)}) + P(s_\tau^{(a-1)(\tau)}) \right], (6)

where (using results on lattice path enumeration and powers of tridiagonal Toeplitz matrices [26–28]):

P(z_j^k) = \begin{cases} 0, & \text{if } j \geq 2,\ k = 0; \\ r, & \text{if } j = 1,\ k \geq 0; \\ \frac{2pq}{k+1} \sum_{u=1}^{k} \left( \lambda_u^{k+1} \right)^{j-2} \sin^2\!\left( \frac{u\pi}{k+1} \right), & \text{if } j \geq 2,\ k \geq 1, \end{cases} (7)

with \lambda_u^{k+1} = r + 2\sqrt{pq}\,\cos\!\left( \frac{u\pi}{k+1} \right).

P(s_j^{k(\varepsilon)}) = \begin{cases} 0, & \text{if } j \geq 1,\ k = 0 \text{ or } j = 1,\ k \geq 2 \text{ or } j \geq 2,\ k = 1; \\ q, & \text{if } j = 1,\ k = 1; \\ \frac{2q^2}{k} \left( \frac{q}{p} \right)^{\frac{k-2}{2}} \sum_{u=1}^{k-1} \left( \lambda_u^{k} \right)^{j-2} \sin\!\left( \frac{u\pi}{k} \right) \sin\!\left( \frac{(k-1)u\pi}{k} \right), & \text{if } j \geq 2,\ k \geq 2. \end{cases} (8)

And:

P(s_j^{k(\tau)}) = \begin{cases} 0, & \text{if } j \geq 1,\ k = 0; \\ \sum_{u=1}^{k} \frac{2q}{k+1} \left( \frac{q}{p} \right)^{\frac{u-1}{2}} \sum_{v=1}^{k} \left( \lambda_v^{k+1} \right)^{j-1} \sin\!\left( \frac{uv\pi}{k+1} \right) \sin\!\left( \frac{v\pi}{k+1} \right), & \text{if } j \geq 1,\ k \geq 1. \end{cases} (9)

For large amplitudes a > τ we can highlight the exponential behavior and write:

P(\text{up}, a; \tau) = \left[ \prod_{k=0}^{\tau-1} \frac{p}{1 - \sum_{j=1}^{\tau} P(z_j^k)} \right] \left[ \frac{p}{1 - \sum_{j=1}^{\tau} P(z_j^{\tau})} \right]^{a-\tau} P(s_\tau^{\tau(\tau)}). (10)

Fig 3 shows amplitude distributions from numerical simulations for different values of τ and random walk parameters. Simulation results agree with theoretical distributions for all values of τ, which also control the decay rate of the exponential tails.

Fig 3. Up-trend amplitude a probability distributions for random walks.


Distributions for patience levels τ = 1 (black), τ = 2 (red), τ = 3 (blue), τ = 4 (orange), τ = 5 (green), τ = 10 (gray) and for random walk parameters: (a) p = 0.4, q = 0.4; (b) p = 0.5, q = 0.4; (c) p = 0.4, q = 0.5. Symbols refer to results from numerical simulations and lines represent theoretical values. Insets detail distributions for small a.

Time series segmentation

The epsilon-tau procedure can be straightforwardly employed to segment a time series into alternating up- and down-trends by setting the end of a trend as the reference point for the next one. An up-trend is always followed by a down-trend and vice versa (except at the end of the time series, where the last trend may not be determined due to the finite size of the series). An example of a segmented random walk using patience level τ = 7200 is displayed in Fig 4a, where up-trends are colored red and down-trends blue. Fig 4b shows the dependence of the segmentation result on the patience level τ, with larger values of τ producing a coarser segmentation with larger trends on average.
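The chaining described above can be sketched with the self-contained toy implementation below. The names are ours, and the handling of a zero increment at a reference point (where the trend is undetermined) is our own assumption: we simply advance the reference by one step.

```python
def trend_from(x, m, tau):
    """One epsilon-tau trend from reference point m (tolerance = return to
    x[m], patience tau).  Returns (sign, end_index) or None at series end."""
    if m + 1 >= len(x) or x[m + 1] == x[m]:
        return None
    s = 1 if x[m + 1] > x[m] else -1              # +1 up, -1 down
    best, argbest, t = s * x[m + 1], m + 1, m + 1
    while True:
        if s * x[t] >= best:                      # rightmost running extremum
            best, argbest = s * x[t], t
        if s * x[t] <= s * x[m] or t - argbest >= tau:
            return (s, argbest)                   # trend is [m + 1, argbest]
        t += 1
        if t >= len(x):
            return None                           # finite-size effect

def segment(x, tau):
    """Alternating up-/down-trend segmentation: the end of each trend is the
    reference point of the next; flat steps at the reference are skipped
    (our assumption for ties).  Returns (kind, reference, end) triples."""
    m, trends = 0, []
    while m + 1 < len(x):
        if x[m + 1] == x[m]:
            m += 1                                # skip flat step
            continue
        res = trend_from(x, m, tau)
        if res is None:                           # last trend undetermined
            break
        s, end = res
        trends.append(('up' if s == 1 else 'down', m, end))
        m = end
    return trends
```

Because each trend ends at its extremum, the next reference point sits at a local maximum (after an up-trend) or minimum (after a down-trend), which is what forces the alternation.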

Fig 4. Time series segmentation of a random walk realization.


(a) Up- and down-trends segmentation using patience level τ = 7200 for random walk parameters p = 0.4, q = 0.4. (b) Segmentation results for different patience levels τ. Red indicates up-trends, blue indicates down-trends and light-gray (in (a)) or white (in (b)) shows points where the trend is not determined (at the end of the time series, an effect of its finite size).

Note that the marginal probability distributions of length ℓ and amplitude a of trends from the segmentation of a random walk time series differ from the ones derived previously. Such difference arises from the fact that the reference point used to define a given trend is no longer arbitrary, but conditioned to be the end of the previous trend. Figs 5 and 6 make explicit the distinction between the two cases for trend length and trend amplitude, respectively: gray symbols correspond to trends produced by taking arbitrary reference points and black symbols indicate trends resulting from the time series segmentation. In the segmentation case, the stop conditions of the epsilon-tau procedure acting on a trend restrict the next trend, strongly affecting the probability of the small ones (both in length and in amplitude); nevertheless, the decay rates of the exponential tails appear to be the same as in the arbitrary reference point case.

Fig 5. Comparison between up-trend length probability distributions for random walk with parameters p = 0.4, q = 0.4.


Distributions considering arbitrary reference point (gray) and considering the trends obtained from time series segmentation (black) using patience levels: (a) τ = 10; (b) τ = 50; and (c) τ = 100. Symbols refer to results from numerical simulations.

Fig 6. Comparison between up-trend amplitude a probability distributions for random walk with parameters p = 0.4, q = 0.4.


Distributions considering arbitrary reference point (gray) and considering the trends obtained from time series segmentation (black) using patience levels: (a) τ = 10; (b) τ = 50; and (c) τ = 100. Symbols refer to results from numerical simulations and lines represent theoretical values.

Up- and down-trends in financial time series

We now use the time series segmentation by the epsilon-tau procedure to analyze actual financial time series from the foreign exchange market, which has the largest trading volume among all financial markets (6.6 trillion U.S. dollars per day as reported in April 2019 [29]). We use the dataset from the Electronic Broking Service (EBS), one of the main trading platforms in this market, continuously open during weekdays from Sunday 21:00:00 GMT to Friday 21:00:00 GMT. Traders in this platform, mostly banks and financial institutions, can place buy and sell quotes for a given currency pair; the mid-quote is defined at each time as the average of the highest buy quote and the lowest sell quote, and a deal occurs when there is a match between those quotes. We study here the mid-quote time series of the currency pair U.S. dollar (USD)/Japanese yen (JPY) at a time resolution of one second from 2015 to 2018: 51 weeks of 2015, from 2015 January 05 to 2015 December 25; 52 weeks of 2016, from 2016 January 04 to 2016 December 30; 52 weeks of 2017, from 2017 January 02 to 2017 December 29; and 51 weeks of 2018, from 2018 January 08 to 2018 December 28 (each week from Monday 00:00:00 GMT to Friday 12:00:00 GMT).

The considered period includes several events that impacted the financial markets, the Brexit Referendum in June 2016 being among the most relevant [30, 31]. In the foreign exchange market, this event caused the pound sterling to fall against the U.S. dollar to its lowest level since 1985 and produced a strong appreciation of the Japanese yen. We use the week when the Brexit Referendum took place to illustrate the segmentation of financial time series. Fig 7a displays the segmentation of the mid-quote time series of the currency pair USD/JPY in the referred week, in which the surge of the Japanese yen against the U.S. dollar reflects the market realization of the decision of the United Kingdom to leave the European Union on the night of June 23 and morning of June 24. In this example of financial time series segmentation, we use patience level τ = 7200 (2h) for better visualization of the trends in the one-week time frame; Fig 7b shows the segmentation results for different values of τ. Focusing on the yen surge, Fig 8 details the effect of the value of τ on the up- and down-trends of the segmented time series, with small τ emphasizing the microtrends and large τ the trends regarded as macrotrends (for the one-week time frame).

Fig 7. Time series segmentation of the mid-quote time series of the currency pair USD/JPY during the week from June 20 2016 00:00:00 GMT to June 24 2016 12:00:00 GMT, when the Brexit Referendum took place.


(a) Up- and down-trends segmentation using patience level τ = 7200 (2h). (b) Segmentation results for different patience levels τ. Red indicates up-trends, blue indicates down-trends and light-gray (in (a)) or white (in (b)) shows points where the trend is not determined.

Fig 8. Time series segmentation of the mid-quote time series of the currency pair USD/JPY during the 2016 Brexit Referendum.


Segmentation results depend on the patience level used: (a) τ = 60 (1min); (b) τ = 600 (10min); and (c) τ = 1800 (30min).

Trend length and trend amplitude marginal cumulative probability distributions

We start the statistical analysis of the trends obtained from the segmentation of the financial time series for the whole four-year period by constructing the marginal (complementary) cumulative distributions of trend lengths ℓ and absolute trend amplitudes |a| for three values of the patience level: τ = 60 (1min) (highlighting microtrends), τ = 600 (10min) (intermediate case) and τ = 1800 (30min) (highlighting macrotrends). We separate the up- and down-trend cases and also construct the distributions for randomized data. For the randomization, we shuffle the increments of the mid-quote time series of each week individually; because of the high fraction of zero increments in the one-second resolution mid-quote time series (82.87% zero, 8.56% positive and 8.57% negative increments), we consider two kinds of randomization: fixed zeros randomization, where we fix the zero increments in their original positions and shuffle only the positive and negative ones, and total randomization, where all increments are shuffled.
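The fixed zeros randomization can be sketched as follows (`fixed_zeros_shuffle` is our own illustrative name): only the positions holding nonzero increments exchange values, so silent periods stay exactly where they were, and each week's endpoints are preserved because the multiset of increments is unchanged.

```python
import random

def fixed_zeros_shuffle(x, rng):
    """Shuffle the nonzero increments of the series x among their own
    positions, keeping every zero increment where it originally was."""
    inc = [b - a for a, b in zip(x, x[1:])]
    nz_pos = [i for i, d in enumerate(inc) if d != 0]  # nonzero positions
    nz_val = [inc[i] for i in nz_pos]
    rng.shuffle(nz_val)                                # permute nonzeros only
    for i, d in zip(nz_pos, nz_val):
        inc[i] = d
    out = [x[0]]                                       # rebuild the series
    for d in inc:
        out.append(out[-1] + d)
    return out
```

Total randomization would instead shuffle the full increment list; either way the first and last values of the week are unchanged.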

The distributions of trend lengths ℓ are presented in Fig 9. For the small value τ = 60 (1min) (Fig 9a), the distributions corresponding to the total randomization case decay exponentially, while those of the fixed zeros randomization are similar to the distributions of trends from the original mid-quote data, which have tails heavier than an exponential both for up- and down-trends. The similarity between the fixed zeros randomization case and the original data indicates that sequences of zero increments control the length of microtrends; in fact, the trend length distributions for small τ shed light on the silent periods of the market, i.e., when there is no trading activity that changes the mid-quote. The effect of the sequences of zeros is reduced for large values of τ (see results for τ = 1800 (30min) in Fig 9c), for which both cases of randomization yield similar distributions with exponential tails and the distributions corresponding to the original data lose the heavy tail, presenting an approximately exponential decay yet distinct from the random cases. We also note that the probabilities of long up- and down-trends significantly differ from each other, with long up-trends occurring more frequently than long down-trends.

Fig 9. Trend length cumulative probability distributions for mid-quote time series of the currency pair USD/JPY from 2015 to 2018.


Distributions for up-trends (red) and down-trends (blue) obtained from the segmentation of the mid-quote data, for up-trends (orange) and down-trends (green) obtained from the segmentation of the randomized mid-quote data with fixed zeros, and for up-trends (magenta) and down-trends (cyan) obtained from the segmentation of the totally randomized mid-quote data, using patience levels: (a) τ = 60 (1min); (b) τ = 600 (10min); and (c) τ = 1800 (30min). Insets show log-log plots.

Fig 10 shows the distributions of absolute trend amplitudes |a|, which present no major qualitative differences for different values of τ. The distributions for both randomization types decay exponentially, confirming that the sequences of zero increments are less important for the amplitudes. The distributions corresponding to the original market data decay more slowly than the exponential ones of the random cases, with tails approximated by power laws, in accordance with the results using the epsilon-drawdown method in financial markets [24]. The asymmetry between up- and down-trends is more explicit for the amplitudes: large down-trends (movements of depreciation of the U.S. dollar against the Japanese yen) have a higher probability than large trends in the opposite direction and can reach more extreme values of amplitude, e.g., ∼6 JPY per USD in the τ = 1800 (30min) case (Fig 10c). Such behavior is explained by the fact that the Japanese yen is seen as a safe-haven currency, a safe asset which protects investors during periods of uncertainty [32].

Fig 10. Absolute trend amplitude |a| cumulative probability distributions for mid-quote time series of the currency pair USD/JPY from 2015 to 2018.


Distributions for up-trends (red) and down-trends (blue) obtained from the segmentation of the mid-quote data, for up-trends (orange) and down-trends (green) obtained from the segmentation of the randomized mid-quote data with fixed zeros, and for up-trends (magenta) and down-trends (cyan) obtained from the segmentation of the totally randomized mid-quote data, using patience levels: (a) τ = 60 (1min); (b) τ = 600 (10min); and (c) τ = 1800 (30min). Insets show log-log plots.

Trend shape clustering

The study of the probability distributions above is important for the understanding of the market dynamics, but quantities such as length and amplitude summarize the whole trend in a single number and ignore its internal structure; we cannot know, for example, whether a down-trend falls uniformly or accelerates. Aiming at a more detailed picture of USD/JPY market trends, we proceed to the investigation of trend shape, i.e., the relative position of all points (or a sample of points) of the trend.

Here we group similar trend shapes using cluster analysis so that we are able to describe the different types that occur in the USD/JPY mid-quote time series; we are particularly interested in finding trend shapes that are rare in the randomized data and possibly related to exceptional events. For such a task we need to choose a measure of distance between trends that reflects their shapes and a clustering method. For the distance between trends, we normalize the trends by setting unit length and unit amplitude (that is, we rescale the original trend horizontally by its length and vertically by its absolute amplitude) and sample a fixed number of points from the normalized trend at fixed positions (we take 100 equidistant points); we define the distance between trends as the Euclidean distance between the vectors formed by the sampled points from the corresponding normalized trends. For example, if we have two perfectly linear trends, the distance between them is zero independently of their lengths or amplitudes, confirming that they have exactly the same shape; on the other hand, we can have trends with the same length and amplitude but with distance greater than zero because they have distinct shapes. For the clustering method, we select agglomerative hierarchical clustering with the complete-linkage criterion: starting from clusters formed by individual trends, at each step we merge the two clusters with the shortest distance, where the distance between clusters X and Y is defined as the maximum distance between a trend in X and a trend in Y [33–35].
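Using NumPy/SciPy, the normalization, shape distance, and complete-linkage clustering can be sketched as below; the synthetic trends are purely illustrative (two linear and two "late-jump" shapes), not market data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def normalize_shape(trend, n_points=100):
    """Rescale a trend to unit length and unit amplitude and sample
    n_points equidistant points from the normalized curve."""
    y = np.asarray(trend, dtype=float)
    y = (y - y[0]) / abs(y[-1] - y[0])        # unit |amplitude|, start at 0
    t = np.linspace(0.0, 1.0, n_points)       # unit length
    return np.interp(t, np.linspace(0.0, 1.0, len(y)), y)

# Two linear trends (identical shape despite different length/amplitude)
# and two trends whose movement is concentrated at the very end.
trends = [[0, 1, 2, 3], [10, 12, 14],
          [0, 0.1, 0.2, 4], [0, 0.05, 0.1, 0.2, 5]]
shapes = np.array([normalize_shape(t) for t in trends])

# Pairwise Euclidean distances between shape vectors, then agglomerative
# hierarchical clustering with the complete-linkage criterion.
Z = linkage(pdist(shapes, metric='euclidean'), method='complete')
labels = fcluster(Z, t=2, criterion='maxclust')
```

Cutting the dendrogram into two clusters groups the two linear trends together and the two late-jump trends together, regardless of their original lengths and amplitudes.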

We apply the described method to the trends obtained by the segmentation of the mid-quote time series of the currency pair USD/JPY from 2015 to 2018 using τ = 1800 (30min), that is, focusing on macrotrends. We work with the subset of trends with absolute amplitude |a| > 0.5, filtering out small trends. In Fig 11 we present the dendrogram generated by the clustering process, which shows the clusters of similar trend shapes and their relations. The first cluster on the left is the one containing all trends, whose children are the cluster of all up-trends and the cluster of all down-trends, each with their own children clusters, down to the last clusters on the far right corresponding to the individual trends. In the graphs, we represent the content of each cluster by plotting all its normalized trends in gray, their average in black and their standard deviation in pink. The clusters represented by red symbols are the ones whose trends have shapes that deviate from the randomized data case, as explained next.

Fig 11. Dendrogram indicating the similarities between shapes of trends obtained by the segmentation of the mid-quote time series of the currency pair USD/JPY from 2015 to 2018 using patience level τ = 1800 (30min).


Only trends with absolute amplitude |a|>0.5 are considered. Each symbol represents a cluster of shapes and graphs show the normalized trends (gray lines), the average (black symbols) and the standard deviation (pink shade). Red symbols in the dendrogram indicate the clusters that deviate from the random case.

After grouping similar trend shapes, we look for the clusters deviating from the random case, i.e., the clusters containing trend shapes of rare occurrence in the randomized data. First, we take the randomized mid-quote data with fixed zeros and extract the trends using the segmentation with the same patience level τ = 1800 (30min) and condition |a| > 0.5. Next, for each cluster of the original data and each trend from the randomized data, we compute the distance between the cluster and the trend from the randomized data (using the definition of distance between clusters) and count the number of such trends whose distance is shorter than the maximum distance between trends within the cluster. Finally, having for each cluster a number of trends from the original data and a number of trends with similar shapes from the randomized data, we apply Fisher's exact test to check whether the actual proportion of trends from the original and from the randomized data in a cluster is incompatible with the proportion of the total trends of each category under the null hypothesis of randomly selecting trends to compose the cluster. The probability of grouping n_data from a total of N_data trends and n_rand from a total of N_rand trends, assuming that all trends have the same probability of being chosen, is given by the hypergeometric distribution [36]:

P(n_{\text{data}}, n_{\text{rand}}, N_{\text{data}}, N_{\text{rand}}) = \frac{ \binom{N_{\text{data}}}{n_{\text{data}}} \binom{N_{\text{rand}}}{n_{\text{rand}}} }{ \binom{N_{\text{data}} + N_{\text{rand}}}{n_{\text{data}} + n_{\text{rand}}} }. (11)

The total numbers of trends from the original data, N_data, and from the randomized data, N_rand, are fixed by the results of the segmentation: N_data = 1055 trends and N_rand = 1376 trends. The number of trends to be selected under the null hypothesis to compose each cluster is also fixed and equal to n_data + n_rand. We then use as p-value the probability that the number n′_data of trends selected from the N_data trends under the null hypothesis is greater than or equal to the observed n_data:

p\text{-value} = \sum_{n'_{\text{data}} \geq n_{\text{data}}} P(n'_{\text{data}},\ n_{\text{data}} + n_{\text{rand}} - n'_{\text{data}},\ N_{\text{data}},\ N_{\text{rand}}). (12)

We apply Fisher's exact test only to clusters where n_data > n_rand; the others are regarded as non-deviant. Table 1 lists the clusters for which the p-value is below 10^{−5}, interpreted as the clusters that deviate from the random case. The same clusters are shown in red in the dendrogram of Fig 11 and detailed in Fig 12. We stress that the deviations from the random case that we discovered are related solely to the trend shapes, disregarding trend length or amplitude.
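Eq (12) is a one-sided hypergeometric tail, which can be evaluated directly with `scipy.stats.hypergeom`; the sketch below reproduces the cluster D entry of Table 1 as a check (`cluster_pvalue` is our own illustrative name).

```python
from scipy.stats import hypergeom

def cluster_pvalue(n_data, n_rand, N_data, N_rand):
    """One-sided Fisher / hypergeometric p-value of Eq (12):
    P(X >= n_data) when drawing n_data + n_rand trends without replacement
    from a pool of N_data 'original' and N_rand 'randomized' trends."""
    population = N_data + N_rand
    draws = n_data + n_rand          # cluster size, fixed under the null
    # sf(k) = P(X > k), so sf(n_data - 1) = P(X >= n_data)
    return hypergeom.sf(n_data - 1, population, N_data, draws)

# Cluster D in Table 1: 28 trends from the data, none from the randomized set
p_D = cluster_pvalue(28, 0, 1055, 1376)   # ~5.75e-11, matching Table 1
```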

Table 1. Clusters of trend shapes that deviate from the random case.

Cluster Trend type ndata nrand p-value
A Down 174 82 4.742×10−17
B Down 146 87 3.986×10−10
C Down 134 94 6.940×10−7
D Down 28 0 5.753×10−11
E Down 25 0 7.345×10−10
F Down 15 0 3.447×10−6
G Up 70 30 3.749×10−8

Fig 12. Portion of dendrogram detailing the clusters of down-trend shapes that deviate from the random case.


Graphs show the normalized trends (gray lines), the average (black symbols) and the standard deviation (pink shade). Cluster labels correspond to the ones in Table 1.

For a more meticulous analysis, we turn our attention to cluster D, the largest one with no trend from the random case, i.e., no trend from the shuffled time series data has a shape similar to the original USD/JPY market data trends in the cluster. The average trend shape of cluster D is characterized by a sharp fall at the end of the trend, with the last ∼10% of the trend length accounting for ∼80% of its amplitude. Fig 13 depicts all 28 down-trends in cluster D with their original lengths and amplitudes and Table 2 details them (labels in the first column correspond to the ones in Fig 13): date of minimum (i.e., end of the trend), time of minimum, trend length, trend amplitude and associated event.

Fig 13. All 28 down-trends of the USD/JPY market data from 2015 to 2018 in cluster D.


Shapes of trends in this cluster are marked by a sharp fall at the end of the trend, having ∼80% of their amplitude in the last ∼10% of their length (trends are limited by the gray lines). See Table 2 for trend details.

Table 2. Details of all 28 down-trends of the USD/JPY market data from 2015 to 2018 in cluster D.

Trend Date of minimum Time of minimum Length Amplitude Associated event
(a) 2015-03-12 12:30:39 8652 -0.5050 -
(b) 2015-04-13 12:42:09 4747 -1.0300 -
(c) 2015-08-24 11:44:51 3453 -0.6650 China’s Black Monday
(d) 2015-08-24 13:11:59 2402 -3.9075 China’s Black Monday
(e) 2015-09-01 20:42:16 2636 -0.5625 -
(f) 2015-10-07 03:00:06 4811 -0.5725 BOJ Announcement
(g) 2015-10-28 18:00:04 3494 -0.5400 Fed Announcement
(h) 2015-10-30 03:22:27 4560 -0.6775 BOJ Announcement
(i) 2016-01-06 01:22:23 7708 -0.8025 -
(j) 2016-02-29 22:30:58 1598 -0.5525 -
(k) 2016-04-28 03:03:49 1559 -3.0100 BOJ Announcement
(l) 2016-05-26 00:16:04 3258 -0.6800 -
(m) 2016-06-02 01:45:51 2188 -0.5175 -
(n) 2016-06-16 02:45:48 4628 -1.2175 BOJ Announcement
(o) 2016-06-23 06:10:37 1525 -0.6850 Brexit Referendum
(p) 2016-06-23 23:17:59 5172 -3.5975 Brexit Referendum
(q) 2016-06-24 02:43:48 4259 -5.9425 Brexit Referendum
(r) 2016-07-05 23:22:37 4481 -0.6900 -
(s) 2016-07-28 22:34:28 3468 -1.9025 BOJ Announcement(*)
(t) 2016-07-28 23:25:14 1281 -0.6825 BOJ Announcement(*)
(u) 2016-07-29 02:03:45 1264 -0.6375 BOJ Announcement(*)
(v) 2016-07-29 03:16:37 657 -1.4025 BOJ Announcement
(w) 2016-09-06 23:35:51 3888 -0.8725 -
(x) 2016-09-21 02:55:12 4343 -0.5250 BOJ Announcement
(y) 2016-11-09 00:14:21 4029 -0.6975 U.S. Election
(z) 2017-03-15 18:07:03 4631 -0.9700 Fed Announcement
(aa) 2017-09-14 22:02:31 4218 -0.7650 -
(ab) 2018-02-05 20:10:18 4211 -0.7775 -

Trend labels in the first column correspond to the ones in Fig 13.

(*) Those trends occurred hours before the BOJ Announcement associated with trend (v), but they are related to this event (see text).

By searching specialized media for the dates and times of the trends, it was possible to identify associated events for 17 of the 28 trends. These include three trends ((o), (p) and (q)) connected with the already mentioned Brexit Referendum in 2016 [37–39], most notably trend (q), with an extreme amplitude of ∼6 JPY per USD, occurring as the victory of the Leave side was being consolidated (see the down-trend distribution for mid-quote data in Fig 10c). Trends (c) and (d) correspond to the so-called China’s Black Monday of 2015 August 24, when the Shanghai main share index fell 8.49%, affecting other financial markets [40–42]. Trend (y) is linked to the 2016 United States elections won by Donald Trump [43, 44]. The remaining trends are related to monetary policy announcements from the central banking system of the United States, the Federal Reserve (Fed), in the case of trends (g) [45] and (z) [46, 47], and from the Bank of Japan (BOJ), in the case of trends (f) [48], (h) [49, 50], (k) [51–53], (n) [54–56], (v) [57–59] and (x) [60–62]. In particular, the BOJ announcement of 2016 July 29 associated with trend (v) defined monetary easing actions to stimulate investments (partially in response to the Brexit Referendum result); these measures disappointed investors, who had expected more aggressive ones, and the strong speculation preceding the announcement itself is probably responsible for trends (s), (t) and (u) [63, 64]. The trends in cluster D with associated events are thus either reactions to exceptional events, causing the yen appreciation that supports its status as a safe-haven currency, or reactions of the market to central bank announcements. Note, however, that no associated events were found for the remaining 11 trends in cluster D, and there are probably other major events associated with different trend shapes, reminding us that this is still an incipient study and that the relationship between trend shapes and market events needs to be further investigated.

Final remarks

The epsilon-tau procedure proposed in this work extends previous methods to determine up- and down-trends in time series, particularly the epsilon-drawdown method; besides considering a tolerance level to decide the end of a trend, it introduces a patience level, a kind of tolerance limit in the time axis that controls the time scale of the trends, highlighting microtrends when its value is small and macrotrends when it is large.
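A minimal sketch of the idea, under one possible reading of the stopping rules (function and variable names are illustrative, and this is not the paper's reference implementation): a down-trend tracked from a starting point ends when the series rebounds more than ε above its running minimum, or when more than τ time steps pass without a new minimum.

```python
def epsilon_tau_downtrend(x, eps, tau, start=0):
    """Delimit a down-trend of x beginning at index `start`.
    The trend ends when x rebounds more than `eps` above its running
    minimum (tolerance) or when more than `tau` steps pass without a
    new minimum being reached (patience)."""
    running_min, t_min = x[start], start
    for t in range(start + 1, len(x)):
        if x[t] < running_min:
            running_min, t_min = x[t], t
        elif x[t] - running_min > eps or t - t_min > tau:
            break
    # the trend is delimited by `start` and the index of its minimum
    return t_min, t_min - start, x[t_min] - x[start]  # end, length, amplitude
```

With a very large τ this reduces to a pure ε-drawdown, while a small τ cuts the trend as soon as the series stalls, isolating microtrends.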

We first studied the epsilon-tau procedure applied to discrete random walks. We derived exact expressions for the marginal probability distributions of trend lengths and trend amplitudes, which, together with numerical results, confirmed the expected exponential decay when increments are independent. We explained how to use the epsilon-tau procedure to segment a time series into alternating up- and down-trends by successively applying the method, and discussed the dependence of the segmentation result on the choice of the patience level.
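The alternating segmentation can be sketched by repeatedly applying a single-trend rule and flipping the sign of the series between calls, since an up-trend on x is a down-trend on -x. This is again a hypothetical reading with illustrative names, not the paper's implementation:

```python
def trend_end(x, eps, tau, start=0):
    """Index of the minimum of a down-trend of x starting at `start`
    (tolerance eps, patience tau)."""
    best, t_best = x[start], start
    for t in range(start + 1, len(x)):
        if x[t] < best:
            best, t_best = x[t], t
        elif x[t] - best > eps or t - t_best > tau:
            break
    return t_best

def segment(x, eps, tau):
    """Alternate down- and up-trends; returns the trend boundaries."""
    bounds, t, sign = [0], 0, 1   # sign=+1: seek down-trend, -1: up-trend
    while t < len(x) - 1:
        t_next = trend_end([sign * v for v in x], eps, tau, start=t)
        if t_next == t:           # no movement in this direction: flip
            sign = -sign
            t_next = trend_end([sign * v for v in x], eps, tau, start=t)
            if t_next == t:       # flat within tolerance both ways: stop
                break
        bounds.append(t_next)
        t, sign = t_next, -sign   # next trend has the opposite direction
    return bounds

# Boundaries of a zigzag series fall exactly on its turning points
print(segment([5, 2, 6, 1, 7], eps=0.5, tau=10))   # -> [0, 1, 2, 3, 4]
```

Trend lengths and amplitudes then follow from consecutive boundary pairs, which is how the empirical distributions for random walks can be collected numerically.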

We then used the time series segmentation to analyze financial data represented by the USD/JPY mid-quote time series. The probability distributions of trend lengths and trend amplitudes for the market data were compared with those for randomized data. Specifically for amplitudes, the tails of the distributions for the market data are heavier than those for the randomized data. We also observed an asymmetry between up- and down-trends: down-trends with large amplitude, corresponding to an appreciation of the JPY, happen more often than large up-trends and can reach more extreme values. This asymmetry is explained by the Japanese yen's status as a safe-haven currency.

Finally, we carried out a more detailed analysis of the internal structure of the market macrotrends using the concept of trend shape. We grouped trends with similar shapes through complete-linkage clustering and used Fisher's exact test to identify clusters containing shapes that rarely occur in the random case. We found a particular cluster whose average trend shape is characterized by a sharp fall at the end of the trend, with no similar shape in the randomized data. For 17 of its 28 down-trends, we could associate the sharp mid-quote drops with exceptional events in the studied period (China's Black Monday in 2015, the Brexit Referendum in 2016 and the 2016 U.S. elections) and with announcements from the Federal Reserve and the Bank of Japan. This type of analysis shows the potential of the epsilon-tau procedure for historical analysis of market trends, e.g., determining in which situations or by what kinds of events trends with large amplitudes or uncommon shapes are produced. Real-time market applications and uses in other fields remain for future work.
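The shape comparison underlying the clustering can be sketched as follows: each trend is rescaled to the unit square, resampled at a fixed number of points, and compared to others by Euclidean distance. This is a sketch under assumed conventions; the number of sample points is illustrative, and the complete-linkage clustering and Fisher's exact test steps are omitted.

```python
import math

def normalize_shape(trend, n_samples=10):
    """Rescale a trend's time axis and values to [0, 1] and sample
    n_samples equally spaced points, so trends of different lengths
    and amplitudes become directly comparable shapes."""
    lo, hi = min(trend), max(trend)
    span = (hi - lo) or 1.0                  # avoid division by zero
    m = len(trend) - 1
    return [(trend[round(i * m / (n_samples - 1))] - lo) / span
            for i in range(n_samples)]

def shape_distance(a, b, n_samples=10):
    """Euclidean distance between two normalized trend shapes."""
    u, v = normalize_shape(a, n_samples), normalize_shape(b, n_samples)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(u, v)))
```

Two trends that differ only in length and amplitude, e.g. [3, 2, 1] and [30, 20, 10], then have distance zero, while trends of opposite direction are far apart; a distance matrix built this way can be fed to any standard complete-linkage clustering routine.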

Supporting information

S1 Appendix. Trend length and trend amplitude marginal probability distributions from the epsilon-tau procedure for random walks.

(PDF)

Data Availability

The raw data used in this study were purchased from the EBS Service Company Limited, with no special access privileges. Due to the contract between EBS and the authors, the authors are not allowed to distribute the raw data. Researchers interested in analyzing similar data sets, following the same procedure as the authors, are advised to contact the EBS Service Company Limited about the availability and purchase of the data (see https://www.cmegroup.com/tools-information/contacts-list/ebs-support.html).

Funding Statement

This study was supported by the Joint Collaborative Research Laboratory for MUFG AI Financial Market Analysis. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Author HT is employed by Sony Computer Science Laboratories, Inc, which provided support in the form of salaries for author HT, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of manuscript. The specific roles of this author are articulated in the ‘author contributions’ section.

References

Decision Letter 0

J E Trinidad Segovia

29 Jul 2020

PONE-D-20-15593

Segmentation of time series in up- and down-trends using the epsilon-tau procedure

PLOS ONE

Dear Dr. Yamashita Rios de Sousa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, I feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, I invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Reviewers find this paper quite interesting and they suggest its publication after minor revision. However, one of them has major concerns that I share. I agree that the state of the art should consider some recently published papers. Apart from those suggested by the reviewers, I would also suggest revising https://doi.org/10.1371/journal.pone.0188814

My major concern (see comments of reviewers 1 and 2) is the limitation of this study to the case of the Brexit Referendum and the USDJPY currency pair. I would like to be sure that this is not an important limitation to the application of the results obtained in this paper.

However, I am sure that the authors will be able to answer all questions properly, so my decision is minor revision.

Please submit your revised manuscript by Aug 17 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

J E. Trinidad Segovia

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please clarify in your Data availability statement whether others can obtain the same dataset.

We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Please explain the rationale for the development of your tool in light of recent research in this area, clearly indicating which problem with existing tools you are addressing.

Please clearly report at the beginning of your methods or results section which were the key performance measures used to establish the validity and utility of your method. Please also report clearly which statistical analysis was used to establish robustness of performance measures.

Please note that PLOS ONE requires that experiments, statistics, and other analyses must be performed to a high technical standard and described in sufficient detail to allow for reproducibility of the study (http://journals.plos.org/plosone/s/criteria-for-publication#loc-3). To demonstrate the performance of the method, we would expect comparisons to be drawn between existing state-of-the-art methods.

4. Thank you for stating the following in the Financial Disclosure section:

"This study is supported by the Joint Collaborative Research Laboratory for MUFG AI Financial Market Analysis. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

We note that one or more of the authors are employed by a commercial company: Sony Computer Science Laboratories.

4.1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

4.2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. 

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests) . If this adherence statement is not accurate and  there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Overall, an interesting and well written paper.

It is very very rare to see a paper that does not compare to some strawman. You need to compare to something, or make a strong case as to why you do not need to.

“we define the distance between trends as the Euclidean distance between the corresponding normalized trends using the sampled points” Please expand, this is not clear to me.

Would two time series with identical slope have a zero distance?

Could you get similar results in a much simpler way? For example, if you did PLA segmentation (your ref [2]), would that give you similar results to Fig 4, etc.?

“For 17 of its 28 down-trends, we could associated the sharp mid-quote drops with exceptional events in the studied period – China’s Black Monday in 2015, Brexit Referendum in 2016 and the 2016 U.S. elections – and with announcements from the Federal Reserve and Bank of Japan” This strikes me as a little post-hoc

“The raw data used in this study was purchased from EBS Service Company Limited, with no special access privileges. Due to the contract between EBS and us, the authors are not allowed to distribute the raw data. Those researchers interested in analyzing similar data sets are recommended to contact with EBS Service Company Limited about the availability and purchase of the data (see https://www.cmegroup.com/tools-information/contacts-list/ebs-support.html).”

I will let this slide, but normally I do not accept papers until the code and data is all freely available.

Reviewer #2: In this paper the authors described a time series segmentation method called the epsilon-tau method. According to the authors, this method was first suggested by Johansen and Sornette in 2010 in passing but never fully developed. In this method, a time series segment no longer than a patience tau is called an uptrend if the ending value exceeds the starting value by greater than a tolerance epsilon. By varying epsilon and tau, and applying the method to the one-second time series data of the US dollar-Japanese yen exchange rate between 2015 and 2018, the authors examined a group of 28 highly significant downtrends and found that 17 of them can be attributed to known events.

This is an impressive piece of work. Overall, it is well written, and I recommend for it to be accepted for publication after the authors address my following minor concerns:

(1) I feel that the description of the epsilon-tau procedure needs to be more detailed, because what is written is somewhat confusing. For example, if a segment of the time series has x(t_end) - x(t_start) > epsilon, but at time t_mid, x(t_mid) - x(t_start) < epsilon, do we still consider this one segment (from t_start to t_end), or more than one segment (t_start to t_mid, and t_mid to t_end)?

(2) After understanding that the segmentation method is based on identifying robust local trends in the time series, I feel that it is similar in spirit to the DNA walk method by Peng et al. (Peng, C.K., Buldyrev, S.V., Goldberger, A.L., Havlin, S., Sciortino, F., Simons, M. and Stanley, H.E., 1992. Fractal landscape analysis of DNA walks. Physica A: Statistical Mechanics and its Applications, 191(1-4), pp.25-29), and should therefore cite this and similar papers.

Reviewer #3: The authors of the paper ‘Segmentation of time series in up- and down-trends using the epsilon-tau procedure’ present a research work on time series in financial markets (in particular the currency exchange market), where time segmentation and trend analysis are of interest with regard to historical market data research. The document is correctly written and the mathematical model is well presented; I do not find major issues. Thus, the paper is ready for publication after a revision, which involves:

1.- State of the art: In the introduction, the authors promptly jump into the topic of time series segmentation in the field of finance, namely financial markets. Here it would be good to introduce past and present research from the field. A good summary on market dynamics can be found in:

Joseph L. McCauley, Dynamics of Markets: The New Financial Economics. Cambridge University Press (2009)

Also, most recent research such as:

J. Clara-Rahola, A. M. Puertas, M. A. Sánchez-Granero, J. E. Trinidad-Segovia, and F. J. de las Nieves, Diffusive and Arrestedlike Dynamics in Currency Exchange Markets. Phys. Rev. Lett. 118, 068301 (2017).

Jan Jurczyk, Thorsten Rehberg, Alexander Eckrot, Ingo Morgenstern, Measuring Critical Transitions in Financial Markets. Scientific Reports 7, 11564 (2017).

should be considered, as these papers depict the time evolution of financial markets from a scope where dynamic periods are identified in which markets, such as the currency exchange one, display physical phases such as arrested crystal or glass states, and clustered and random ones where prices diffuse. In these works, such states relate to phase transitions and risk management in a way similar to that in which the authors of the paper propose a time segmentation of market signals, in particular in the currency exchange market.

2.- The authors consider events for time series analysis. Here they focus on the Brexit Referendum of June 2016. This is a good period for analysis, as short- and middle-term fluctuations were significant in many markets due to the events in the UK. However, I wonder:

a. Why do the authors focus only on the USDJPY currency pair? As they mention in the paper, the GBPUSD is the pair that displayed the largest fluctuations, decreasing to historical magnitudes. Also, in this case, it would be worthwhile to consider the dynamics of other significant pairs such as the EURGBP or the GBPJPY. I don’t understand the focus on the USDJPY only and would appreciate it if the authors could explain this point.

b. The authors select the Brexit Referendum and the USDJPY in their study. However, this is a single event. I would appreciate it if trend analysis and segmentation were discussed in terms of other events that influenced financial markets, and in particular the currency exchange market. For example, Trump’s victory in 2016, 9/11 and, maybe most important to this study, the EURUSD correction in magnitude indirectly forced by the ECB with the Quantitative Easing purchase program. Here, the ECB directed a decrease of the EURUSD pair, as the EUR was too high vs. the USD in the second half of the Great Recession, which helped European exports and liquidity in the system. I encourage the authors to check this data in the time period between late 2014 and late 2016, as a long-term negative trend is clearly observed. I think that adding such contents to the analysis, even if later the paper focuses on the USDJPY, would help broaden the scope in which the proposed methodology can be considered and applied.

3.- The authors choose key magnitudes such as a patience level τ = 7200 seconds in Fig 7(a), or τ = 1800 seconds in Fig 11. I might have missed it, but I do not fully understand the selection of such magnitudes. What is particular about them?

4.- Probabilities and cumulative probabilities as a function of trend amplitude or trend length clearly display an exponential character (as plotted in log-lin graphs), or similar. Here it would be helpful to quantify such data. It looks like a stretched or compressed exponential could be a good fit (~exp(-(t/tc)^p), with tc a critical variable), where parameters such as p are indicative of the type of fluctuations found in the currency and the time interval (for example, random or correlated ones).

5.- Finally, it would be interesting to know about the scaling of the model. As exposed before, the authors choose a particular currency and a particular time frame. However, it would be interesting to know whether the model scales if a broader or smaller sample is chosen (thus, another event). This is significant, as the literature states that events that induce strong volatility in different markets (such as Brexit, 9/11, etc.) have a limited time extension after which markets stabilize. Thus, such fluctuations are high but improbable events that disappear in the long term due to market self-correction. Even more, if scaling occurs, I would appreciate some comments related to Efficient Markets and the Efficient Market Hypothesis.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Siew Ann CHEONG

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

J E Trinidad Segovia

28 Aug 2020

PONE-D-20-15593R1

Segmentation of time series in up- and down-trends using the epsilon-tau procedure

PLOS ONE

Dear Dr. Yamashita Rios de Sousa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, I feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, I invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Reviewers consider that most of the major concerns have been attended in this current version, however there are some minor issues that need to be addressed before this manuscript could be finally accepted.

Please submit your revised manuscript by Oct 12 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

J E. Trinidad Segovia

Academic Editor

PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I am happy with the changes

Reviewer #2: I am satisfied with the revisions made by the authors in response to mine and other reviewers' comments.

Reviewer #3: The authors either applied appropriate modifications to the paper or provided valid explanations in reply to questions and concerns. However, there is one issue that should be considered. As stated in the previous review, the manuscript is focused on the USD/JPY currency pair, despite the authors' claim that the epsilon-tau segmentation can be employed for other currencies (or markets). I wonder about the output in cumulative probability (depending on either trend length or amplitude) in the case of markets where volatility can be much more pronounced, such as the EUR/USD, or, on the contrary, low-volatility markets such as the EUR/CHF.

In the case of the EUR/USD, over the last one to two months (June-July 2020) the pair has displayed important trends at different scales as a result of the health and economic impact of the COVID-19 disease. For example, the last Fed announcement was responsible for a highly volatile situation, with trends displaying significant amplitudes at high frequencies. However, at longer time scales the pair displayed remarkable up-trends, at least until about two to three weeks ago. I wonder about the probability and cumulative-probability output in the case of the EUR/USD, mainly due to its short-term volatility. For example, at sufficiently large trend lengths, could the cumulative probabilities (or probabilities) resolve echoes due to high-frequency activity? Note that trends are characterized by channels, in practice defined by parallel or divergent Bollinger Bands. I likewise wonder about the case of low-volatility markets.

Therefore, in the absence of any analysis other than the one performed on the USD/JPY, it would clarify the contents of the manuscript if the title indicated that the study is performed on the USD/JPY currency pair. Beyond this change, I find that the last version submitted is of interest to PLOS ONE. If the authors mention the USD/JPY pair in the title as the focus of the study, the paper is ready for acceptance.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

J E Trinidad Segovia

8 Sep 2020

Segmentation of time series in up- and down-trends using the epsilon-tau procedure with application to USD/JPY foreign exchange market data

PONE-D-20-15593R2

Dear Dr. Yamashita Rios de Sousa,

I am pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

J E. Trinidad Segovia

Academic Editor

PLOS ONE

Acceptance letter

J E Trinidad Segovia

10 Sep 2020

PONE-D-20-15593R2

Segmentation of time series in up- and down-trends using the epsilon-tau procedure with application to USD/JPY foreign exchange market data

Dear Dr. Yamashita Rios de Sousa:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. J E. Trinidad Segovia

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Trend length and trend amplitude marginal probability distributions from the epsilon-tau procedure for random walks.

    (PDF)

    Attachment

    Submitted filename: Response_to_Reviewers.docx

    Attachment

    Submitted filename: Response_to_Reviewers.docx

    Data Availability Statement

The raw data used in this study were purchased from the EBS Service Company Limited, with no special access privileges. Under the contract between EBS and the authors, the authors are not allowed to distribute the raw data. Researchers interested in analyzing similar data sets are recommended to follow the same procedure as the authors and contact the EBS Service Company Limited about the availability and purchase of the data (see https://www.cmegroup.com/tools-information/contacts-list/ebs-support.html).

