Skip to main content
Entropy logoLink to Entropy
. 2019 Feb 26;21(3):219. doi: 10.3390/e21030219

Data Discovery and Anomaly Detection Using Atypicality for Real-Valued Data

Elyas Sabeti 1, Anders Høst-Madsen 2,3,*
PMCID: PMC7514700  PMID: 33266935

Abstract

The aim of using atypicality is to extract small, rare, unusual and interesting pieces out of big data. This complements statistics about typical data to give insight into data. In order to find such “interesting” parts of data, universal approaches are required, since it is not known in advance what we are looking for. We therefore base the atypicality criterion on codelength. In a prior paper we developed the methodology for discrete-valued data, and the current paper extends this to real-valued data. This is done by using minimum description length (MDL). We develop the information-theoretic methodology for a number of “universal” signal processing models, and finally apply them to recorded hydrophone data and heart rate variability (HRV) signal.

Keywords: atypicality, minimum description length, big data, codelength

1. Introduction

A central question in the era of “big data” is what to do with the enormous amount of information. One possibility is to characterize it through statistics, e.g., averages, or classify it using machine learning, in order to understand the general structure of the overall data. The perspective in this paper is the opposite, namely that most of the value in the information—in some applications—is in the parts that deviate from the average, that are unusual, atypical. Think of art: The valuable paintings or writings are those that deviate from the norms and brake the rules, that are atypical. Or groundbreaking scientific discoveries, which find new structure in data. Finding such unusual data is often done by painstaking human evaluation of data. The goal of our work is to find practical, automatic methods for localizing atypical parts of data.

When searching for atypical data, a key characteristic is that we do not know what we are looking for, we are looking for the “unknown unknowns”. We therefore need universal methods. In the paper [1] we developed a methodology, atypicality, that can be used to discover such data. The basic idea is that if some data can be encoded with a shorter codelength in itself, i.e., with a universal source coder, rather than using the optimum coder for typical data, then it is atypical. The purpose of the current paper is to generalize this to real-valued data. Lossless source coding does not generalize directly to real-valued data. Instead we can use minimum description length (MDL). In the current paper we develop an approach to atypicality based on MDL, and show its usefulness on a real dataset.

In this section before an extensive literature review of detection problems, we first describe the concepts of atypicality and how this framework can be used for data discovery. This arrangement is essential in order to compare the atypicality with the state of the art methods.

1.1. Anomaly Detection and Data Discovery Based on Description Length

A common way to define an outlier or anomaly in data is a sample that does not fit the statistics of typical data [2], e.g., if typical data is described by a pdf fT(x), and if fT(x)<τ for some threshold τ then x is an outlier. In this paper we approach the problem of anomaly detection, and in particular data discovery, from a different point of view. We consider sequences of data xl, and say that a sequence of data xl is atypical if there is some alternative model that ’fits’ the data better than the typical model. This point of view has been considered before in anomaly detection, e.g., [3]. Given a typical probability distribution, data that is unlikely could simply be, well, outliers, e.g., faulty measurements, and not of much interest in itself. Requiring data to fit an alternative model gives an indication that there is some interesting, new relationship in the data. We therefore think of this approach going beyond simply finding anomalous data, to finding interesting data, i.e., data discovery.

In our paper [1] we used universal source coding for anomaly detection; in [3,4,5] the authors used a type of universal empirical histogram. This kind of methodology is feasible when data is discrete. However, real-valued data is too rich for such universal descriptions. Models for real-valued data is almost always given as parametric models, either directly or indirectly. Our approach to atypicality for real-valued data, in the absence of universal coders, is to consider multiple ‘universal’ real-valued models given by parametric models. For example, it is well-known [6] that by the Wold decomposition (almost) all Gaussian stationary processes can be described in terms of a linear prediction model. Wavelets are also good for compressing (lossily) many signals and images. One can therefore expect these will also work well as alternative models. Most modeling and compression are based on a second order analysis, and therefore fit with Gaussian models. One could be interested in also finding atypical data that does not fit a Gaussian model; however, apart from iid (independently, identically distributed) models (similar to [3]), this is difficult to do, so the richness of non-Gaussian models is limited. We will therefore focus on Gaussian models in this paper; notice, however, this is not a limitation of atypicality, we have considered non-Gaussian models in [7].

Consider an atypicality setup where the typical model is given by a probability density function (pdf) fT(xl) and the atypical model is given by f(x|θ) with θ unknown. Asking if the atypical model is better can be thought of simply as a generalized likelihood ratio test (GLRT) hypothesis test [8].

minθf(xl|θ)fT(xl)τ.

However, in atypicality we would like to test the sequence with respect to a large class of alternative hypotheses—even the class of linear prediction models is infinite. So, assume we have a finite or countable infinite set of model classes Mi with corresponding pdfs fi(x|θi). A test could then be

miniminθifi(xl|θi)fT(xl)τ. (1)

However, this is clearly not very useful. More and more complex model will fit data better and better [9], so that the false alarm probability will be very large—model complexity has to be taken into account. One way to do this through Bayesian statistics assigning prior probabilities to both models and parameters, ending up in the test

PAiP(Mi)fi(xl|θi)fi(θi)dθi(1PA)fT(xl)1 (2)

where PA is the probability of a sequence being atypical and P(Mi) the probability of an alternative model Mi. The issue is that using (2) requires choosing a lot of prior distributions and being able to calculate marginal distributions fi(xl|θi)fi(θi)dθi. As explained in for example (3.4–3.5 [9]), these are not easy problems to tackle. Priors are often dictated by the need for the integral to be calculable, rather than actual prior information, and it still leaves parameters unknown (‘hyperparameters’). In addition, choosing prior distributions is anathema to the central idea of looking for unknown data in big data. The whole point is that we know very little about the data we are looking for.

This is where we can use description length. Suppose at first that data is discrete-valued. To each sequence xl we assign a codeword c(xl) with length L(xl). The codewords have to be prefix free and the lengths therefore have to satisfy the Kraft inequality [10]: xl2L(xl)1. If we let p(xl)=2L(xl) this defines a (sub)probability on the data, which can be used in a hypothesis test. One can think of description length and coding as a method to find probabilities. There is a key advantage in using description length, as explained in the following. In decoding, a decoder reads a sequence of bits sequentially and turns this into a copy of the source sequence; the codes must be prefix-free. Key here is that in the current step the decoder can only use what is decoded in prior steps. Therefore, when the source sequence is encoded, the encoder cannot use future samples to encode the current sample. We call this ’the principle of sequentiality’. It is the Kraft inequality in reverse: In one direction, as above, we can use the Kraft inequality to verify that a set of codelengths gives valid codes. In the other direction, when codes are decodable (in the pre-fix free sense), they must satisfy the Kraft inequality, and the corresponding probabilities must therefore be valid. An example is Lempel-Ziv coding [10,11,12], which does not explicitly rely on probabilities. It gives valid codewords because the coding is decodable with a sequential decoder.

To generalize the coding approach to real-valued data, lossless coding is needed. One can notice that lossless coding of real-valued data is used in many applications, for example lossless audio coding [13]. However, direct encoding of the reals represented as binary numbers, such as done in lossless audio coding, makes the methods too dependent on data representation rather than the underlying data. Instead we will use a more abstract model of (finite-precision) reals. We will assume a fixed point representation with a (large) finite number, r, bits after the period, and an unlimited number of bits prior to the period as in [14]. Assume that the actual data is distributed according to a pdf f(x). Then the number of bits required to represent x is given by

L(x)=logxx+2rf(t)dtlog(f(x)2r)=log(f(x))+r. (3)

As we are only interested in comparing codelengths the dependency on r cancels out. Suppose we want to decide between two models f1(x) and f2(x) for data. Then we decide f1(x) if limrlogxx+2rf1(t)dt+logxx+2rf2(t)dt>0, which is logf1(x)>logf2(x). Thus, for the typical codelength we can simply use LT(x)=logfT(x). One can also argue for this codelength more fundamentally from finite blocklength rate-distortion in the limit of low distortion [15], which makes it more theoretically well-founded. Notice that this codelength is not scaling invariant:

y=ax+bLt(y)=logf(x)+log|a| (4)

which means care has to be taken when transforms of data are considered. To code the atypical distributions, as the decoder does not know the values of the parameters, both data and parameters in parametric models have to be encoded for a decoder to be able to decode; this was the starting point in the original paper on MDL [14]. One could also use a Bayesian distribution fi(xl|θi)fi(θi)dθi from (2), which does not solve the issues with using Bayes. Instead we can use the principle of sequentiality of coding as follows. We replace fi(xl|θi)fi(θi)dθi in (2) with a codelength based on Rissanen’s predictive MDL [16].

Li(xl)=n=0l1logfi(xn+1|θ^i(xn)) (5)

where θ^(xi) is the maximum likelihood estimate of the parameter. Since this is sequentially decodable, it gives a valid codelength, and hence probability, without any prior distribution on θ. It does not work for the first sample, as there is no estimate. Instead we encode x1 with a default distribution. In general application of MDL choice of the default distribution can be tricky, but for atypicality we have a good default distribution: The typical distribution, giving the codelength

Li(xl)=n=1l1logfi(xn+1|θ^i(xn))logfT(x1). (6)

Notice that default distribution is the same for all models Mi: we do not have to choose a prior for each model. There are no prior assumptions involved, since we use the typical distribution. We still need the probabilities P(Mi); here we can use Rissanen’s universal coder of the integers [14]. The codelength for an integer i is c+log*i where c is a normalization constant in the Kraft inequality [14] and log*l=logl+loglogl+ with the sum continuing as long as the log is defined. We order the models according to complexity and encode the ordinal of a model. The description length test for the sequence xl to be atypical then becomes

log2Li(xl)log*iclogPAlogfT(xl)log(1PA). (7)

The appeal of coding becomes even more clear when we search for atypical subsequences of long sequences. Using coding this can be done as follows. The coder uses a special header, a codeword not a prefix of any codeword used for the actual data, to denote the start of a subsequence—the decoder will now know it needs to use the atypical decoder. It also encodes the length of the atypical subsequence using Rissanen’s universal coder for the integers [14], adding c+log*l to the code length, so that the decoder knows when to switch back to the typical coder. The whole sequence is sequentially decodable, thus has a valid probability, and we know from [1] that this gives a valid criterion, at least for iid sequences, in the sense that not the whole sequence will be classified as atypical; the key is the insistence on decodability. It would be difficult to do this directly using Bayesian analysis, as we would have to develop probability distributions for the total sequence for every combination of atypical subsequences in the long sequence. To be precise, for every set of potential subsequences S={xs1e1,xs2e2,} we would have to calculate p(xl|S)p(S), and then choose the S giving the largest probability, i.e., MAP.

To understand how (7) avoids the problem overfitting of (1), we notice that asymptotically for large l by [16]

Li(xl)f(xl|θ^(xl))+k2logl (8)

where k is the number of parameters in θ; this is true for many MDL and Bayesian methods, including Rissanen’s original approach [14]. Because (8) penalizes models with many parameters, overfitting is avoided even if we consider an infinite collection of models. While (8) is often used for model selection, it is not accurate enough for our purposes, and we use (5) directly. However, (8) is useful for discussion and analysis.

The above approach can be seen as a generalization to real-valued data of the approach in [1]

Definition 1.

A sequence is atypical if it can be described (coded) with fewer bits in itself rather than using the (optimum) code for typical sequences.

There is a further difference from Bayes (2), which is more philosophical than computational and practical. When we describe the problem as a hypothesis test problem as in (2), we are asking which hypothesis is correct (which is also the basis of Bayesian model selection [9]). However, in stating the problem as a description length problem, we are just asking if we can find a shorter description length, not if a model is correct. By considering a very large class of alternative models (most pronounced when we use universal source coding), none might fit very well, none might be even close to the actual model, but we might find one that fits better than the typical model, and that is sufficient for a sequence to be atypical. We have no idea how atypical data might look like, so we cast a very wide net.

1.2. Alternative Approaches

Atypicality has many applications: Anomaly detection, outlier detection, data discovery, novelty detection, transient detection, search for ‘interesting’ data etc. What all of these applications have in common is that we seek data that is unusual in some way, and atypicality is a general method for finding such data. Each of these applications have specific alternative methods, and we will discuss atypicality compared to other approaches in some of these applications.

There is a very large existing literature on anomaly detection [17,18,19,20,21,22,23,24,25,26]; The paper [17] gives an overview until 2009. What is characteristic of all methods, as far as we know, is that they look for data that do not fit the characteristics of normal data, either statistically or according to some other measure. From [17]: “At an abstract level, an anomaly is defined as a pattern that does not conform to expected normal behavior.” Atypicality takes a different approach. Atypicality looks for data where an alternative model fits the the data better. Atypicality will still find the first type of anomalies according to [17], but it will also be able to find a wider, more subtle class of anomalies. As a simple example, suppose the normal data is iid Gaussian with zero mean and variance σ2. The anomalous data is also Gaussian with zero mean and variance σ2, but the noise is colored. This is in no way anomalous according to the definition in [17]. However, by coding data with a linear predictive coder (see Section 3.2 later) atypicality will detect the anomalous sequence. In [27] we in fact prove that atypicality is exactly optimum for discrete data in the class of finite state machines. While we do not have a similar theorem for real-valued data, this indicates the advantages of atypicality for anomaly detection.

Another advantage of atypicality is that it can straightforwardly be applied to data of unknown/variable length, as discussed in Section 1.1. All existing anomaly detection algorithms we know of use fixed windows, so they cannot make decisions between long, slightly unusual sequences, and short, very unusual sequences; atypicality can. On the other hand, atypicality cannot find single, anomalous samples—outliers: To be able to find a new model for anomalous data, it needs a collection of samples. For this kind of application, more traditional methods must be used.

A type of detection problem closely related to anomaly detection is transient detection [28,29,30,31,32,33,34]. In many signal processing applications, it is of interest to detect short-duration statistical changes in observed data. For a parametric class of probability distribution fx|θ:θΘ and for an unknown ns and nd the following two hypotheses are considered:

H0:x1lfx|θ0H1:x1ns1fx|θ0,xnsnd1fx|θ1,xndlfx|θ0.

If θ0 and θ1 are known, the Page test is optimal for this in the sense that by using a GLRT; given an average wait between false alarms, it minimizes the worst-case average delay to detection [31]. However in many applications, there is either no information about θ1 or it varies from one transient signal to another. In this case, it is shown that Variable Threshold Page (VTP) gives a reliable result [29,31]. There are also other approaches of transient detection based on Nuttall’s power-law detector that are often used in the literature [29,30]. Other methods are [32,33,34]. In general atypicality will outperform this methods since it not only allows a more comprehensive class of models, but also it can take advantage of various powerful signal processing methods such as filterbanks and linear prediction to find transient signals with various statistics.

Finally, we will mention change point detection and quickest change detection [35,36,37,38,39,40,41,42]. The goal here is to find a point in time where the distribution of data changes from one to another. The difference from atypicality is that in atypicality, subsequences have both a start and end point. In principle one could use atypicality for change point detection, but since it is not optimized for this application, the comparison is not that relevant, and atypicality might not perform well. We refer to [35,36] for how to use MDL for change point detection.

2. Minimum Description Length Methods

Above we have argued for using (5) as a codelength. The issue with this method is how to initialize the recursion. In (6) this is solved by using the typical distribution for the first sample, but in general, with more than one parameter, θ^i(xi) may not be defined until i becomes larger than 1. The further issue is that even when θ^(xi) is defined, the estimate might be poor for small i, and using this in (5) can give very long codelengths, see Figure 1 below.

Figure 1.

Figure 1

Redundancy comparison between ordinary predictive minimum description length (O.P. MDL) and our proposed sufficient statistic method for μ=0 and σ2=4.

Our solution to the first issue is to encode with increasingly complex models as i increases; we therefore only have to use the default distribution for the very first sample. Since we are not interested in finding a specific model, this is not problematic in atypicality. Our solution to the second issue is rather than using the ML estimate for encoding as though it is the actual parameter value, we use it as an uncertain estimate of θ. We then take this uncertainty into account in the codelength. This is similar to the idea of using confidence intervals in statistical estimates [43]. Below we introduce two methods using this general principle. This is different to the sequentially normalized maximum likelihood method [44], which modifies the encoder itself.

2.1. Sufficient Statistic Method (SSM)

As explained above, our approach to predictive MDL is to introduce uncertainty in the estimate of θ. Our first methodology is best explained through a simple example. Suppose our model is N(μ,σ2), with σ known. The average x¯n is the ML estimate of μ at time n. We know that

x¯n=μ+z,zN0,σ2n.

We can re-arrange this as

μ=x¯nz.

Thus, given x¯n, we can think of μ as random Nx¯n,σ2n. Now

xn+1=μ+zn+1Nx¯n,σ2+σ2n

which we can use as a coding distribution for xn+1. This compares to Nx¯n,σ2 that we would use in traditional predictive MDL. Thus, we have taken into account that the estimate of μ is uncertain for n small. The idea of thinking of the non-random parameter μ as random is very similar to the philosophical argument for confidence intervals [43].

In order to generalize this example to more complex models, we take the following approach. Suppose t(xn) is a k-dimensional sufficient statistic for the k-dimensional θΘ. Also suppose there exists some function s and a k-dimensional (vector) random variable Y independent of θ so that

t(xn)=s(Y,θ). (9)

We now assume that for every (t,Y) in their respective support, (9) has a solution for θΘ so that we can write

θ=r(Y,t(xn)). (10)

The parameter θ is now a random variable (assuming r is measurable, clearly) with a pdf fxn(θ) This then gives a distribution on xn+1, i.e.,

f(xn+1|xn)=f(xn+1|θ)fxn(θ)dθ. (11)

The method has the following property:

Theorem 1.

The distribution of xn+1 is invariant to arbitrary parameter transformations.

This is a simple observation from the fact that (11) is an expectation, and that when θ is transformed, the distribution according to (10) is also transformed with the same function.

One concern is the way the method is described. Perhaps we could use different functions s and r and get a different result? In the following we will prove that the distribution of θ is independent of which s and r are used.

It is well-known [6,10] that if the random variable X has CDF F, then U=F(X) has a uniform distribution (on [0,1]). Equivalently, X=F1(U) for some uniform random variable U. We need to generalize this to n dimensions. Recall that for a continuous random variable [6]

Fi|i1,,1(xi|xi1,x1)=xif(t|xi1,,x1)dt=1f(xi1,,x1)xif(t,xi1,,x1)dt

whenever f(xi1,,x1)0. As an example, let n=2. Then the map (X1,X2)(F1(X1),F2|1(X2,X1)) is a map from R2 onto [0,1]2, and (F1(X1),F2|1(X2,X1)) has uniform distribution on [0,1]2. Here F1(X1) is continuous in X1 and F2|1(X2,X1) is continuous in X2

We can write X1=F11(U1). For fixed x1 we can also write X2=F2|11(U2|x1) for those x1 where F2|1 is defined, and where the inverse function is only with respect to the parameter before |. Then

X1X2=F11(U1)F2|11(U2|F11(U1))Fˇ1(U1,U2).

This gives the correct joint distribution on (X1,X2): The marginal distribution on X1 is correct, and the conditional distribution of X2 given X1 is also correct, and this is sufficient. Clearly Fˇ1 is not defined for all U1,U2; the relationship should be understood as being valid for almost all (X1,X2) and (U1,U2). We can now continue like this for X3,X4,,Xn. We will state this result as a lemma

Lemma 2.

For any continuous random variable X there exists an n-dimensional uniform random variable U, so that X=Fˇ1(U).

Theorem 2.

Consider a model t=s1(Y1;θ), with θ=r1(Y1;t) and an alternative model t=s2(Y2;θ), with θ=r2(Y2;t). We make the following assumptions:

  • 1. 

    The support of t is independent of θ and its interior is connected.

  • 2. 

    The extended CDF Fˇi of Yi is continuous and differentiable.

  • 3. 

    The function Yisi(Yi;θ) is one-to-one, continuous, and differentiable for fixed θ.

Then the distributions of θ given by r1 and r2 are identical.

Proof. 

By Lemma 2 write Y1=F11(U1), Y2=F21(U2). Let u be the k-dimensional uniform pdf, i.e., u(x)=1 for x[0,1]k and 0 otherwise, and let Yi=si1(t;θ) denote the solution of t=si(Yi;θ) with respect to Yi, which is a well-defined due to Assumption 3. We can then write the distribution of t in two ways as follows ([6]), due to the differentiability assumptions

f(t;θ)=u(F1(s11(t;θ))F1(s11(t;θ)t=u(F2(s21(t;θ))F2(s21(t;θ)t.

Due to Assumption 1 we can then that conclude F1(s11(t;θ)t=F2(s21(t;θ)t, or

F1(s11(t;θ)=F2(s21(t;θ))+k(θ).

But both F1 and F2 have range [0,1]k, and it follows that k(θ)=0. Therefore

t=s1(F11(U);θ)=s2(F21(U);θ).

If we then solve either for θ as a function of U (for fixed t), we therefore get exactly the same result, and therefore the same distribution. □

The assumptions of Theorem 2 are very restrictive, but we believe they are far from necessary. In [45] we proved uniqueness in the one-dimensional case under much weaker assumptions (e.g., no differentiability assumptions), but that proof is not easy to generalize to higher dimensions.

Corollary 3.

Let t1(xn) and t2(xn) be equivalent sufficient statistic for θ, and assume the equivalence map is a diffeomorphism. Then the distribution on θ given by the sufficient statistic approach is the same for t1 and t2.

Proof. 

We have t1=s1(Y1,θ) and t2=s2(Y2,θ). By assumption, there exists a one-to-one map a so that t1=a(t2), thus t1=a(s2(Y2,θ)). Since the distribution of θ is independent of how the problem is stated, t1 and t2 gives the same distribution on θ. □

2.2. Normalized Likelihood Method (NLM)

The issue with the sufficient statistic method is that a sufficient statistic of the same dimension of the parameter vector can be impossible to find. We will therefore introduce a simpler method. Let the likelihood function of the model be f(xl|θ). For a fixed xl we can consider this as a ‘distribution’ on θ; the ML estimate is of course the most likely value of this distribution. To account for uncertainty in the estimate, we can instead try use the total f(xl|θ) to give a distribution on θ, and then use this for prediction. In general f(xl|θ) is not a probability distribution as it does not integrate to 1 in θ. We can therefore normalize it to get a probability distribution

fxl(θ)=f(xl|θ)C(xl);C(xl)=f(xl|θ)dθ (12)

if f(xl;θ)dθ is finite. For comparison, the Bayes posteriori distribution is

f(θ|xl)=f(xl|θ)f(θ)f(xl|θ)f(θ)dθ.

If the support Θ of θ has finite area, (12) is just the Bayes predictor with uniform prior. If the support Θ of θ does not have finite area, we can get (12) as a limiting case when we take the limit of uniform distributions on finite Θn that converge towards Θ. This is the same way the ML estimator can be seen as a MAP estimator with uniform prior [46]. One can reasonably argue that if we have no further information about θ, a uniform distribution seems reasonable, and has indeed been used for MDL [47] as well as universal source coding ([10], Section 13.2). What the Normalized Likelihood Method does is simply extend this to the case when there is no proper uniform prior for θ.

The method was actually implicitly mentioned as a remark by Rissanen in ([48], Section 3.2), but to our knowledge was never further developed; the main contribution in this paper is to introduce the method as a practical method. From Rissanen we also know the coding distribution for xn:

f(xn+1|xn)=f(xn+1|θ)fxn(θ)dθ=Cxn+1Cxn. (13)

Let us assume C(xn) becomes finite for n>1 (this is not always the case, often n needs to be larger). The total codelength can then be written as

L(xl)=i=1l1logf(xi+1|xi)logfd(x1)=logC(xl)+logC(x2)logfd(x1), (14)

where fd(x) is the default distribution, which for application in atypicality can be taken as the typical distribution. One might see this simply as a (generalized) Bayesian method. However, in general C(xn) is not a valid probability, and as mentioned in ([9], Section 3.4) an improper prior cannot be used for Bayesian model selection. But when implemented sequentially, as indicated in (14) it does give a valid codelength, because of the principle of sequentiality, central to coding.

2.3. Examples

We will compare the different methods for a simple model. Assume our model is N(0,σ2) with σ unknown. The likelihood function is f(xn|σ2)=1(2πσ2)n/2exp12σ2i=1nxi2. For n=1 we have 0f(xn|σ2)dσ2=, but for n2

Cxn=f(xn|σ2)dσ2=1πn22Γn22nσ2^nn22

then

fnlm(xn+1|xn)=Γn12πΓn22nσ2^nn22n+1σ2^n+1n12

where σ2^n=1ni=1nxi2. Thus, for coding, the two first samples would be encoded with the default distribution, and after that the above distribution is used. For the SSM, we note that σ2^n is a sufficient statistic for σ2 and that z=nσ2σ2^nχ(n)2, i.e., σ2^n=s(z,σ2)=σ2nz, which we can be solved as σ2=r(z,σ2^n)=nzσ2^n, in the notation of (9)–(10). This is a transformation of the χ(n)2 distribution which can be easily found as [6]

fxn(σ2)=nσ2^nn22n2Γn2σ2n+22expn2σ2σ2^n.

Now we have

fssm(xn+1|xn)=f(xn+1|σ2)fxn(σ2)dσ2=Γn+12πΓn2nσ2^nn2n+1σ2^n+1n+12. (15)

For comparison, the ordinary predictive MDL is

f(xn+1|xn)=12πσ2^nexp12σ2^nxn+12 (16)

which is of a completely different form. To understand the difference, consider the codelength for x2:

L(x2)=logx12+x22|x1|+logπΓ(12)Γ(1)SSM,L(x2)=12log2πx12+x22x12predictiveMDL.

It can be seen that if x1 is small and x2 is large, the codelength for x2 is going to be large. But in the sufficient statistic method this is strongly attenuated due to the log in front of the ratio. Figure 1 shows this quantitatively in the redundancy sense. The redundancy is the difference between the codelength using true and estimated distributions. As can be seen, the CDF of the ordinary predictive MDL redundancy has a long tail, and this is taken care of by SSM.

3. Scalar Signal Processing Methods

In the following we will derive MDL for various scalar signal processing methods. We can take inspiration from signal processing methods generally used for source coding, such as linear prediction and wavelets; however, the methods have to be modified for MDL, as we use lossless coding, not lossy coding. As often in signal processing, the models are a (deterministic) signal in Gaussian noise. In a previous paper we have also considered non-Gaussian models [7]. All proofs are in Appendices.

3.1. Iid Gaussian Case

A natural extension of the examples considered in Section 2.1 is xnN(μ,σ2) with both μ and σ2 unknown. Define μ^n=1ni=1nxi and Sn2=1n1i=1nxiμ^n2. Then the sufficient statistic method is

f(xn+1|xn)=nπn+1Γn2Γn12×n1Sn2n12nSn+12n2. (17)

This is a special case of the vector Gaussian model considered later, so we will not provide a proof.

3.1.1. Linear Transformations

The iid Gaussian case is a fundamental building block for other MDL methods. The idea is to find a linear transformation so that we can model the result as iid, and then use the iid Gaussian MDL. For example, in the vector case, suppose xnN(μ,Σ) is (temporally) iid, and let yn=AxnN(Aμ,AΣAT). If we then assume that AΣAT is diagonal, we can use the iid Gaussian MDL on each component. Similarly, in the scalar case, we can use a filter instead of a matrix. Because of (4) we need to require A to be orthonormal: For any input we then have ynTyn=xnTATAxn=xnTxn, and in particular E[ynTyn]=E[xnTxn] independent of the actual Σ. We will see this approach in several cases in the following.

3.2. Linear Prediction

Linear prediction is a fundamental to random processes. Write

x^n+1|xn=k=0wkxnken+1=xn+1x^n+1|xn.

Then for most stationary random processes the resulting random process {en} is uncorrelated, and hence in the Gaussian case, iid, by the Wold decomposition [6]. It is therefore a widely used method for source coding, e.g., [13]. In practical coding, a finite prediction order M is used,

x^n+1|xn=k=1Mwkxnk+1,nM

Denote by τ the power of {en}. Consider the simplest case with M=1: There are two unknown parameters (w1,τ). However, the minimal sufficient statistic has dimension three [49]: k=1nxk2,k=1n1xk2,k=2nxkxk1. Therefore, we cannot use SSM; and even if we could, the distribution of the sufficient statistic is not known in closed form [49]. We therefore turn to the NLM.

We assume that en+1=xn+1x^n+1|xn is iid normally distributed with zero mean and variance τ,

f(xn|τ,w)=1(2πτ)(nM)/2×exp12τi=M+1nxik=1Mwkxik2. (18)

Define

r^(n)(k)=i=M+1nxixik.

Then a simple calculation shows that

i=M+1nei2=r^(n)(0)2wTp(n)+wTR(n)(M)w

where wT=[w1w2wM], p(n)T=[r^(n)(1)r^(n)(2)r^(n)(M)],

R(n)(M)=i=M+1nxiMi1xiMi1T (19)

and xiMi1=[xi1,xi2,,xiM]. Thus

f(xn|τ,w)=1(2πτ)(nM)/2×exp12τr^(n)(0)2wTp(n)+wTR(n)(M)w

giving (see Appendix A)

C(xn)=12πn2M2detR(n)Γn2M22τ^(n)(M)n2M22

and

fM(xn+1|xn)=detR(n)(M)detR(n+1)(M)Γn2M12Γn2M22×1πτ^(n)(M)n2M22τ^(n+1)(M)n2M12 (20)

with τ^(n)(M)=r^(n)(0)p(n)TR(n)1p(n).

The Equation (20) is defined for n2M+2: The vector xiMi1 is defined for iM+1, and R(n)(M) defined by (19) becomes full rank when the sum contains M terms. Before the order M linear predictor becomes defined, the data needs to be encoded with other methods. Since in atypicality we are not seeking to determine the model of data, just if a different model than the typical is better, we encode data with lower order linear predictors until the order M linear predictor becomes defined. So, the first sample is encoded with the default pdf. The second and third samples are encoded with the iid unknown variance coder (There is no issue in encoding some samples with SSM and others with NLM) (15). Then the order 1 linear predictor takes over, and so on.

3.3. Filterbanks and Wavelets

A popular approach to source coding is sub-band coding and wavelets [50,51,52]. The basic idea is to divide the signal into (perhaps overlapping) spectral sub-bands and then allocate different bitrates to each sub-band; the bitrate can be dependent on the power in the sub-band and auditory properties of the ear in for example audio coding. In MDL we need to do lossless coding, so this approach cannot be directly applied, but we can still use sub-band coding as explained in the following.

As we are doing lossless coding, we will only consider perfect reconstruction filterbanks [50,53]. Furthermore, in light of Section 3.1.1 we also consider only (normalized) orthogonal filterbanks [50,52].

The basic idea is that we split the signal into a variable number of sub-bands by putting the signal through the filterbank and downsampling. Then the output of each downsampled filter is coded with the iid Gaussian coder of Section 3.1 with an unknown mean and variance, which are specific to each sub-band. In order to understand how this works, consider a filterbank with two sub-bands. Assume that the signal is stationary zero mean Gaussian with power σ2, and let the power at the output of sub-band 1 be σ12 and of sub-band 2 be σ22. Because the filterbank is orthogonal, we have σ2=12σ12+σ22. To give some intuition to why a sub-band coder can give shorter codelengh, we use (8) to get the approximate codelengths

Ldirect=l2logσ2+l2log2π+loge+12loglLfilterbank=l4logσ12+l4log2π+loge+12logl+l4logσ22+l4log2π+loge+12logl=l2logσ12σ22+l2log2π+loge+logl.

Since σ12σ22σ2 (with equality only if σ12=σ22), the sub-band coder will result in shorter codelength for sufficiently large l if the signal is non-white.

The above analysis is a stationary analysis for long sequences. However, when considering shorter sequences, we also need to consider the transient. The main issue is that output power will deviate from the stationary value during the transient, and this will affect the estimated power σn2^ used in the sequential MDL. The solution is to transmit to the receiver the input to the filterbank during the transient, and only use the output of the filterbank once the filters have been filled up. It is easy to see that the system is still perfect reconstruction: Using the received input to the filterbank, the receiver puts this through the analysis filterbank. It now has the total sequence produced by the analysis filterbank, and it can then put that through the reconstruction filterbank. When using multilevel filterbanks, this has to be done at each level.

We assume the decoder knows which filters are used and the maximum depth D used. In principle the encoder could now search over all trees of level at most D. The issue is that there are an astonishing large number of such trees; for example for D=4 there are 676 such trees. Instead of choosing the best, we can use the idea of the CTW [1,54,55] and weigh in each node: Suppose after passing a signal xn of an internal node S through low-pass and high-pass filters and downsampler, xLn/2 and xHn/2 are produced in the children nodes of S. The weighted probability of xn in the internal node S will be

fwxn=12fxn+12fwxLn/2fwxHn/2

which is a good coding distribution for both a memoryless source and a source with memory [54,55].

4. Vector Case

We now assume that a vector sequence xn, xiRM is observed. The vector case allows for a more rich set of model and more interesting data discovery than the scalar case, for example atypical correlation between multiple sensors. It can also be applied to images [56], and to scalar data by dividing into blocks. That is in particular useful for the DFT, Section 4.4.

A specific concern is initialization. Applying sequential coding verbatim to the vector case means that the first vector x1 needs to be encoded with the default coder, but this means the default coder influences the codelength too much. Instead we suggest to encode the first vector as a scalar signal using the scalar Gaussian coder (unknown variance→unknown mean/variance). That way only the first component of the first vector needs to be encoded with the default coder.

4.1. Vector Gaussian Case with Unknown Mean

First assume μ is unknown but Σ is given. We define etr=exptrace and we have

fxn|μ=12πkndetΣn×exp12i=1nxiμTΣ1xiμ.

We first consider the NLM. By defining μ^n=1ni=1nxi and Σ^n=i=1nxixi (note that Σ^n is not the estimate of Σ) we have

Cxn=fxn|μdμ=12πkndetΣnexp12i=1nxiΣ1xi×expn2μTΣ1μ+nμ^nTΣ1μdμ=Cexp12i=1nxiΣ1xiμ^nTΣ1μ^n=Cetr12Σ^nnμ^nμ^nTΣ1

where C=12πkn1nkdetΣn1, hence we can write

fxn+1|xn=Cxn+1Cxn=nn+1k12πkdetΣ×etr12Σ^n+1n+1μ^n+1μ^n+1TΣ1etr12Σ^nnμ^nμ^nTΣ1. (21)

It turns out that in this case, the SSM gives the same result.

4.2. Vector Gaussian Case with Unknown Σ

Assume xnN0,Σ where the covariance matrix is unknown:

fxn|Σ=12πkndetΣnetr12Σ^nΣ1

where Σ^n=i=1nxixiT.

In order to find the MDL using SSM, notice that we can write

xn=Szn,znN(0,I)

where S=Σ12, that is S is some matrix that satisfies SST=Σ. A sufficient statistic for Σ is

Σ^n=i=1nxixiT=Si=1nziziTST=defSUST.

Let S^n=Σ^n12=SU12. Then we can solve S=S^nU12 and Σ=S^nU1S^nT. Since U1 has Inverse-Wishart distribution U1WM1I,n, one can write ΣWM1Σ^n,n. Using this distribution we calculate in Appendix B that

fxn+1|xn=1πM2detΣ^nn2detΣ^n+1n+12ΓMn+12ΓMn2 (22)

where ΓM is the multivariate gamma function [57].

On the other hand, using the normalized likelihood method we have

Cxn=ΓMn2M+122MM+12πkn2detΣ^nn2M+12,

from which

fxn+1|xn=Cxn+1Cxn=1πk2detΣ^nn2M+12detΣ^n+1n2M2ΓMn2M2ΓMn2M+12. (23)

4.3. Vector Gaussian Case with Unknown Mean and Σ

Assume xnNμ,Σ where both mean and covariance matrix are unknown:

fxn|μ,Σ=12πMndetΣn×exp12i=1nxiμTΣ1xiμ.

It is well-known [46] that sufficient statistics are μ^n=1ni=1nxi and Σ^n=n1Sn=i=1nxiμ^nxiμ^nT. Let S be a square root of Σ, i.e., SST=Σ. We can then write

μ^n=μ+1nSzΣ^n=SUST

where zN0,I and UWMI,n1, z and U are independent, and WM is the Wishart distribution. We solve the second equation with respect to S as in Section 4.2 and the first with respect to μ, to get

Σ=S^nU1S^nTWM1Σ^n,n1μ=μ^n1nSz=μ^n1nS^nU12zNμ^n,1nΣ

where S^n is a square root of Σ^n. We can explicitly write the distributions as

fxnμ|Σ=nM2πMdetΣexpn2μμ^nTΣ1μμ^nfxnΣ=detΣ^nn122Mn12ΓMn12detΣn+M2etr12Σ^nΣ1.

Using these distributions, in Appendix C we calculate

fxn+1|xn=1πM2nn+1MdetΣ^nn12detΣ^n+1n2ΓMn2ΓMn12

and for NLM

fxn+1|xn=1πM2nn+1MdetΣ^nn12M+12detΣ^n+1n2M+12×ΓMnM12ΓMnM22.

These are very similar to the case of known mean, Section 4.2. We require one more sample before the distributions become well-defined, and Σn is defined differently.

4.4. Sparsity and DFT

We can specify a general method as follows. Let Φ be an orthonormal basis of RM and write the signal model as

xn=i=1N(Ai+si,n)ϕj(i)+wn.

Here N is the number of basis vectors used, and j(i),i=1,,N their indices. The signal si,n is iid N(0,σi), the noise wn iid N(0,σ2I), and Ai,σi2,σ2 are unknown. If we let yn=ΦTxn and J the indices of the signal components then

yj(i),n=Ai+si,n+wj(i),n=Ai+s˜i,n,j(i)Jyj,n=wj,n,jJ.

Thus the yj(i),n can be encoded with the scalar Gaussian encoder of Section 3.1, while the yj,n can be encoded with a vector Gaussian encoder for N(0,σ2IMN) using the following equation that is achieved using the SSM:

fwn+1|wn=1πMN2ΓMNn+12ΓMNn2×nτ^nMNn2n+1τ^n+1MNn+12

where τ^n=1ni=1nwiTwi. Now we need to choose which coefficients j(i) to choose as signal components and inform the decoder. The set J can be communicated to the decoder by sending a sequence of 0,1 encoded with the universal encoder of ([10], Section 13.2) with MHNM+12logM bits. The optimum set can in general only be found by trying all sets J and choosing the one with shortest codelength, which is infeasible. A heuristic approach is to find the N components with maximum power when calculated over the whole blocklength l (the decoder does not need to know how J was chosen, only what J is, it is therefore fine to use the power at the end of the block). What still remains is how to choose N. It seems computationally feasible to start with N=1 and then increase N by 1 until the codelength no longer decreases, since most of the calculations for N can be reused for N+1.

We can apply this in particular when Φ is a DFT matrix. In light of Section 3.1.1 we need to use the normalized form of the DFT. The complication is that the output is complex, i.e., the M real inputs result in M complex outputs, or 2M real outputs. Therefore, care has to be taken with the symmetry properties of the output. Another option is to use DCT instead, which is well-developed and commonly used for compression.

5. Experimental Results

5.1. Transient Detection Using Hydrophone Recordings

As an example of the application of atypicality, we will consider transient detection [28]. In transient detection, a sensor records a signal that is pure noise most of the time, and the task is to find the sections of the signal that are not noise. In our terminology, the typical signal is noise, and the task is to find the atypical parts.

As data we used hydrophone recordings from a sensor in the Hawaiian waters outside Oahu, the Station ALOHA Cabled Observatory (ACO) [58]. The data used for this paper were collected (with sampling freuquency of 96 kHz which was then downsampled to 8 kHz) during a proof module phase of the project conducted between February 2007 and October 2008. The data was pre-processed by differentiation (y[n]=x[n]x[n1]) to remove a non-informative mean component.

The principal goal of this two years of data is to locate whale vocalization. Fin (22 m, up to 80 tons) and sei (12–18 m, up to 24.6 tons) whales are known by means of visual and acoustic surveys to be present in the Hawaiian Islands during winter and spring months, but migration patterns in Hawaii are poorly understood [58].

Ground truth has been established by manual detection, which is achieved using visual inspection of spectrogram by a human operator. 24 h of manual detections for both the 20 Hz and the 20–35 Hz variable calls were recorded for each the following dates (randomly chosen): 1 March 2007, 17 November 2007, 29 May 2008, 22 August 2008, 4 September 2008 and 9 February 2008 [58].

In order to analyze the performance of different detectors on such a data, first the measures ’Precision’ and ’Recall’ are defined as below

Recall=numberofcorrectdetectionstotalnumberofmanualdetectionsPrecision=numberofcorrectdetectionstotalnumberofalgorithmdetections

where Recall measures the probability of correctly obtained vocalizations over expected number of detections and Precision measures the probability of correctly detected vocalizations obtained by the detector. The Precision versus Recall curve show the detectors ability to obtain vocalizations as well as the accuracy of these detections [58].

In order to compare our atypicality method with alternative approaches in transient detection, we compare its performance with Variable Threshold Page (VTP) which outperforms other similar methods in detection of non-trivial signals [31].

For the atypicality approach, we need a typical and an atypical coder. The typical signal is pure noise, which, however, is not necessarily white: It consists of background noise, wave motion, wind and rain. We therefore used a linear predictive coder. The order of the linear predictive coder was globally set to 10 as a compromise between performance and computational speed. An order above 10 showed no significant decrease in codelength, while increasing computation time. The prediction coefficients were estimated for each 5 min segment of data. It seems unreasonable to expect the prediction coefficients to be globally constant due to for example variations in weather, but over short length segments they can be expected to be constant. Of course, a 5 min segment could contain atypical data and that would result in incorrect typical prediction coefficients. However, for this particular data we know (or assume) that atypical segments are of very short duration, and therefore will affect the estimated coefficients very little. This cannot be used for general data sets, only for data sets where there is a prior knowledge (or assumption) that atypical data are rare and short. Otherwise the typical coder should be trained on data known to be typical as in [1] or by using unsupervised atypicality, which we are developing for a future paper.

For the atypical coder, we implemented all the scalar methods of Section 3 in addition to the DFT, Section 4.4, with optimization over blocklength. Let X(n,l)=(xn,,xn+l1) be a subsequence of length l to be tested for atypicality, and suppose LTX(n,l) and LAX(n,l) are the typical codelength and atypical codelength of sequence X(n,l), respectively. Note that LAX(n,l)=logf(X(n,l))+log*l where f is any encoder of Section 3 and Section 4.4, and log*l is the number of bits to tell the decoder the length of the atypical subsequence, as discussed in Section 1.1, see also [1,59]. Then for every sample of data we calculate

ΔL(n)=maxlLT(X(n,l))LA(X(n,l)) (24)

and the atypicality criterion would be ΔL(n)>τ for some threshold (which does not need to be chosen prior to running the algorithm, since the larger ΔL(n) is the more atypical). Please note that the threshold τ can be seen as the length of the header the encoder uses to tell the decoder an atypical sequence is next. Calculating ΔL(n) requires examining every subsequence (perhaps up to a maximum length). Because the coders (e.g., (5)) are recursive, we can efficiently calculate LAX(n,l+1) from LAX(n,l), so the complexity is not prohibitive. Still, for a large dataset (i.e., big data), direct implementation of atypicality search is too computationally complex; so instead, similar to [59] we propose a tree-structured searching algorithm in which discovery of atypical sequence (in this case, whale vocalizations) can be performed in different stages. First in coarse search, a tree-structured division of data is considered such that at each level i, data is divided into non-overlapping blocks of length 2i, then for each block typical and atypical codelengths are compared. Obviously due to non-overlapping division some atypical sequences are missed, and the worse case is if an atypical sequence of length l is divided equally into two consecutive non-overlapping blocks of length 2i. However, each of these sequences of length l/2 might be detected at the level i1. The issue is that the complexity penalty per sample from (8) is about k2logll, which is decreasing in l. Thus, a sequence of length l may be atypical, but each of the length l2 halves may not be. This can be compensated by repeating every block once and encoding this double length block. By experimentation we have found that this gives a very low chance of missing an atypical subsequence. On the other hand, it does give false positives, because an exactly repeated block clearly has a strong (false) pattern. This is not a big issue, as these false positives are eliminated during the next stage.

After the coarse search, the next stage is fine search, in which the blocks flagged by coarse search are expanded and every subsequence of this expanded block is tested in an exhaustive search, which eliminates false positives. The final stage is segmentation, where the exact start and end point of atypical sequences are determined by minimizing the total codelength of the whole sequence of data. Figure 2 shows Precision vs. Recall curve for both atypicality and VTP.

Figure 2.

Figure 2

Precision vs. Recall probability for all six days that manual detections are available.

5.2. Anomaly Detection Using Holter Monitoring Data

As another example of atypicality application, we consider an anomaly detection problem. We consider data obtained by Holter Monitoring, i.e., a continuous tape recording of a patient’s ECG for 24 h. We use the MIT-BIH Normal Sinus Rhythm Database (nsrdb) which is provided by PhysioNet [60]. Even though the subjects included in this database were found to have had no significant persistent arrhythmias, there still existed arrhythmic beats and patterns to look for [60]. We apply atypicality to find interesting parts of the the dataset.

Since the data is assumed to be ‘Normal Sinus Rhythm’, a Gaussian model with unknown mean and variance is assumed for the typical data. For atypical encoding, we used the same methodology as in the previous section. As can be seen in the Figure 3, atypicality as an anomaly detector was able to find two major atypical segments, both of which contained multiple supraventricular beats and ventricular contraction (provided by HRV annotation files, PhysioNet [60]). Based on the data annotation these two segments were the only fractions in the data that contained abnormal beats and rhythms, which shows the efficacy of the atypicality framework. For comparison we included VTP as a transient detection method and the pruned exact linear time (PELT) method [37] as a change-point detection algorithm. As can be seen, VTP and PELT detected only one of the anomalous segments, while atypicality detected both.

Figure 3.

Figure 3

Detected atypical segments of Holter Monitoring heart rate variability (HRV): “S” stands for supraventricular arrhythmia and “V” stands for ventricular contraction based on annotation provided by PhysioNet [60].

6. Conclusions

Atypicality is a method for finding rare, interesting snippets in big data. It can be used for anomaly detection, data mining, transient detection, and knowledge extraction among other things. The current paper extended atypicality to real-valued data. It is important here to notice that discrete-valued and real-valued atypicality is one theory. Atypicality can therefore be used on data that are of mixed type. One advantage of atypicality is that it directly applies to sequences of variable length. Another advantage is that there is only one parameter that regulates atypicality, the single threshold parameter τ, which has the concrete meaning of the logarithm of the frequency of atypical sequences. This contrasts with other methods that have multiple parameters.

Atypicality becomes really interesting in combination with machine learning. First, atypicality can be used to find what is not learned in machine learning. Second, for many data sets, machine learning is needed to find the typical coder. In the experiments in this paper, we did not need machine learning because the typical data was pure noise. But in many other types of data, e.g., ECG (electrocardiogram), ‘normal’ data is highly complex, and the optimum coder has to be learned with machine learning. This is a topic for future research.

Appendix A. Linear Prediction

We showed

f(xn|τ,w)=1(2πτ)(nM)/2×exp12τr^(n)(0)2wTp(n)+wTR(n)(M)w,

therefore using NLM we have

C(xn)=f(xn|τ,w)dwdτ=AττnM2expr^(n)(0)2τe1τdτ

where A=1(2π)nM/2 and e1τ=wexp12τwTR(n)w2p(n)Twdw. Hence

C(xn)=Bττn2M2exp12τr^(n)(0)p(n)TR(n)1p(n)dτ=Bττn2M2exp12ττ^(n)(M)dτ=12πn2M2detR(n)Γn2M22τ^(n)(M)n2M22

where B=1(2π)n2M/2detR(n).

Appendix B. Vector Gaussian Case: Unknown Σ

We showed that Σ has Inverse-Wishart distribution ΣWM1Σ^n,n where Σ^n=i=1nxixi, hence

fxnΣ=detΣ^nn22nM2ΓMn2detΣn+M+12etr12Σ^nΣ1,

and since

fxn+1|Σ=12πMdetΣetr12Σ^n+1Σ^nΣ1,

therefore we have

fxn+1|xn=Σ>0fxn+1|ΣfxnΣdΣ=CΣ>0detΣn+M+22etr12Σ^n+1Σ1dΣ=(A)CY>0detYn2M2etr12Σ^n+1YdY=DV>0detVn2M2etrVdV=(B)DV>0detVn+12M+12etrVdV=1πM2detΣ^nn2detΣ^n+1n+12ΓMn+12ΓMn2

where C=detΣ^nn22Mn+12ΓMn2πM2 and D=detΣ^nn2detΣ^n+1n+121ΓMn2πM2, and in Equations (A) and (B) we changed the variable Σ=Y1 and Y=2Σ^n12VΣ^n12 respectively and Γma=V>0detVam+12etrVdV is the multivariate Gamma function.

Appendix C. Vector Gaussian Case: Unknown Mean and Σ

We showed that μNμ^n,1nΣ and ΣWM1Σ^n,n1 where μ^n=1ni=1nxi and Σ^n=i=1nxiμ^nxiμ^nT. Now using Bayes we can write the joint pdf as fxnμ,Σ=fxnμ|ΣfxnΣ. Define A=deffxn+1|xn=Σ>0fxn+1|μ,Σfxnμ,ΣdμdΣ,

A=BΣ>0detΣn+M+22e1Σe2ΣdΣ

where

e1Σ=etr12Σ^n+nμ^nμ^nT+xn+1xn+1TΣ1e2Σ=expn+12μTΣ1μ2μ^n+1Σ1μdμ=2πMdetΣn+1Mexpn+12μ^n+1TΣ1μ^n+1B=detΣ^nn12ΓMn12nM22Mn122πM.

Now since Σ^n+1=Σ^n+nμ^nμ^nT+xn+1xn+1Tn+1μ^n+1μ^n+1T, by defining C=defB2πMn+1M=nn+1MdetΣ^nn12ΓMn1212Mn122πM2 we can write

A=CΣ>0detΣn+M+12etr12Σ^n+1Σ1dΣ=CY>0detYn2M+12etr12Σ^n+1YdY=C2Mn2detΣ^n+1n2V>0detVn2M+12etrVdV=1πM2nn+1MdetΣ^nn12detΣ^n+1n2ΓMn2ΓMn12.

Author Contributions

Conceptualization, E.S. and A.H.-M.; Methodology, E.S. and A.H.-M.; software, E.S.; Validation, E.S.; Formal analysis, E.S. and A.H.-M.; Investigation, E.S.; resources, A.H.-M.; Data curation, E.S.; Writing—original draft preparation, E.S. and A.H.-M.; Writing—review and editing, E.S. and A.H.-M.; Visualization, E.S.; Supervision, A.H.-M.; Project administration, A.H.-M.; Funding acquisition, A.H.-M.

Funding

This work was supported in part by the NSF grants EECS-1546980, CCF-1434600 and the NSF Center for Science of Information (CSoI), and by Shenzhen Peacock Plan under Grant No. KQTD2015033114415450.

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1.Høst-Madsen A., Sabeti E., Walton C. Data Discovery and Anomaly Detection Using Atypicality: Theory. IEEE Trans. Inf. Theory. 2016 submitted. [Google Scholar]
  • 2.Chandola V., Banerjee A., Kumar V. Anomaly Detection for Discrete Sequences: A Survey. IEEE Trans. Knowl. Data Eng. 2012;24:823–839. doi: 10.1109/TKDE.2010.235. [DOI] [Google Scholar]
  • 3.Li Y., Nitinawarat S., Veeravalli V.V. Universal Outlier Hypothesis Testing. IEEE Trans. Inf. Theory. 2014;60:4066–4082. doi: 10.1109/TIT.2014.2317691. [DOI] [Google Scholar]
  • 4.Li Y., Nitinawarat S., Veeravalli V.V. Universal Outlier Detection; Proceedings of the Information Theory and Applications Workshop (ITA); San Diego, CA, USA. 10–15 February 2013; pp. 1–5. [Google Scholar]
  • 5.Li Y., Nitinawarat S., Veeravalli V.V. Universal Sequential Outlier Hypothesis Testing; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Honolulu, HI, USA. 29 June–4 July 2014; pp. 3205–3209. [Google Scholar]
  • 6.Grimmett G.R., Stirzaker D.R. Probability and Random Processes. 3rd ed. Oxford University Press; Oxford, UK: 2001. [Google Scholar]
  • 7.Sabeti E., Host-Madsen A. Atypicality for the Class of Exponential Family; Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton); Monticello, IL, USA. 27–30 September 2016. [Google Scholar]
  • 8.Kay S.M. Fundamentals of Statistical Signal Processing, Volume II: Detection Theory. Prentice-Hall; Upper Sadle River, NJ, USA: 1993. [Google Scholar]
  • 9.Bishop C.M. Pattern Recognition and Machine Learning. Springer; Berlin, Germany: 2006. [Google Scholar]
  • 10.Cover T., Thomas J. Information Theory. 2nd ed. John Wiley; Hoboken, NJ, USA: 2006. [Google Scholar]
  • 11.Ziv J., Lempel A. A Universal Algorithm for Sequential Data Compression. IEEE Trans. Inf. Theory. 1977;23:337–343. doi: 10.1109/TIT.1977.1055714. [DOI] [Google Scholar]
  • 12.Ziv J., Lempel A. Compression of Individual Sequences via Variable-Rate Coding. IEEE Trans. Inf. Theory. 1978;24:530–536. doi: 10.1109/TIT.1978.1055934. [DOI] [Google Scholar]
  • 13.Ghido F., Tabus I. Sparse Modeling for Lossless Audio Compression. IEEE Trans. Audio Speech Lang. Proc. 2013;21:14–28. doi: 10.1109/TASL.2012.2211014. [DOI] [Google Scholar]
  • 14.Rissanen J. A Universal Prior for Integers and Estimation by Minimum Description Length. Ann. Stat. 1983;11:416–431. doi: 10.1214/aos/1176346150. [DOI] [Google Scholar]
  • 15.Kostina V. Data Compression With Low Distortion and Finite Blocklength. IEEE Trans. Inf. Theory. 2017;63:4268–4285. doi: 10.1109/TIT.2017.2676811. [DOI] [Google Scholar]
  • 16.Rissanen J. Stochastic Complexity and Modeling. Ann. Stat. 1986;14:1080–1100. doi: 10.1214/aos/1176350051. [DOI] [Google Scholar]
  • 17.Chandola V., Banerjee A., Kumar V. Anomaly Detection: A Survey. ACM Comput. Surv. 2009;41:15. doi: 10.1145/1541880.1541882. [DOI] [Google Scholar]
  • 18.Ranshous S., Shen S., Koutra D., Harenberg S., Faloutsos C., Samatova N.F. Anomaly Detection in Dynamic Networks: A Survey. WIREs Comput. Stat. 2015;7:223–247. doi: 10.1002/wics.1347. [DOI] [Google Scholar]
  • 19.Lee Y.J., Yeh Y.R., Wang Y.C.F. Anomaly Detection via Online Oversampling Principal Component Analysis. IEEE Trans. Knowl. Data Eng. 2013;25:1460–1470. doi: 10.1109/TKDE.2012.99. [DOI] [Google Scholar]
  • 20.Pimentel M.A., Clifton D.A., Clifton L., Tarassenko L. A Review of Novelty Detection. Signal Process. 2014;99:215–249. doi: 10.1016/j.sigpro.2013.12.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Esling P., Agon C. Time-Series Data Mining. ACM Comput. Surv. 2012;45:12. doi: 10.1145/2379776.2379788. [DOI] [Google Scholar]
  • 22.Li W., Mahadevan V., Vasconcelos N. Anomaly Detection and Localization in Crowded Scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2014;36:18–32. doi: 10.1109/TPAMI.2013.111. [DOI] [PubMed] [Google Scholar]
  • 23.Jia Z., Shen C., Yi X., Chen Y., Yu T., Guan X. Big-Data Analysis of Multi-Source Logs for Anomaly Detection on Network-Based System; Proceedings of the 13th IEEE Conference on Automation Science and Engineering (CASE); Xi’an, China. 20–23 August 2017; pp. 1136–1141. [Google Scholar]
  • 24.Ahmed M., Mahmood A.N., Hu J. A Survey of Network Anomaly Detection Techniques. J. Netw. Comp. Appl. 2016;60:19–31. doi: 10.1016/j.jnca.2015.11.016. [DOI] [Google Scholar]
  • 25.Yoon M.K., Mohan S., Choi J., Christodorescu M., Sha L. Learning Execution Contexts from System Call Distribution for Anomaly Detection in Smart Embedded System; Proceedings of the Second International Conference on Internet-of-Things Design and Implementation; Pittsburgh, PA, USA. 18–21 April 2017; pp. 191–196. [Google Scholar]
  • 26.Sari A. A Review of Anomaly Detection Systems in Cloud Networks and Survey of Cloud Security Measures in Cloud Storage Applications. J. Inf. Secur. 2015;6:142. doi: 10.4236/jis.2015.62015. [DOI] [Google Scholar]
  • 27.Høst-Madsen A., Sabeti E., Walton C., Lim S.J. Universal Data Discovery Using Atypicality; Proceedings of the 3rd International Workshop on Pattern Mining and Application of Big Data (BigPMA 2016) at the 2016 IEEE International Conference on Big Data (Big Data 2016); Washington, DC, USA. 5–8 December 2016. [Google Scholar]
  • 28.Han C., Willett P., Chen B., Abraham D. A Detection Optimal Min-Max Test for Transient Signals. IEEE Trans. Inf. Theory. 1998;44:866–869. doi: 10.1109/18.661537. [DOI] [Google Scholar]
  • 29.Wang Z., Willett P. A Performance Study of Some Transient Detectors. IEEE Trans. Signal Proc. 2000;48:2682–2685. doi: 10.1109/78.863080. [DOI] [Google Scholar]
  • 30.Wang Z., Willett P.K. All-Purpose and Plug-In Power-Law Detectors for Transient Signals. Trans. Signal Proc. 2001;49:2454–2466. doi: 10.1109/78.960393. [DOI] [Google Scholar]
  • 31.Wang Z.J., Willett P. A Variable Threshold Page Procedure for Detection of Transient Signals. IEEE Trans. Signal Proc. 2005;53:4397–4402. doi: 10.1109/TSP.2005.857060. [DOI] [Google Scholar]
  • 32.Guépié B.K., Fillatre L., Nikiforov I. Sequential Detection of Transient Changes. Seq. Anal. 2012;31:528–547. doi: 10.1080/07474946.2012.719443. [DOI] [Google Scholar]
  • 33.Egea-Roca D., López-Salcedo J.A., Seco-Granados G., Poor H.V. Performance Bounds for Finite Moving Average Tests in Transient Change Detection. IEEE Trans. Signal Proc. 2018;66:1594–1606. doi: 10.1109/TSP.2017.2788416. [DOI] [Google Scholar]
  • 34.Guépié B.K., Fillatre L., Nikiforov I. Detecting a Suddenly Arriving Dynamic Profile of Finite Duration. IEEE Trans. Inf. Theory. 2017;63:3039–3052. [Google Scholar]
  • 35.Hirai S., Yamanishi K. Detecting Changes of Clustering Structures Using Normalized Maximum Likelihood Coding; Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Beijing, China. 12–16 August 2012; pp. 343–351. [Google Scholar]
  • 36.Yamanishi K., Miyaguchi K. Detecting Gradual Changes from Data Stream Using MDL-Change Statistics; Proceedings of the IEEE International Conference on Big Data (Big Data); Washington, DC, USA. 5–8 December 2016; pp. 156–163. [Google Scholar]
  • 37.Killick R., Fearnhead P., Eckley I.A. Optimal Detection of Changepoints with a Linear Computational Cost. J. Am. Stat. Assoc. 2012;107:1590–1598. doi: 10.1080/01621459.2012.737745. [DOI] [Google Scholar]
  • 38.Zou S., Fellouris G., Veeravalli V.V. Quickest Change Detection under Transient Dynamics: Theory and Asymptotic Analysis. IEEE Trans. Inf. Theory. 2018:1. doi: 10.1109/TIT.2018.2877972. [DOI] [Google Scholar]
  • 39.Molloy T.L., Ford J.J. Minimax Robust Quickest Change Detection in Systems and Signals with Unknown Transients. IEEE Trans. Autom. Control. 2018:1. doi: 10.1109/TAC.2018.2872198. [DOI] [Google Scholar]
  • 40.Veeravalli V.V., Banerjee T. Quickest Change Detection. Acad. Press Library Signal Proc. 2013;3:209–256. [Google Scholar]
  • 41.Fuh C.D., Tartakovsky A.G. Asymptotic Bayesian Theory of Quickest Change Detection for Hidden Markov Models. IEEE Trans. Inf. Theory. 2019;65:511–529. doi: 10.1109/TIT.2018.2843379. [DOI] [Google Scholar]
  • 42.Lavielle M. Using Penalized Contrasts for the Change-Point Problem. Signal Proc. 2005;85:1501–1510. doi: 10.1016/j.sigpro.2005.01.012. [DOI] [Google Scholar]
  • 43.Larsen R.J., Marx M. An Introduction to Mathematical Statistics and Its Applications. Volume 2 Prentice-Hall; Englewood Cliffs, NJ, USA: 1986. [Google Scholar]
  • 44.Roos T., Rissanen J. On Sequentially Normalized Maximum Likelihood Models; Proceedings of the Workshop on Information Theoretic Methods in Science and Engineering (WITMSE-08); Tampere, Finland. 18 August 2008. [Google Scholar]
  • 45.Sabeti E., Host-Madsen A. Enhanced MDL with Application to Atypicality; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Aachen, Germany. 25–30 June 2017. [Google Scholar]
  • 46.Scharf L.L. Statistical Signal Processing: Detection, Estimation, and Time Series Analysis. Addison-Wesley; Boston, MA, USA: 1990. [Google Scholar]
  • 47.Grünwald P.D. The Minimum Description Length Principle. MIT Press; Cambridge, MA, USA: 2007. [Google Scholar]
  • 48.Rissanen J. Stochastic Complexity in Statistical Inquiry. Volume 15 World Scientific; Singapore: 1998. [Google Scholar]
  • 49.Forchini G. The Density of the Sufficient Statistics for a Gaussian AR(1) Model in Terms of Generalized Functions. Stat. Probab. Lett. 2000;50:237–243. doi: 10.1016/S0167-7152(00)00111-5. [DOI] [Google Scholar]
  • 50.Mallat S. A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press; Cambridge, MA, USA: 2008. [Google Scholar]
  • 51.Vetterli M., Kovacevic J. Wavelets and Subband Coding. Volume 995 Prentice Hall; Englewood Cliffs, NJ, USA: 1995. [Google Scholar]
  • 52.Vetterli M., Herley C. Wavelets and Filter Banks: Theory and Design. IEEE Trans. Signal Process. 1992;40:2207–2232. doi: 10.1109/78.157221. [DOI] [Google Scholar]
  • 53.Mitra S.K., Kuo Y. Digital Signal Processing: A Computer-Based Approach. Volume 2 McGraw-Hill; New York, NY, USA: 2006. [Google Scholar]
  • 54.Willems F.M.J., Shtarkov Y., Tjalkens T. The Context-Tree Weighting Method: Basic Properties. IEEE Trans. Inf. Theory. 1995;41:653–664. doi: 10.1109/18.382012. [DOI] [Google Scholar]
  • 55.Willems F., Shtarkov Y., Tjalkens T. Reflections on “The Context Tree Weighting Method: Basic properties”. Newslett. IEEE Inf. Theory Soc. 1997;47:1. [Google Scholar]
  • 56.Sabeti E., Høst-Madsen A. How interesting images are: An Atypicality Approach For Social Networks; Proceedings of the IEEE International Conference on Big Data (Big Data); Washington, DC, USA. 5–8 December 2016. [Google Scholar]
  • 57.Muirhead R.J. Aspects of Multivariate Statistical Theory. Volume 197 John Wiley & Sons; Hoboken, NJ, USA: 2009. [Google Scholar]
  • 58.Silver K. Master’s Thesis. University of Hawaii; Honolulu, HI, USA: Nov 12, 2014. A Passive Acoustic Automated Detector for Sei and Fin Whale Calls. [Google Scholar]
  • 59.Host-Madsen A., Sabeti E. Atypical Information Theory for Real-Valued Data; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Hong Kong, China. 14–19 June 2015; pp. 666–670. [Google Scholar]
  • 60.Goldberger A.L., Amaral L.A.N., Glass L., Hausdorff J.M., Ivanov P.C., Mark R.G., Mietus J.E., Moody G.B., Peng C.K., Stanley H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation. 2000;101:e215–e220. doi: 10.1161/01.CIR.101.23.e215. [DOI] [PubMed] [Google Scholar]
