Abstract
The aim of using atypicality is to extract small, rare, unusual and interesting pieces out of big data. This complements statistics of typical data in giving insight into the data. In order to find such “interesting” parts of data, universal approaches are required, since it is not known in advance what we are looking for. We therefore base the atypicality criterion on codelength. In a prior paper we developed the methodology for discrete-valued data, and the current paper extends this to real-valued data. This is done by using minimum description length (MDL). We develop the information-theoretic methodology for a number of “universal” signal processing models, and finally apply them to recorded hydrophone data and a heart rate variability (HRV) signal.
Keywords: atypicality, minimum description length, big data, codelength
1. Introduction
A central question in the era of “big data” is what to do with the enormous amount of information. One possibility is to characterize it through statistics, e.g., averages, or classify it using machine learning, in order to understand the general structure of the overall data. The perspective in this paper is the opposite, namely that most of the value in the information—in some applications—is in the parts that deviate from the average, that are unusual, atypical. Think of art: The valuable paintings or writings are those that deviate from the norms and break the rules, that are atypical. Or groundbreaking scientific discoveries, which find new structure in data. Finding such unusual data is often done by painstaking human evaluation of data. The goal of our work is to find practical, automatic methods for localizing atypical parts of data.
When searching for atypical data, a key characteristic is that we do not know what we are looking for, we are looking for the “unknown unknowns”. We therefore need universal methods. In the paper [1] we developed a methodology, atypicality, that can be used to discover such data. The basic idea is that if some data can be encoded with a shorter codelength in itself, i.e., with a universal source coder, rather than using the optimum coder for typical data, then it is atypical. The purpose of the current paper is to generalize this to real-valued data. Lossless source coding does not generalize directly to real-valued data. Instead we can use minimum description length (MDL). In the current paper we develop an approach to atypicality based on MDL, and show its usefulness on a real dataset.
In this section, before an extensive literature review of detection problems, we first describe the concept of atypicality and how this framework can be used for data discovery. This arrangement is essential in order to compare atypicality with state-of-the-art methods.
1.1. Anomaly Detection and Data Discovery Based on Description Length
A common way to define an outlier or anomaly in data is a sample that does not fit the statistics of typical data [2], e.g., if typical data is described by a pdf $f(x)$, and $f(x) < \epsilon$ for some threshold $\epsilon$, then x is an outlier. In this paper we approach the problem of anomaly detection, and in particular data discovery, from a different point of view. We consider sequences of data $x^l = (x_1,\ldots,x_l)$, and say that a sequence is atypical if there is some alternative model that ’fits’ the data better than the typical model. This point of view has been considered before in anomaly detection, e.g., [3]. Given a typical probability distribution, data that is unlikely could simply be, well, outliers, e.g., faulty measurements, and not of much interest in itself. Requiring data to fit an alternative model gives an indication that there is some interesting, new relationship in the data. We therefore think of this approach as going beyond simply finding anomalous data, to finding interesting data, i.e., data discovery.
In our paper [1] we used universal source coding for anomaly detection; in [3,4,5] the authors used a type of universal empirical histogram. This kind of methodology is feasible when data is discrete. However, real-valued data is too rich for such universal descriptions. Models for real-valued data are almost always given as parametric models, either directly or indirectly. Our approach to atypicality for real-valued data, in the absence of universal coders, is to consider multiple ‘universal’ real-valued models given by parametric models. For example, it is well-known [6] that by the Wold decomposition (almost) all Gaussian stationary processes can be described in terms of a linear prediction model. Wavelets are also good for compressing (lossily) many signals and images. One can therefore expect these will also work well as alternative models. Most modeling and compression are based on a second order analysis, and therefore fit with Gaussian models. One could be interested in also finding atypical data that does not fit a Gaussian model; however, apart from iid (independent, identically distributed) models (similar to [3]), this is difficult to do, so the richness of non-Gaussian models is limited. We will therefore focus on Gaussian models in this paper; notice, however, this is not a limitation of atypicality, as we have considered non-Gaussian models in [7].
Consider an atypicality setup where the typical model is given by a probability density function (pdf) $f_t(x)$ and the atypical model is given by $f(x;\theta)$ with $\theta$ unknown. Asking if the atypical model is better can be thought of simply as a generalized likelihood ratio test (GLRT) hypothesis test [8].
However, in atypicality we would like to test the sequence with respect to a large class of alternative hypotheses—even the class of linear prediction models is infinite. So, assume we have a finite or countably infinite set of model classes with corresponding pdfs $f_k(x;\theta_k)$, $k=1,2,\ldots$. A test could then be
$$\max_k \sup_{\theta_k} f_k\!\left(x^l;\theta_k\right) > f_t\!\left(x^l\right). \qquad (1)$$
However, this is clearly not very useful. More and more complex models will fit data better and better [9], so that the false alarm probability will be very large—model complexity has to be taken into account. One way to do this is through Bayesian statistics, assigning prior probabilities to both models and parameters, ending up with the test
$$\pi_a \sum_k \pi(k)\int f_k\!\left(x^l;\theta_k\right)\pi_k(\theta_k)\,d\theta_k \;>\; (1-\pi_a)\, f_t\!\left(x^l\right), \qquad (2)$$
where $\pi_a$ is the probability of a sequence being atypical and $\pi(k)$ the probability of the alternative model k. The issue is that using (2) requires choosing a lot of prior distributions $\pi_k(\theta_k)$ and being able to calculate the marginal distributions $\int f_k(x^l;\theta_k)\pi_k(\theta_k)\,d\theta_k$. As explained in, for example, ([9], Sections 3.4–3.5), these are not easy problems to tackle. Priors are often dictated by the need for the integral to be calculable, rather than actual prior information, and this still leaves parameters unknown (‘hyperparameters’). In addition, choosing prior distributions is anathema to the central idea of looking for unknown data in big data. The whole point is that we know very little about the data we are looking for.
This is where we can use description length. Suppose at first that data is discrete-valued. To each sequence $x^l$ we assign a codeword with length $L(x^l)$. The codewords have to be prefix-free and the lengths therefore have to satisfy the Kraft inequality [10]: $\sum_{x^l} 2^{-L(x^l)} \le 1$. If we let $P(x^l) = 2^{-L(x^l)}$, this defines a (sub)probability on the data, which can be used in a hypothesis test. One can think of description length and coding as a method to find probabilities. There is a key advantage in using description length, as explained in the following. In decoding, a decoder reads a sequence of bits sequentially and turns this into a copy of the source sequence; the codes must be prefix-free. Key here is that in the current step the decoder can only use what is decoded in prior steps. Therefore, when the source sequence is encoded, the encoder cannot use future samples to encode the current sample. We call this ’the principle of sequentiality’. It is the Kraft inequality in reverse: In one direction, as above, we can use the Kraft inequality to verify that a set of codelengths gives valid codes. In the other direction, when codes are decodable (in the prefix-free sense), they must satisfy the Kraft inequality, and the corresponding probabilities must therefore be valid. An example is Lempel-Ziv coding [10,11,12], which does not explicitly rely on probabilities. It gives valid codewords because the coding is decodable with a sequential decoder.
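As a small illustration of the correspondence between codelengths and probabilities, the following sketch (function names are ours, not from the paper) checks the Kraft inequality for a set of codelengths and converts them to the implied (sub)probabilities.

```python
# Minimal sketch: codelengths that satisfy the Kraft inequality define a
# (sub)probability on the data, which is the link between coding and testing.
def kraft_sum(lengths):
    """Sum of 2^{-L} over all codeword lengths L."""
    return sum(2.0 ** -L for L in lengths)

def lengths_to_probabilities(lengths):
    """Interpret codelengths as the (sub)probabilities P = 2^{-L}."""
    assert kraft_sum(lengths) <= 1.0 + 1e-12, "not decodable: Kraft inequality violated"
    return [2.0 ** -L for L in lengths]

# Example: a prefix-free code with lengths 1, 2, 3, 3 (e.g., 0, 10, 110, 111)
lengths = [1, 2, 3, 3]
print(kraft_sum(lengths))                 # 1.0 -> a complete code
print(lengths_to_probabilities(lengths))  # [0.5, 0.25, 0.125, 0.125]
```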
To generalize the coding approach to real-valued data, lossless coding is needed. One can notice that lossless coding of real-valued data is used in many applications, for example lossless audio coding [13]. However, direct encoding of the reals represented as binary numbers, as done in lossless audio coding, makes the methods too dependent on data representation rather than the underlying data. Instead we will use a more abstract model of (finite-precision) reals. We will assume a fixed-point representation with a (large) finite number, r, of bits after the binary point, and an unlimited number of bits prior to the binary point, as in [14]. Assume that the actual data is distributed according to a pdf $f(x)$. Then the number of bits required to represent x is given by
$$L(x) \approx -\log f(x) + r. \qquad (3)$$
As we are only interested in comparing codelengths, the dependency on r cancels out. Suppose we want to decide between two models $f_1$ and $f_2$ for the data. Then we decide $f_1$ if $-\log f_1(x) + r < -\log f_2(x) + r$, which is $f_1(x) > f_2(x)$. Thus, for the typical codelength we can simply use $L_t(x) = -\log f_t(x)$. One can also argue for this codelength more fundamentally from finite blocklength rate-distortion in the limit of low distortion [15], which makes it more theoretically well-founded. Notice that this codelength is not scaling invariant: for $Y = aX$,
$$L(ax) = -\log f_Y(ax) + r = -\log f_X(x) + \log|a| + r = L(x) + \log|a|, \qquad (4)$$
which means care has to be taken when transforms of data are considered. To code the atypical distributions, as the decoder does not know the values of the parameters, both data and parameters in parametric models have to be encoded for a decoder to be able to decode; this was the starting point in the original paper on MDL [14]. One could also use a Bayesian distribution as in (2), but that does not solve the issues with using Bayes. Instead we can use the principle of sequentiality of coding as follows. We replace the marginal distributions in (2) with a codelength based on Rissanen’s predictive MDL [16].
$$L(x^l) = -\sum_{n=1}^{l-1}\log f\!\left(x_{n+1};\hat\theta(x^n)\right), \qquad (5)$$
where $\hat\theta(x^n)$ is the maximum likelihood estimate of the parameter based on $x^n = (x_1,\ldots,x_n)$. Since this is sequentially decodable, it gives a valid codelength, and hence probability, without any prior distribution on $\theta$. It does not work for the first sample, as there is no estimate. Instead we encode the first sample with a default distribution. In general applications of MDL the choice of the default distribution can be tricky, but for atypicality we have a good default distribution: The typical distribution, giving the codelength
$$L(x^l) = -\log f_t(x_1) - \sum_{n=1}^{l-1}\log f\!\left(x_{n+1};\hat\theta(x^n)\right). \qquad (6)$$
Notice that the default distribution is the same for all models k: we do not have to choose a prior for each model. There are no prior assumptions involved, since we use the typical distribution. We still need the probabilities $\pi(k)$ of the models; here we can use Rissanen’s universal coder of the integers [14]. The codelength for an integer i is $L^*(i) = \log^* i + \log c$, where c is a normalization constant in the Kraft inequality [14] and $\log^* i = \log i + \log\log i + \log\log\log i + \cdots$, with the sum continuing as long as the log is defined. We order the models according to complexity and encode the ordinal of a model. The description length test for the sequence $x^l$ to be atypical then becomes
$$\min_k\left\{ L_k(x^l) + L^*(k)\right\} < -\log f_t\!\left(x^l\right). \qquad (7)$$
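The following is a minimal sketch of (5)–(7) for a toy setup, assuming a single alternative model (iid Gaussian with unknown mean and the typical variance) and a threshold tau playing the role of the header length discussed later; the function names and the specific models are illustrative assumptions, not the paper’s implementation.

```python
import math

LOG2E = math.log2(math.e)

def nll_gauss_bits(x, mu, var):
    """Codelength -log2 N(x; mu, var), ignoring the common quantization term r."""
    return 0.5 * math.log2(2 * math.pi * var) + LOG2E * (x - mu) ** 2 / (2 * var)

def log_star_bits(i, c=2.865):
    """Rissanen's universal code for the integers: log*(i) + log2(c)."""
    total, t = math.log2(c), float(i)
    while True:
        t = math.log2(t)
        if t <= 0:
            break
        total += t
    return total

def predictive_mdl_bits(x, var, typical_mu=0.0, typical_var=1.0):
    """Predictive MDL (6): the first sample is coded with the typical (default) pdf,
    each later sample with the model under the current ML estimate of the mean."""
    bits = nll_gauss_bits(x[0], typical_mu, typical_var)
    s = x[0]
    for n in range(1, len(x)):
        mu_hat = s / n                      # ML estimate from x_1..x_n
        bits += nll_gauss_bits(x[n], mu_hat, var)
        s += x[n]
    return bits

def atypical(x, model_index=1, tau=10.0, typical_mu=0.0, typical_var=1.0):
    """Test in the spirit of (7) with one alternative model: shorter description wins."""
    L_typ = sum(nll_gauss_bits(xi, typical_mu, typical_var) for xi in x)
    L_atyp = predictive_mdl_bits(x, var=typical_var) + log_star_bits(model_index)
    return L_atyp + tau < L_typ

print(atypical([5.1, 4.8, 5.3, 4.9, 5.2, 5.0]))   # strongly shifted mean -> True
```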
The appeal of coding becomes even clearer when we search for atypical subsequences of long sequences. Using coding this can be done as follows. The coder uses a special header, a codeword that is not a prefix of any codeword used for the actual data, to denote the start of a subsequence—the decoder will then know it needs to use the atypical decoder. It also encodes the length of the atypical subsequence using Rissanen’s universal coder for the integers [14], adding $\log^* l$ bits to the codelength, so that the decoder knows when to switch back to the typical coder. The whole sequence is sequentially decodable, thus has a valid probability, and we know from [1] that this gives a valid criterion, at least for iid sequences, in the sense that not the whole sequence will be classified as atypical; the key is the insistence on decodability. It would be difficult to do this directly using Bayesian analysis, as we would have to develop probability distributions for the total sequence for every combination of atypical subsequences in the long sequence. To be precise, for every set of potential atypical subsequences we would have to calculate the probability of the total sequence, and then choose the set giving the largest probability, i.e., MAP.
To understand how (7) avoids the overfitting problem of (1), we notice that asymptotically, for large l, by [16]
$$L(x^l) \approx -\log f\!\left(x^l;\hat\theta(x^l)\right) + \frac{k}{2}\log l, \qquad (8)$$
where k is the number of parameters in the model; this is true for many MDL and Bayesian methods, including Rissanen’s original approach [14]. Because (8) penalizes models with many parameters, overfitting is avoided even if we consider an infinite collection of models. While (8) is often used for model selection, it is not accurate enough for our purposes, and we use (5) directly. However, (8) is useful for discussion and analysis.
The above approach can be seen as a generalization to real-valued data of the approach in [1]:
Definition 1.
A sequence is atypical if it can be described (coded) with fewer bits in itself rather than using the (optimum) code for typical sequences.
There is a further difference from Bayes (2), which is more philosophical than computational and practical. When we describe the problem as a hypothesis test problem as in (2), we are asking which hypothesis is correct (which is also the basis of Bayesian model selection [9]). However, in stating the problem as a description length problem, we are just asking if we can find a shorter description length, not if a model is correct. By considering a very large class of alternative models (most pronounced when we use universal source coding), none might fit very well, none might be even close to the actual model, but we might find one that fits better than the typical model, and that is sufficient for a sequence to be atypical. We have no idea what atypical data might look like, so we cast a very wide net.
1.2. Alternative Approaches
Atypicality has many applications: Anomaly detection, outlier detection, data discovery, novelty detection, transient detection, search for ‘interesting’ data etc. What all of these applications have in common is that we seek data that is unusual in some way, and atypicality is a general method for finding such data. Each of these applications has specific alternative methods, and we will discuss atypicality compared to other approaches in some of these applications.
There is a very large existing literature on anomaly detection [17,18,19,20,21,22,23,24,25,26]; the paper [17] gives an overview up to 2009. What is characteristic of all methods, as far as we know, is that they look for data that do not fit the characteristics of normal data, either statistically or according to some other measure. From [17]: “At an abstract level, an anomaly is defined as a pattern that does not conform to expected normal behavior.” Atypicality takes a different approach. Atypicality looks for data where an alternative model fits the data better. Atypicality will still find the first type of anomalies according to [17], but it will also be able to find a wider, more subtle class of anomalies. As a simple example, suppose the normal data is iid Gaussian with zero mean and variance $\sigma^2$. The anomalous data is also Gaussian with zero mean and the same variance $\sigma^2$, but the noise is colored. This is in no way anomalous according to the definition in [17]. However, by coding data with a linear predictive coder (see Section 3.2 later) atypicality will detect the anomalous sequence. In [27] we in fact prove that atypicality is exactly optimum for discrete data in the class of finite state machines. While we do not have a similar theorem for real-valued data, this indicates the advantages of atypicality for anomaly detection.
Another advantage of atypicality is that it can straightforwardly be applied to data of unknown/variable length, as discussed in Section 1.1. All existing anomaly detection algorithms we know of use fixed windows, so they cannot make decisions between long, slightly unusual sequences, and short, very unusual sequences; atypicality can. On the other hand, atypicality cannot find single, anomalous samples—outliers: To be able to find a new model for anomalous data, it needs a collection of samples. For this kind of application, more traditional methods must be used.
A type of detection problem closely related to anomaly detection is transient detection [28,29,30,31,32,33,34]. In many signal processing applications, it is of interest to detect short-duration statistical changes in observed data. For a parametric class of probability distributions, an unknown starting time, and an unknown duration, the following two hypotheses are considered: under the null hypothesis the data follows the nominal (noise) distribution throughout, while under the alternative the data follows a different distribution from the parametric class during the unknown transient interval.
If the distributions under both hypotheses are known, the Page test is optimal for this in the sense that, given an average wait between false alarms, it minimizes the worst-case average delay to detection [31]. However, in many applications there is either no information about the transient distribution, or it varies from one transient signal to another. In this case, it has been shown that the Variable Threshold Page (VTP) procedure, a GLRT-based variant, gives reliable results [29,31]. There are also other approaches to transient detection, based on Nuttall’s power-law detector, that are often used in the literature [29,30]. Other methods are [32,33,34]. In general, atypicality will outperform these methods since it not only allows a more comprehensive class of models, but can also take advantage of powerful signal processing methods such as filterbanks and linear prediction to find transient signals with varied statistics.
Finally, we will mention change point detection and quickest change detection [35,36,37,38,39,40,41,42]. The goal here is to find a point in time where the distribution of data changes from one to another. The difference from atypicality is that in atypicality, subsequences have both a start and end point. In principle one could use atypicality for change point detection, but since it is not optimized for this application, the comparison is not that relevant, and atypicality might not perform well. We refer to [35,36] for how to use MDL for change point detection.
2. Minimum Description Length Methods
Above we have argued for using (5) as a codelength. The issue with this method is how to initialize the recursion. In (6) this is solved by using the typical distribution for the first sample, but in general, with more than one parameter, $\hat\theta(x^i)$ may not be defined until i becomes larger than 1. The further issue is that even when $\hat\theta(x^i)$ is defined, the estimate might be poor for small i, and using this in (5) can give very long codelengths, see Figure 1 below.
Our solution to the first issue is to encode with increasingly complex models as i increases; we therefore only have to use the default distribution for the very first sample. Since we are not interested in finding a specific model, this is not problematic in atypicality. Our solution to the second issue is, rather than using the ML estimate for encoding as though it is the actual parameter value, to use it as an uncertain estimate of $\theta$. We then take this uncertainty into account in the codelength. This is similar to the idea of using confidence intervals in statistical estimates [43]. Below we introduce two methods using this general principle. This differs from the sequentially normalized maximum likelihood method [44], which modifies the encoder itself.
2.1. Sufficient Statistic Method (SSM)
As explained above, our approach to predictive MDL is to introduce uncertainty in the estimate of $\theta$. Our first methodology is best explained through a simple example. Suppose our model is $x\sim\mathcal{N}(\mu,\sigma^2)$, with $\sigma^2$ known. The average $\hat\mu_n = \frac{1}{n}\sum_{i=1}^{n}x_i$ is the ML estimate of $\mu$ at time n. We know that
$$\hat\mu_n \sim \mathcal{N}\!\left(\mu,\frac{\sigma^2}{n}\right).$$
We can re-arrange this as
$$\mu = \hat\mu_n - \frac{\sigma}{\sqrt{n}}Z,\qquad Z\sim\mathcal{N}(0,1).$$
Thus, given $\hat\mu_n$, we can think of $\mu$ as random $\mathcal{N}\!\left(\hat\mu_n,\frac{\sigma^2}{n}\right)$. Now
$$x_{n+1} = \mu + \sigma Z' \;\Longrightarrow\; x_{n+1}\mid\hat\mu_n \sim \mathcal{N}\!\left(\hat\mu_n,\sigma^2\left(1+\frac{1}{n}\right)\right),$$
which we can use as a coding distribution for $x_{n+1}$. This compares to the $\mathcal{N}(\hat\mu_n,\sigma^2)$ that we would use in traditional predictive MDL. Thus, we have taken into account that the estimate of $\mu$ is uncertain for n small. The idea of thinking of the non-random parameter as random is very similar to the philosophical argument for confidence intervals [43].
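As a small numerical check of this example (a sketch under the stated model, with names of our choosing), the following compares the average codelength of the SSM coding distribution N(mu_hat, sigma^2(1+1/n)) with the plug-in distribution N(mu_hat, sigma^2) of ordinary predictive MDL over the first few samples.

```python
import math
import random

def nll_bits(x, mu, var):
    """-log2 of a Gaussian density: the idealized codelength of (3), without r."""
    return 0.5 * math.log2(2 * math.pi * var) + (x - mu) ** 2 / (2 * var) * math.log2(math.e)

def sequential_bits(x, sigma2, ssm=True):
    """Code x_2, x_3, ... given the running mean estimate (x_1 is handled by a
    default coder in the paper and is skipped here)."""
    bits, s = 0.0, x[0]
    for n in range(1, len(x)):
        mu_hat = s / n
        var = sigma2 * (1 + 1 / n) if ssm else sigma2   # SSM inflates the variance
        bits += nll_bits(x[n], mu_hat, var)
        s += x[n]
    return bits

random.seed(0)
sigma2, mu, trials, l = 1.0, 3.0, 2000, 10
ssm_avg = plugin_avg = 0.0
for _ in range(trials):
    x = [random.gauss(mu, math.sqrt(sigma2)) for _ in range(l)]
    ssm_avg += sequential_bits(x, sigma2, ssm=True) / trials
    plugin_avg += sequential_bits(x, sigma2, ssm=False) / trials
print(ssm_avg, plugin_avg)   # SSM gives a shorter average codelength for small n
```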
In order to generalize this example to more complex models, we take the following approach. Suppose $t(x^n)$ is a k-dimensional sufficient statistic for the k-dimensional $\theta$. Also suppose there exists some function s and a k-dimensional (vector) random variable V independent of $\theta$ so that
$$t(x^n) = s(\theta, V). \qquad (9)$$
We now assume that for every t and V in their respective support, (9) has a solution for $\theta$ so that we can write
$$\theta = g\!\left(t(x^n), V\right). \qquad (10)$$
The parameter $\theta$ is now a random variable (assuming g is measurable, clearly) with a pdf $f\!\left(\theta\mid t(x^n)\right)$. This then gives a distribution on $x_{n+1}$, i.e.,
$$f\!\left(x_{n+1}\mid x^n\right) = \int f(x_{n+1};\theta)\, f\!\left(\theta\mid t(x^n)\right)d\theta. \qquad (11)$$
The method has the following property:
Theorem 1.
The distribution of $x_{n+1}$ given by (11) is invariant to arbitrary parameter transformations.
This is a simple observation from the fact that (11) is an expectation, and that when $\theta$ is transformed, the distribution according to (10) is also transformed with the same function.
One concern is the way the method is described. Perhaps we could use different functions s and g and get a different result? In the following we will prove that the distribution of $\theta$ is independent of which s and g are used.
It is well-known [6,10] that if the random variable X has CDF F, then $F(X)$ has a uniform distribution (on $[0,1]$). Equivalently, $X = F^{-1}(U)$ for some uniform random variable U. We need to generalize this to n dimensions. Recall that for a continuous random variable [6]
$$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_{X_1}(x_1)\,f_{X_2\mid X_1}(x_2\mid x_1)\cdots f_{X_n\mid X_1,\ldots,X_{n-1}}(x_n\mid x_1,\ldots,x_{n-1})$$
whenever the conditional densities are defined. As an example, let $n=2$. Then the map $(x_1,x_2)\mapsto\left(F_{X_1}(x_1),F_{X_2\mid X_1}(x_2\mid x_1)\right)$ is a map from $\mathbb{R}^2$ onto $[0,1]^2$, and has uniform distribution on $[0,1]^2$. Here $F_{X_1}(x_1)$ is continuous in $x_1$ and $F_{X_2\mid X_1}(x_2\mid x_1)$ is continuous in $x_2$.
We can write $X_1 = F_{X_1}^{-1}(U_1)$. For fixed $x_1$ we can also write $X_2 = F_{X_2\mid X_1}^{-1}(U_2\mid x_1)$ for those $x_1$ where $F_{X_2\mid X_1}$ is defined, and where the inverse function is only with respect to the argument before |. Then
$$X_2 = F_{X_2\mid X_1}^{-1}\!\left(U_2\mid F_{X_1}^{-1}(U_1)\right).$$
This gives the correct joint distribution on $(X_1,X_2)$: The marginal distribution of $X_1$ is correct, and the conditional distribution of $X_2$ given $X_1$ is also correct, and this is sufficient. Clearly $F_{X_2\mid X_1}$ is not defined for all $x_1$; the relationship should be understood as being valid for almost all $x_1$ and $u_2$. We can now continue like this for $n>2$. We will state this result as a lemma.
Lemma 2.
For any continuous random variable $X=(X_1,\ldots,X_n)$ there exists an n-dimensional uniform random variable $U=(U_1,\ldots,U_n)$, so that $X = h(U)$ for some function h.
Theorem 2.
Consider a model $t = s_1(\theta,V_1)$ and an alternative model $t = s_2(\theta,V_2)$. We make the following assumptions:
- 1.
The support of t is independent of θ and its interior is connected.
- 2.
The extended CDF of $V_i$ is continuous and differentiable.
- 3.
The function $s_i(\theta,\cdot)$ is one-to-one, continuous, and differentiable for fixed θ.
Then the distributions of θ given by $s_1$ and $s_2$ are identical.
Proof.
By Lemma 2 write $V_1 = h_1(U)$, $V_2 = h_2(U)$. Let u be the k-dimensional uniform pdf, i.e., $u(\mathbf{v})=1$ for $\mathbf{v}\in[0,1]^k$ and 0 otherwise, and let $s_i^{-1}$ denote the solution of $t = s_i(\theta,v)$ with respect to v, which is well-defined due to Assumption 3. We can then write the distribution of t in two ways as follows ([6]), due to the differentiability assumptions.
Due to Assumption 1 we can then conclude that the two expressions agree on the support of t, or
that the corresponding transformed uniform variables coincide. But both have range $[0,1]^k$, and it follows that they are equal. Therefore
if we then solve either equation for θ as a function of the uniform variable (for fixed t), we get exactly the same result, and therefore the same distribution. □
The assumptions of Theorem 2 are very restrictive, but we believe they are far from necessary. In [45] we proved uniqueness in the one-dimensional case under much weaker assumptions (e.g., no differentiability assumptions), but that proof is not easy to generalize to higher dimensions.
Corollary 3.
Let $t_1$ and $t_2$ be equivalent sufficient statistics for θ, and assume the equivalence map is a diffeomorphism. Then the distribution on θ given by the sufficient statistic approach is the same for $t_1$ and $t_2$.
Proof.
We have $t_1 = s_1(\theta,V_1)$ and $t_2 = s_2(\theta,V_2)$. By assumption, there exists a one-to-one map a so that $t_2 = a(t_1)$, thus $a(t_1) = s_2(\theta,V_2)$. Since the distribution of θ is independent of how the problem is stated, $t_1$ and $a(t_1)$ give the same distribution on θ. □
2.2. Normalized Likelihood Method (NLM)
The issue with the sufficient statistic method is that a sufficient statistic of the same dimension as the parameter vector can be impossible to find. We will therefore introduce a simpler method. Let the likelihood function of the model be $f(x^n;\theta)$. For a fixed $x^n$ we can consider this as a ‘distribution’ on $\theta$; the ML estimate is of course the most likely value under this distribution. To account for uncertainty in the estimate, we can instead try to use the whole of $f(x^n;\theta)$ to give a distribution on $\theta$, and then use this for prediction. In general $f(x^n;\theta)$ is not a probability distribution on $\theta$ as it does not integrate to 1. We can therefore normalize it to get a probability distribution
$$f(\theta\mid x^n) = \frac{f(x^n;\theta)}{\int f(x^n;\tilde\theta)\,d\tilde\theta}, \qquad (12)$$
if the integral is finite. For comparison, the Bayes posterior distribution is
$$f(\theta\mid x^n) = \frac{f(x^n;\theta)\,\pi(\theta)}{\int f(x^n;\tilde\theta)\,\pi(\tilde\theta)\,d\tilde\theta}.$$
If the support of $\theta$ has finite area, (12) is just the Bayes predictor with uniform prior. If the support of $\theta$ does not have finite area, we can get (12) as a limiting case when we take the limit of uniform distributions on finite sets that converge towards the support. This is the same way the ML estimator can be seen as a MAP estimator with uniform prior [46]. One can reasonably argue that if we have no further information about $\theta$, a uniform distribution seems reasonable, and it has indeed been used for MDL [47] as well as universal source coding ([10], Section 13.2). What the Normalized Likelihood Method does is simply extend this to the case when there is no proper uniform prior for $\theta$.
The method was actually implicitly mentioned as a remark by Rissanen in ([48], Section 3.2), but to our knowledge was never further developed; the main contribution of this paper is to introduce the method as a practical method. From Rissanen we also know the coding distribution for $x_{n+1}$:
$$f\!\left(x_{n+1}\mid x^n\right) = \frac{\int f(x^{n+1};\theta)\,d\theta}{\int f(x^n;\theta)\,d\theta}. \qquad (13)$$
Let us assume $\int f(x^n;\theta)\,d\theta$ becomes finite for $n\ge n_0$ (this is not always the case for small $n_0$; often n needs to be larger). The total codelength can then be written as
$$L(x^l) = -\log f_d\!\left(x^{n_0}\right) - \sum_{n=n_0}^{l-1}\log f\!\left(x_{n+1}\mid x^n\right), \qquad (14)$$
where $f_d$ is the default distribution, which for application in atypicality can be taken as the typical distribution. One might see this simply as a (generalized) Bayesian method. However, in general (12) is not a valid probability, and as mentioned in ([9], Section 3.4) an improper prior cannot be used for Bayesian model selection. But when implemented sequentially, as indicated in (14), it does give a valid codelength, because of the principle of sequentiality, central to coding.
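To make (12)–(14) concrete, here is a small numeric sketch for a model where the integrals are tractable: a Gaussian with unknown mean and known variance. It computes the NLM coding distribution (13) by numerical integration and checks it against the closed form N(mean(x), sigma^2(1+1/n)); the data values and integration limits are arbitrary choices for illustration.

```python
import numpy as np
from scipy import integrate, stats

sigma2 = 1.0
x = np.array([0.7, 1.9, 1.2, 0.4])       # observed x^n (n = 4), toy data
x_next = np.linspace(-2.0, 4.0, 7)        # candidate values of x_{n+1}

def integrated_likelihood(data):
    """Denominator of (12): integral of the likelihood over the unknown mean."""
    lik = lambda mu: np.prod(stats.norm.pdf(data, loc=mu, scale=np.sqrt(sigma2)))
    val, _ = integrate.quad(lik, -50, 50)
    return val

# NLM coding distribution (13): ratio of integrated likelihoods
num = np.array([integrated_likelihood(np.append(x, v)) for v in x_next])
nlm = num / integrated_likelihood(x)

# For this model the ratio works out to N(mean(x), sigma2*(1 + 1/n))
n = len(x)
closed_form = stats.norm.pdf(x_next, loc=x.mean(), scale=np.sqrt(sigma2 * (1 + 1 / n)))
print(np.max(np.abs(nlm - closed_form)))   # ~0: the two agree
```

Note that for this particular model the NLM coincides with the SSM coding distribution derived in Section 2.1, which is a useful sanity check of both methods.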
2.3. Examples
We will compare the different methods for a simple model. Assume our model is $x\sim\mathcal{N}(0,\sigma^2)$ with $\sigma^2$ unknown. The likelihood function is $f(x^n;\sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\!\left(-\frac{S_n}{2\sigma^2}\right)$. For $n\le 2$ we have $\int_0^\infty f(x^n;\sigma^2)\,d\sigma^2 = \infty$, but for $n\ge 3$
$$\int_0^\infty f(x^n;\sigma^2)\,d\sigma^2 = (2\pi)^{-n/2}\,\Gamma\!\left(\frac{n}{2}-1\right)\left(\frac{S_n}{2}\right)^{-\left(\frac{n}{2}-1\right)},$$
then
$$f\!\left(x_{n+1}\mid x^n\right) = \frac{1}{\sqrt{2\pi}}\,\frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n}{2}-1\right)}\,\frac{\left(S_n/2\right)^{\frac{n}{2}-1}}{\left(S_{n+1}/2\right)^{\frac{n-1}{2}}},$$
where $S_n = \sum_{i=1}^{n}x_i^2$. Thus, for coding, the two first samples would be encoded with the default distribution, and after that the above distribution is used. For the SSM, we note that $S_n$ is a sufficient statistic for $\sigma^2$ and that $S_n/\sigma^2\sim\chi^2_n$, i.e., $S_n = \sigma^2 V$ with $V\sim\chi^2_n$, which can be solved as $\sigma^2 = S_n/V$, in the notation of (9)–(10). This is a transformation of the $\chi^2_n$ distribution which can be easily found as [6]
$$f\!\left(\sigma^2\mid x^n\right) = \frac{(S_n/2)^{n/2}}{\Gamma(n/2)}\,(\sigma^2)^{-\frac{n}{2}-1}\exp\!\left(-\frac{S_n}{2\sigma^2}\right).$$
Now we have
$$f\!\left(x_{n+1}\mid x^n\right) = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\sqrt{\pi}\,\Gamma\!\left(\frac{n}{2}\right)}\,\frac{S_n^{n/2}}{\left(S_n + x_{n+1}^2\right)^{\frac{n+1}{2}}}. \qquad (15)$$
For comparison, the ordinary predictive MDL is
$$f\!\left(x_{n+1}\mid x^n\right) = \frac{1}{\sqrt{2\pi\hat\sigma_n^2}}\exp\!\left(-\frac{x_{n+1}^2}{2\hat\sigma_n^2}\right),\qquad \hat\sigma_n^2 = \frac{S_n}{n}, \qquad (16)$$
which is of a completely different form. To understand the difference, consider the codelength for $x_{n+1}$: for (16) it is $\frac{1}{2}\log\!\left(2\pi\hat\sigma_n^2\right) + \frac{x_{n+1}^2}{2\hat\sigma_n^2}\log e$, while for (15) it is, up to terms not depending on $x_{n+1}$, $\frac{n+1}{2}\log\!\left(1+\frac{x_{n+1}^2}{S_n}\right)$.
It can be seen that if $\hat\sigma_n^2$ is small and $x_{n+1}$ is large, the codelength for (16) is going to be large. But in the sufficient statistic method this is strongly attenuated due to the log in front of the ratio. Figure 1 shows this quantitatively in the redundancy sense. The redundancy is the difference between the codelength using true and estimated distributions. As can be seen, the CDF of the ordinary predictive MDL redundancy has a long tail, and this is taken care of by SSM.
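The following sketch reproduces the spirit of this comparison (and of Figure 1) by simulation, assuming the forms derived above: ordinary predictive MDL plugs in the ML variance estimate, while the SSM coding distribution is the scaled Student-t in (15). The redundancy is measured against the true distribution; the constants and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma, l, trials = 1.0, 20, 5000
red_plugin, red_ssm = [], []

for _ in range(trials):
    x = rng.normal(0.0, sigma, size=l)
    bits_true = bits_plugin = bits_ssm = 0.0
    S = x[0] ** 2 + x[1] ** 2             # the first two samples are default-coded
    for n in range(2, l):
        xn = x[n]
        bits_true += -stats.norm.logpdf(xn, 0.0, sigma) / np.log(2)
        # ordinary predictive MDL (16): plug in the ML variance estimate S/n
        bits_plugin += -stats.norm.logpdf(xn, 0.0, np.sqrt(S / n)) / np.log(2)
        # SSM (15): scaled Student-t with n degrees of freedom and scale sqrt(S/n)
        bits_ssm += -stats.t.logpdf(xn, df=n, scale=np.sqrt(S / n)) / np.log(2)
        S += xn ** 2
    red_plugin.append(bits_plugin - bits_true)
    red_ssm.append(bits_ssm - bits_true)

# The plug-in redundancy has a much heavier upper tail (cf. Figure 1)
print(np.percentile(red_plugin, [50, 99]), np.percentile(red_ssm, [50, 99]))
```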
3. Scalar Signal Processing Methods
In the following we will derive MDL for various scalar signal processing methods. We can take inspiration from signal processing methods generally used for source coding, such as linear prediction and wavelets; however, the methods have to be modified for MDL, as we use lossless coding, not lossy coding. As often in signal processing, the models are a (deterministic) signal in Gaussian noise. In a previous paper we have also considered non-Gaussian models [7]. All proofs are in Appendices.
3.1. Iid Gaussian Case
A natural extension of the examples considered in Section 2.1 is $x\sim\mathcal{N}(\mu,\sigma^2)$ with both $\mu$ and $\sigma^2$ unknown. Define $\hat\mu_n = \frac{1}{n}\sum_{i=1}^{n}x_i$ and $S_n = \sum_{i=1}^{n}(x_i-\hat\mu_n)^2$. Then the sufficient statistic method gives
$$f\!\left(x_{n+1}\mid x^n\right) = \frac{\Gamma\!\left(\frac{n}{2}\right)}{\Gamma\!\left(\frac{n-1}{2}\right)\sqrt{\pi\frac{n+1}{n}S_n}}\left(1+\frac{n\,(x_{n+1}-\hat\mu_n)^2}{(n+1)\,S_n}\right)^{-\frac{n}{2}}. \qquad (17)$$
This is a special case of the vector Gaussian model considered later, so we will not provide a proof.
3.1.1. Linear Transformations
The iid Gaussian case is a fundamental building block for other MDL methods. The idea is to find a linear transformation so that we can model the result as iid, and then use the iid Gaussian MDL. For example, in the vector case, suppose $\mathbf{x}_n$ is (temporally) iid, and let $\mathbf{y}_n = H\mathbf{x}_n$. If we then assume that the covariance of $\mathbf{y}_n$ is diagonal, we can use the iid Gaussian MDL on each component. Similarly, in the scalar case, we can use a filter instead of a matrix. Because of (4) we need to require H to be orthonormal: for any input $\mathbf{x}$ we then have $\|H\mathbf{x}\| = \|\mathbf{x}\|$, and in particular the codelength under the iid Gaussian default is independent of the actual H. We will see this approach in several cases in the following.
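A quick numeric check of this point (a sketch, with an arbitrary random orthonormal matrix): under the iid Gaussian default the codelength depends on the data only through its squared norm, which an orthonormal transform preserves.

```python
import numpy as np

def iid_gauss_bits(x, sigma2=1.0):
    """Codelength of x under the iid N(0, sigma2) default (the r term cancels)."""
    return 0.5 * len(x) * np.log2(2 * np.pi * sigma2) + np.sum(x ** 2) / (2 * sigma2) * np.log2(np.e)

rng = np.random.default_rng(0)
x = rng.normal(size=64)
H, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # a random orthonormal matrix
print(iid_gauss_bits(x), iid_gauss_bits(H @ x))  # identical: ||Hx|| = ||x||
```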
3.2. Linear Prediction
Linear prediction is fundamental to random processes. Write
$$x_n = \sum_{k=1}^{\infty} w_k x_{n-k} + e_n.$$
Then for most stationary random processes the resulting random process $\{e_n\}$ is uncorrelated, and hence in the Gaussian case iid, by the Wold decomposition [6]. It is therefore a widely used method for source coding, e.g., [13]. In practical coding, a finite prediction order M is used,
$$x_n = \sum_{k=1}^{M} w_k x_{n-k} + e_n.$$
Denote by $\sigma^2$ the power of $e_n$. Consider the simplest case with $M=1$: There are two unknown parameters, $w_1$ and $\sigma^2$. However, the minimal sufficient statistic has dimension three [49]. Therefore, we cannot use SSM; and even if we could, the distribution of the sufficient statistic is not known in closed form [49]. We therefore turn to the NLM.
We assume that $e_n$ is iid normally distributed with zero mean and variance $\sigma^2$,
$$f\!\left(x^n;\mathbf{w},\sigma^2\right) = (2\pi\sigma^2)^{-n/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(x_i-\mathbf{w}^T\bar{\mathbf{x}}_{i-1}\right)^2\right), \qquad (18)$$
where $\mathbf{w}=(w_1,\ldots,w_M)^T$ and $\bar{\mathbf{x}}_{i-1}=(x_{i-1},\ldots,x_{i-M})^T$. Define
$$R_n = \sum_{i=1}^{n}\bar{\mathbf{x}}_{i-1}\bar{\mathbf{x}}_{i-1}^T,\qquad \mathbf{r}_n = \sum_{i=1}^{n}x_i\,\bar{\mathbf{x}}_{i-1}. \qquad (19)$$
Then a simple calculation (completing the square) shows that
$$\sum_{i=1}^{n}\left(x_i-\mathbf{w}^T\bar{\mathbf{x}}_{i-1}\right)^2 = \left(\mathbf{w}-\hat{\mathbf{w}}_n\right)^T R_n\left(\mathbf{w}-\hat{\mathbf{w}}_n\right) + \tau_n,$$
where $\hat{\mathbf{w}}_n = R_n^{-1}\mathbf{r}_n$ and $\tau_n = \sum_{i=1}^{n}x_i^2 - \mathbf{r}_n^T R_n^{-1}\mathbf{r}_n$. Thus, integrating the likelihood over $\mathbf{w}$ and $\sigma^2$ gives (see Appendix A)
$$\int\!\!\int f\!\left(x^n;\mathbf{w},\sigma^2\right)d\mathbf{w}\,d\sigma^2 = (2\pi)^{-\frac{n-M}{2}}\,|R_n|^{-1/2}\,\Gamma\!\left(\frac{n-M}{2}-1\right)\left(\frac{\tau_n}{2}\right)^{-\left(\frac{n-M}{2}-1\right)},$$
and
$$f\!\left(x_{n+1}\mid x^n\right) = \frac{1}{\sqrt{\pi}}\,\sqrt{\frac{|R_n|}{|R_{n+1}|}}\;\frac{\Gamma\!\left(\frac{n-M-1}{2}\right)}{\Gamma\!\left(\frac{n-M}{2}-1\right)}\;\frac{\tau_n^{\frac{n-M}{2}-1}}{\tau_{n+1}^{\frac{n-M-1}{2}}}. \qquad (20)$$
Equation (20) is defined only once enough samples have been observed: the vector $\bar{\mathbf{x}}_{n}$ requires M past samples, and $R_n$ defined by (19) becomes full rank only when the sum contains at least M terms. Before the order M linear predictor becomes defined, the data needs to be encoded with other methods. Since in atypicality we are not seeking to determine the model of data, just whether a different model than the typical is better, we encode data with lower order linear predictors until the order M linear predictor becomes defined. So, the first sample is encoded with the default pdf. The second and third samples are encoded with the iid unknown-variance coder (15) (there is no issue in encoding some samples with SSM and others with NLM). Then the order 1 linear predictor takes over, and so on.
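The sketch below illustrates the sequential linear-predictive coder at a simplified level: at each step the prediction coefficients are re-estimated by least squares from past samples only, and the new sample is coded with a Gaussian whose variance is the running residual power. This is a plug-in stand-in for the exact NLM expression (20), and the staged start-up (default coder, then iid coder, then increasing predictor orders) is abbreviated to a single stand-in coder; names and constants are ours.

```python
import numpy as np

def lp_predictive_bits(x, M=2, var_floor=1e-6):
    """Sequential plug-in codelength for an order-M linear predictor.
    Simplified: samples before the predictor is defined are coded as iid N(0, 1)."""
    bits = 0.0
    for n in range(len(x)):
        if n < 2 * M + 1:
            mu, var = 0.0, 1.0                     # stand-in for the start-up coders
        else:
            # least-squares predictor coefficients from the past n samples only
            rows = np.array([x[i - M:i][::-1] for i in range(M, n)])
            targets = x[M:n]
            w, *_ = np.linalg.lstsq(rows, targets, rcond=None)
            resid = targets - rows @ w
            mu = x[n - M:n][::-1] @ w
            var = max(np.mean(resid ** 2), var_floor)
        bits += 0.5 * np.log2(2 * np.pi * var) + (x[n] - mu) ** 2 / (2 * var) * np.log2(np.e)
    return bits

rng = np.random.default_rng(2)
e = rng.normal(size=500)
ar = np.zeros(500)
for n in range(2, 500):                            # an AR(2) process: strongly colored
    ar[n] = 1.5 * ar[n - 1] - 0.8 * ar[n - 2] + e[n]
white = rng.normal(scale=ar.std(), size=500)
print(lp_predictive_bits(ar), lp_predictive_bits(white))  # colored signal codes much shorter
```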
3.3. Filterbanks and Wavelets
A popular approach to source coding is sub-band coding and wavelets [50,51,52]. The basic idea is to divide the signal into (perhaps overlapping) spectral sub-bands and then allocate different bitrates to each sub-band; the bitrate can be dependent on the power in the sub-band and auditory properties of the ear in for example audio coding. In MDL we need to do lossless coding, so this approach cannot be directly applied, but we can still use sub-band coding as explained in the following.
As we are doing lossless coding, we will only consider perfect reconstruction filterbanks [50,53]. Furthermore, in light of Section 3.1.1 we also consider only (normalized) orthogonal filterbanks [50,52].
The basic idea is that we split the signal into a variable number of sub-bands by putting the signal through the filterbank and downsampling. Then the output of each downsampled filter is coded with the iid Gaussian coder of Section 3.1 with an unknown mean and variance, which are specific to each sub-band. In order to understand how this works, consider a filterbank with two sub-bands. Assume that the signal is stationary zero mean Gaussian with power $\sigma^2$, and let the power at the output of sub-band 1 be $\sigma_1^2$ and of sub-band 2 be $\sigma_2^2$. Because the filterbank is orthogonal, we have $\sigma^2 = \frac{1}{2}\left(\sigma_1^2+\sigma_2^2\right)$. To give some intuition for why a sub-band coder can give shorter codelength, we use (8) to get the approximate codelengths
$$L_{\text{direct}} \approx \frac{l}{2}\log\!\left(2\pi e\sigma^2\right) + \frac{2}{2}\log l,\qquad L_{\text{sub}} \approx \frac{l}{4}\log\!\left(2\pi e\sigma_1^2\right) + \frac{l}{4}\log\!\left(2\pi e\sigma_2^2\right) + 2\cdot\frac{2}{2}\log\frac{l}{2}.$$
Since $\frac{1}{2}\left(\log\sigma_1^2+\log\sigma_2^2\right) \le \log\frac{\sigma_1^2+\sigma_2^2}{2} = \log\sigma^2$ (with equality only if $\sigma_1^2=\sigma_2^2$), the sub-band coder will result in a shorter codelength for sufficiently large l if the signal is non-white.
The above analysis is a stationary analysis for long sequences. However, when considering shorter sequences, we also need to consider the transient. The main issue is that output power will deviate from the stationary value during the transient, and this will affect the estimated power used in the sequential MDL. The solution is to transmit to the receiver the input to the filterbank during the transient, and only use the output of the filterbank once the filters have been filled up. It is easy to see that the system is still perfect reconstruction: Using the received input to the filterbank, the receiver puts this through the analysis filterbank. It now has the total sequence produced by the analysis filterbank, and it can then put that through the reconstruction filterbank. When using multilevel filterbanks, this has to be done at each level.
We assume the decoder knows which filters are used and the maximum depth D used. In principle the encoder could now search over all trees of depth at most D. The issue is that there is an astonishingly large number of such trees; for example, for $D=4$ there are 676 such trees. Instead of choosing the best, we can use the idea of the CTW [1,54,55] and weigh in each node: Suppose that after passing the signal $x_S$ of an internal node S through the low-pass and high-pass filters and the downsamplers, $x_{S,lp}$ and $x_{S,hp}$ are produced in the children nodes of S. The weighted probability of $x_S$ in the internal node S will be
$$P_w(x_S) = \frac{1}{2}P(x_S) + \frac{1}{2}P_w\!\left(x_{S,lp}\right)P_w\!\left(x_{S,hp}\right),$$
which is a good coding distribution for both a memoryless source and a source with memory [54,55].
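Below is a hedged sketch of this weighting with a Haar (orthonormal) two-band split. Each node is scored with the approximate codelength (8) for an iid zero-mean Gaussian with unknown variance, and the parent combines its direct codelength with its children’s weighted codelengths exactly as in the formula above; the exact coders and the transient handling described earlier are omitted, and the filter choice is an assumption for illustration.

```python
import numpy as np

def iid_unknown_var_bits(x):
    """Approximate codelength (8) for iid zero-mean Gaussian with unknown variance."""
    n = len(x)
    var = max(np.mean(x ** 2), 1e-12)
    return 0.5 * n * np.log2(2 * np.pi * np.e * var) + 0.5 * np.log2(n)

def haar_split(x):
    """One level of an orthonormal Haar analysis filterbank (even length assumed)."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def weighted_bits(x, depth):
    """CTW-like weighting: L_w = -log2( 1/2 * 2^{-L_direct} + 1/2 * 2^{-L_children} )."""
    direct = iid_unknown_var_bits(x)
    if depth == 0 or len(x) < 4:
        return direct
    lo, hi = haar_split(x)
    children = weighted_bits(lo, depth - 1) + weighted_bits(hi, depth - 1)
    m = min(direct, children)
    return m - np.log2(0.5 * 2.0 ** (m - direct) + 0.5 * 2.0 ** (m - children))

rng = np.random.default_rng(3)
t = np.arange(1024)
tone = 0.5 * np.sin(2 * np.pi * 0.23 * t) + 0.1 * rng.normal(size=1024)
noise = rng.normal(scale=tone.std(), size=1024)
print(weighted_bits(tone, depth=4), weighted_bits(noise, depth=4))  # the tone codes shorter
```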
4. Vector Case
We now assume that a vector sequence $\mathbf{x}_n\in\mathbb{R}^M$, $n=1,\ldots,l$, is observed. The vector case allows for a richer set of models and more interesting data discovery than the scalar case, for example atypical correlation between multiple sensors. It can also be applied to images [56], and to scalar data by dividing it into blocks. That is particularly useful for the DFT, Section 4.4.
A specific concern is initialization. Applying sequential coding verbatim to the vector case means that the first vector needs to be encoded with the default coder, but this means the default coder influences the codelength too much. Instead we suggest encoding the first vector as a scalar signal using the scalar Gaussian coder (unknown variance→unknown mean/variance). That way only the first component of the first vector needs to be encoded with the default coder.
4.1. Vector Gaussian Case with Unknown Mean
First assume the mean $\boldsymbol{\mu}$ is unknown but the covariance $\Sigma$ is given. We define $\hat{\boldsymbol{\mu}}_n = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i$. We first consider the NLM. Integrating the likelihood over $\boldsymbol{\mu}$ and forming the ratio (13), we can write
$$f\!\left(\mathbf{x}_{n+1}\mid\mathbf{x}^n\right) = \mathcal{N}\!\left(\mathbf{x}_{n+1};\,\hat{\boldsymbol{\mu}}_n,\,\frac{n+1}{n}\Sigma\right). \qquad (21)$$
It turns out that in this case, the SSM gives the same result.
4.2. Vector Gaussian Case with Unknown Σ
Assume $\mathbf{x}_n\sim\mathcal{N}(\mathbf{0},\Sigma)$ where the covariance matrix $\Sigma$ is unknown:
$$f\!\left(\mathbf{x}^n;\Sigma\right) = (2\pi)^{-\frac{nM}{2}}|\Sigma|^{-\frac{n}{2}}\exp\!\left(-\frac{1}{2}\operatorname{tr}\!\left(\Sigma^{-1}S_n\right)\right),$$
where $S_n = \sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^T$.
In order to find the MDL using SSM, notice that we can write $\mathbf{x}_i = \Sigma^{1/2}\mathbf{z}_i$ with $\mathbf{z}_i\sim\mathcal{N}(\mathbf{0},I)$, where $\Sigma^{1/2}$ is some matrix that satisfies $\Sigma^{1/2}\left(\Sigma^{1/2}\right)^T = \Sigma$. A sufficient statistic for $\Sigma$ is $S_n$.
Let $V = \sum_{i=1}^{n}\mathbf{z}_i\mathbf{z}_i^T\sim\mathcal{W}_M(I,n)$. Then we can solve $S_n = \Sigma^{1/2}V\left(\Sigma^{1/2}\right)^T$ and $\Sigma = S_n^{1/2}V^{-1}\left(S_n^{1/2}\right)^T$. Since $V^{-1}$ has Inverse-Wishart distribution, one can write $\Sigma\mid S_n\sim\mathcal{W}_M^{-1}(S_n,n)$. Using this distribution we calculate in Appendix B that
$$f\!\left(\mathbf{x}_{n+1}\mid\mathbf{x}^n\right) = \frac{\Gamma_M\!\left(\frac{n+1}{2}\right)}{\pi^{M/2}\,\Gamma_M\!\left(\frac{n}{2}\right)}\;\frac{|S_n|^{n/2}}{\left|S_n+\mathbf{x}_{n+1}\mathbf{x}_{n+1}^T\right|^{\frac{n+1}{2}}}, \qquad (22)$$
where $\Gamma_M$ is the multivariate gamma function [57].
On the other hand, using the normalized likelihood method we have
$$\int f\!\left(\mathbf{x}^n;\Sigma\right)d\Sigma = (2\pi)^{-\frac{nM}{2}}\,2^{\frac{(n-M-1)M}{2}}\,\Gamma_M\!\left(\frac{n-M-1}{2}\right)|S_n|^{-\frac{n-M-1}{2}},$$
from which
$$f\!\left(\mathbf{x}_{n+1}\mid\mathbf{x}^n\right) = \frac{1}{\pi^{M/2}}\,\frac{\Gamma_M\!\left(\frac{n-M}{2}\right)}{\Gamma_M\!\left(\frac{n-M-1}{2}\right)}\;\frac{|S_n|^{\frac{n-M-1}{2}}}{|S_{n+1}|^{\frac{n-M}{2}}}. \qquad (23)$$
4.3. Vector Gaussian Case with Unknown Mean and Σ
Assume $\mathbf{x}_n\sim\mathcal{N}(\boldsymbol{\mu},\Sigma)$ where both mean and covariance matrix are unknown:
$$f\!\left(\mathbf{x}^n;\boldsymbol{\mu},\Sigma\right) = (2\pi)^{-\frac{nM}{2}}|\Sigma|^{-\frac{n}{2}}\exp\!\left(-\frac{1}{2}\sum_{i=1}^{n}(\mathbf{x}_i-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}_i-\boldsymbol{\mu})\right).$$
It is well-known [46] that sufficient statistics are $\hat{\boldsymbol{\mu}}_n = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i$ and $S_n = \sum_{i=1}^{n}(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_n)(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_n)^T$. Let $\Sigma^{1/2}$ be a square root of $\Sigma$, i.e., $\Sigma^{1/2}\left(\Sigma^{1/2}\right)^T = \Sigma$. We can then write
$$\hat{\boldsymbol{\mu}}_n = \boldsymbol{\mu} + \frac{1}{\sqrt{n}}\Sigma^{1/2}\mathbf{z},\qquad S_n = \Sigma^{1/2}V\left(\Sigma^{1/2}\right)^T,$$
where $\mathbf{z}\sim\mathcal{N}(\mathbf{0},I)$, $V\sim\mathcal{W}_M(I,n-1)$, $\mathbf{z}$ and V are independent, and $\mathcal{W}_M$ is the Wishart distribution. We solve the second equation with respect to $\Sigma$ as in Section 4.2 and the first with respect to $\boldsymbol{\mu}$, to get
$$\Sigma = S_n^{1/2}V^{-1}\left(S_n^{1/2}\right)^T,\qquad \boldsymbol{\mu} = \hat{\boldsymbol{\mu}}_n - \frac{1}{\sqrt{n}}\Sigma^{1/2}\mathbf{z},$$
where $S_n^{1/2}$ is a square root of $S_n$. This gives explicit distributions for $\boldsymbol{\mu}$ and $\Sigma$ given $(\hat{\boldsymbol{\mu}}_n,S_n)$.
Using these distributions, we calculate in Appendix C the corresponding SSM coding distribution, and similarly the NLM coding distribution.
These are very similar to the case of known mean, Section 4.2. We require one more sample before the distributions become well-defined, and $S_n$ is defined differently.
4.4. Sparsity and DFT
We can specify a general method as follows. Let $\{\boldsymbol{\psi}_1,\ldots,\boldsymbol{\psi}_M\}$ be an orthonormal basis of $\mathbb{R}^M$ and write the signal model as
$$\mathbf{x} = \sum_{i=1}^{N} s_{j_i}\boldsymbol{\psi}_{j_i} + \mathbf{w}.$$
Here N is the number of basis vectors used, and $j_1,\ldots,j_N$ their indices. The signal coefficients are iid Gaussian, the noise is iid Gaussian, and both variances are unknown. If we let $c_i = \boldsymbol{\psi}_i^T\mathbf{x}$ be the transform coefficients and J the indices of the signal components, then the coefficients with $i\notin J$ contain only noise, while those with $i\in J$ contain signal plus noise.
Thus the noise coefficients can be encoded with the scalar Gaussian encoder of Section 3.1, while the signal coefficients can be encoded with a Gaussian encoder for the (larger) unknown signal-plus-noise variance, derived using the SSM.
Now we need to choose which coefficients to take as signal components and inform the decoder. The set J can be communicated to the decoder by sending a sequence of indicators encoded with the universal encoder of ([10], Section 13.2), requiring approximately $\log\binom{M}{N}$ bits. The optimum set can in general only be found by trying all sets J and choosing the one with the shortest codelength, which is infeasible. A heuristic approach is to find the N components with maximum power when calculated over the whole blocklength l (the decoder does not need to know how J was chosen, only what J is; it is therefore fine to use the power at the end of the block). What still remains is how to choose N. It seems computationally feasible to start with $N=1$ and then increase N by 1 until the codelength no longer decreases, since most of the calculations for N can be reused for $N+1$.
We can apply this in particular when the basis is a DFT matrix. In light of Section 3.1.1 we need to use the normalized form of the DFT. The complication is that the output is complex, i.e., the M real inputs result in M complex outputs, or 2M real outputs. Therefore, care has to be taken with the symmetry properties of the output. Another option is to use the DCT instead, which is well-developed and commonly used for compression.
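The sketch below combines the greedy choice of N with the orthonormal DFT: coefficients are ranked by power, and N grows until the total codelength (signal bins, noise bins, and the cost of describing J) stops decreasing. The codelengths use the asymptotic form (8) rather than the exact SSM/NLM coders, and the conjugate-symmetry bookkeeping is ignored, so this is only an illustration of the selection logic under those assumptions.

```python
import numpy as np
from math import lgamma, log2

def gauss_block_bits(y, n_params=1):
    """Approximate codelength (8): iid zero-mean Gaussian with unknown variance."""
    n = len(y)
    var = max(np.mean(np.abs(y) ** 2), 1e-12)
    return 0.5 * n * log2(2 * np.pi * np.e * var) + 0.5 * n_params * log2(n)

def log2_binomial(m, n):
    """Bits to describe which n of m coefficients are signal: log2 C(m, n)."""
    return (lgamma(m + 1) - lgamma(n + 1) - lgamma(m - n + 1)) / np.log(2)

def sparse_dft_bits(x):
    """Greedy choice of the number N of 'signal' DFT bins by total codelength."""
    c = np.fft.fft(x) / np.sqrt(len(x))             # orthonormal DFT
    order = np.argsort(-np.abs(c))                   # bins ranked by power
    best = gauss_block_bits(c)                       # N = 0: everything is noise
    for N in range(1, len(c)):
        J, rest = order[:N], order[N:]
        bits = gauss_block_bits(c[J]) + gauss_block_bits(c[rest]) + log2_binomial(len(c), N)
        if bits >= best:
            break
        best = bits
    return best

rng = np.random.default_rng(4)
t = np.arange(256)
sig = 2.0 * np.cos(2 * np.pi * 0.11 * t) + rng.normal(size=256)
c = np.fft.fft(sig) / np.sqrt(len(sig))
print(gauss_block_bits(c), sparse_dft_bits(sig))  # the sparse coder beats coding all bins as one block
```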
5. Experimental Results
5.1. Transient Detection Using Hydrophone Recordings
As an example of the application of atypicality, we will consider transient detection [28]. In transient detection, a sensor records a signal that is pure noise most of the time, and the task is to find the sections of the signal that are not noise. In our terminology, the typical signal is noise, and the task is to find the atypical parts.
As data we used hydrophone recordings from a sensor in the Hawaiian waters off Oahu, the Station ALOHA Cabled Observatory (ACO) [58]. The data used for this paper were collected (with a sampling frequency of 96 kHz, later downsampled to 8 kHz) during a proof module phase of the project conducted between February 2007 and October 2008. The data was pre-processed by differentiation ($y_n = x_n - x_{n-1}$) to remove a non-informative mean component.
The principal goal with these two years of data is to locate whale vocalizations. Fin whales (22 m, up to 80 tons) and sei whales (12–18 m, up to 24.6 tons) are known, by means of visual and acoustic surveys, to be present in the Hawaiian Islands during winter and spring months, but migration patterns in Hawaii are poorly understood [58].
Ground truth has been established by manual detection, i.e., visual inspection of the spectrogram by a human operator. 24 h of manual detections for both the 20 Hz and the 20–35 Hz variable calls were recorded for each of the following (randomly chosen) dates: 1 March 2007, 17 November 2007, 29 May 2008, 22 August 2008, 4 September 2008 and 9 February 2008 [58].
In order to analyze the performance of different detectors on such data, the measures ’Precision’ and ’Recall’ are first defined as
$$\text{Recall} = \frac{\text{number of correct detections}}{\text{number of expected detections}},\qquad \text{Precision} = \frac{\text{number of correct detections}}{\text{number of detections}},$$
where Recall measures the probability of correctly obtaining the expected vocalizations and Precision measures the probability that a detection obtained by the detector is a correct vocalization. The Precision versus Recall curve shows the detector’s ability to obtain vocalizations as well as the accuracy of these detections [58].
In order to compare our atypicality method with alternative approaches in transient detection, we compare its performance with Variable Threshold Page (VTP) which outperforms other similar methods in detection of non-trivial signals [31].
For the atypicality approach, we need a typical and an atypical coder. The typical signal is pure noise, which, however, is not necessarily white: It consists of background noise, wave motion, wind and rain. We therefore used a linear predictive coder. The order of the linear predictive coder was globally set to 10 as a compromise between performance and computational speed. An order above 10 showed no significant decrease in codelength, while increasing computation time. The prediction coefficients were estimated for each 5 min segment of data. It seems unreasonable to expect the prediction coefficients to be globally constant due to for example variations in weather, but over short length segments they can be expected to be constant. Of course, a 5 min segment could contain atypical data and that would result in incorrect typical prediction coefficients. However, for this particular data we know (or assume) that atypical segments are of very short duration, and therefore will affect the estimated coefficients very little. This cannot be used for general data sets, only for data sets where there is a prior knowledge (or assumption) that atypical data are rare and short. Otherwise the typical coder should be trained on data known to be typical as in [1] or by using unsupervised atypicality, which we are developing for a future paper.
For the atypical coder, we implemented all the scalar methods of Section 3 in addition to the DFT, Section 4.4, with optimization over blocklength. Let $x^l$ be a subsequence of length l to be tested for atypicality, and suppose $L_t(x^l)$ and $L_a(x^l)$ are the typical codelength and the atypical codelength of the sequence, respectively. Note that $L_a(x^l) = L_f(x^l) + \log^* l$, where f is any encoder of Section 3 and Section 4.4, and $\log^* l$ is the number of bits to tell the decoder the length of the atypical subsequence, as discussed in Section 1.1, see also [1,59]. Then for every sample of data we calculate
$$\Delta = \max\left( L_t(x^l) - L_a(x^l)\right), \qquad (24)$$
where the maximum is over all subsequences $x^l$ containing the sample,
and the atypicality criterion would be $\Delta > \tau$ for some threshold $\tau$ (which does not need to be chosen prior to running the algorithm, since the larger $\Delta$ is, the more atypical the sample). Please note that the threshold $\tau$ can be seen as the length of the header the encoder uses to tell the decoder an atypical sequence is next. Calculating $\Delta$ requires examining every subsequence (perhaps up to a maximum length). Because the coders (e.g., (5)) are recursive, we can efficiently calculate $L(x^{l+1})$ from $L(x^l)$, so the complexity is not prohibitive. Still, for a large dataset (i.e., big data), direct implementation of the atypicality search is too computationally complex; so instead, similar to [59], we propose a tree-structured search algorithm in which discovery of atypical sequences (in this case, whale vocalizations) is performed in stages. First, in the coarse search, a tree-structured division of data is considered such that at each level i, data is divided into non-overlapping blocks of length $2^i$; then for each block typical and atypical codelengths are compared. Obviously, due to the non-overlapping division some atypical sequences are missed, and the worst case is if an atypical sequence of length l is divided equally into two consecutive non-overlapping blocks of length $l/2$. However, each of these sequences of length $l/2$ might be detected at the next level down. The issue is that the complexity penalty per sample from (8) is about $\frac{k\log l}{2l}$, which is decreasing in l. Thus, a sequence of length l may be atypical, but each of the length-$l/2$ halves may not be. This can be compensated by repeating every block once and encoding this double-length block. By experimentation we have found that this gives a very low chance of missing an atypical subsequence. On the other hand, it does give false positives, because an exactly repeated block clearly has a strong (false) pattern. This is not a big issue, as these false positives are eliminated during the next stage.
After the coarse search, the next stage is the fine search, in which the blocks flagged by the coarse search are expanded and every subsequence of the expanded block is tested in an exhaustive search, which eliminates false positives. The final stage is segmentation, where the exact start and end points of atypical sequences are determined by minimizing the total codelength of the whole sequence of data. Figure 2 shows the Precision vs. Recall curves for both atypicality and VTP.
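A skeleton of the staged search might look as follows; the typical and atypical coders are passed in as functions (for instance, the linear-predictive typical coder and the minimum over the Section 3 and Section 4.4 coders plus log* l), and the segmentation stage is omitted. The block lengths, padding, and thresholds are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def coarse_search(x, typical_bits, atypical_bits, min_level=6, max_level=10, tau=20.0):
    """Flag non-overlapping blocks of length 2^i whose repeated block codes shorter
    under the atypical coder than under the typical coder by more than tau bits."""
    flagged = []
    for i in range(min_level, max_level + 1):
        block = 2 ** i
        for start in range(0, len(x) - block + 1, block):
            seg = np.concatenate([x[start:start + block]] * 2)   # repeat the block once
            if typical_bits(seg) - atypical_bits(seg) > tau:
                flagged.append((start, block))
    return flagged

def fine_search(x, flagged, typical_bits, atypical_bits, tau=20.0, pad=2):
    """Exhaustively score every subsequence inside an expanded flagged block."""
    found = []
    for start, block in flagged:
        lo, hi = max(0, start - pad * block), min(len(x), start + (pad + 1) * block)
        for a in range(lo, hi):
            for b in range(a + 16, hi + 1):          # enforce some minimum length
                gain = typical_bits(x[a:b]) - atypical_bits(x[a:b])
                if gain > tau:
                    found.append((a, b, gain))
    return found
```

In a full implementation the inner loops would reuse the recursive codelength updates mentioned above rather than re-coding each subsequence from scratch.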
5.2. Anomaly Detection Using Holter Monitoring Data
As another example of an atypicality application, we consider an anomaly detection problem. We consider data obtained by Holter monitoring, i.e., a continuous tape recording of a patient’s ECG for 24 h. We use the MIT-BIH Normal Sinus Rhythm Database (nsrdb), which is provided by PhysioNet [60]. Even though the subjects included in this database were found to have had no significant persistent arrhythmias, there still existed arrhythmic beats and patterns to look for [60]. We apply atypicality to find interesting parts of the dataset.
Since the data is assumed to be ‘Normal Sinus Rhythm’, a Gaussian model with unknown mean and variance is assumed for the typical data. For atypical encoding, we used the same methodology as in the previous section. As can be seen in Figure 3, atypicality as an anomaly detector was able to find two major atypical segments, both of which contained multiple supraventricular beats and ventricular contractions (provided by the HRV annotation files, PhysioNet [60]). Based on the data annotation these two segments were the only portions of the data that contained abnormal beats and rhythms, which shows the efficacy of the atypicality framework. For comparison we included VTP as a transient detection method and the pruned exact linear time (PELT) method [37] as a change-point detection algorithm. As can be seen, VTP and PELT detected only one of the anomalous segments, while atypicality detected both.
6. Conclusions
Atypicality is a method for finding rare, interesting snippets in big data. It can be used for anomaly detection, data mining, transient detection, and knowledge extraction among other things. The current paper extended atypicality to real-valued data. It is important here to notice that discrete-valued and real-valued atypicality is one theory. Atypicality can therefore be used on data that are of mixed type. One advantage of atypicality is that it directly applies to sequences of variable length. Another advantage is that there is only one parameter that regulates atypicality, the single threshold parameter $\tau$, which has the concrete meaning of the logarithm of the frequency of atypical sequences. This contrasts with other methods that have multiple parameters.
Atypicality becomes really interesting in combination with machine learning. First, atypicality can be used to find what is not learned in machine learning. Second, for many data sets, machine learning is needed to find the typical coder. In the experiments in this paper, we did not need machine learning because the typical data was pure noise. But in many other types of data, e.g., ECG (electrocardiogram), ‘normal’ data is highly complex, and the optimum coder has to be learned with machine learning. This is a topic for future research.
Appendix A. Linear Prediction
We showed
therefore using NLM we have
where and . Hence
where .
Appendix B. Vector Gaussian Case: Unknown Σ
We showed that has Inverse-Wishart distribution where , hence
and since
therefore we have
where and , and in Equations (A) and (B) we changed the variable and respectively and is the multivariate Gamma function.
Appendix C. Vector Gaussian Case: Unknown Mean and
We showed that and where and . Now using Bayes we can write the joint pdf as . Define ,
where
Now since , by defining we can write
Author Contributions
Conceptualization, E.S. and A.H.-M.; Methodology, E.S. and A.H.-M.; software, E.S.; Validation, E.S.; Formal analysis, E.S. and A.H.-M.; Investigation, E.S.; resources, A.H.-M.; Data curation, E.S.; Writing—original draft preparation, E.S. and A.H.-M.; Writing—review and editing, E.S. and A.H.-M.; Visualization, E.S.; Supervision, A.H.-M.; Project administration, A.H.-M.; Funding acquisition, A.H.-M.
Funding
This work was supported in part by the NSF grants EECS-1546980, CCF-1434600 and the NSF Center for Science of Information (CSoI), and by Shenzhen Peacock Plan under Grant No. KQTD2015033114415450.
Conflicts of Interest
The authors declare no conflict of interest.
References
- 1. Høst-Madsen A., Sabeti E., Walton C. Data Discovery and Anomaly Detection Using Atypicality: Theory. IEEE Trans. Inf. Theory. 2016 submitted.
- 2. Chandola V., Banerjee A., Kumar V. Anomaly Detection for Discrete Sequences: A Survey. IEEE Trans. Knowl. Data Eng. 2012;24:823–839. doi: 10.1109/TKDE.2010.235.
- 3. Li Y., Nitinawarat S., Veeravalli V.V. Universal Outlier Hypothesis Testing. IEEE Trans. Inf. Theory. 2014;60:4066–4082. doi: 10.1109/TIT.2014.2317691.
- 4. Li Y., Nitinawarat S., Veeravalli V.V. Universal Outlier Detection; Proceedings of the Information Theory and Applications Workshop (ITA); San Diego, CA, USA. 10–15 February 2013; pp. 1–5.
- 5. Li Y., Nitinawarat S., Veeravalli V.V. Universal Sequential Outlier Hypothesis Testing; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Honolulu, HI, USA. 29 June–4 July 2014; pp. 3205–3209.
- 6. Grimmett G.R., Stirzaker D.R. Probability and Random Processes. 3rd ed. Oxford University Press; Oxford, UK: 2001.
- 7. Sabeti E., Host-Madsen A. Atypicality for the Class of Exponential Family; Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton); Monticello, IL, USA. 27–30 September 2016.
- 8. Kay S.M. Fundamentals of Statistical Signal Processing, Volume II: Detection Theory. Prentice-Hall; Upper Saddle River, NJ, USA: 1993.
- 9. Bishop C.M. Pattern Recognition and Machine Learning. Springer; Berlin, Germany: 2006.
- 10. Cover T., Thomas J. Elements of Information Theory. 2nd ed. John Wiley; Hoboken, NJ, USA: 2006.
- 11. Ziv J., Lempel A. A Universal Algorithm for Sequential Data Compression. IEEE Trans. Inf. Theory. 1977;23:337–343. doi: 10.1109/TIT.1977.1055714.
- 12. Ziv J., Lempel A. Compression of Individual Sequences via Variable-Rate Coding. IEEE Trans. Inf. Theory. 1978;24:530–536. doi: 10.1109/TIT.1978.1055934.
- 13. Ghido F., Tabus I. Sparse Modeling for Lossless Audio Compression. IEEE Trans. Audio Speech Lang. Proc. 2013;21:14–28. doi: 10.1109/TASL.2012.2211014.
- 14. Rissanen J. A Universal Prior for Integers and Estimation by Minimum Description Length. Ann. Stat. 1983;11:416–431. doi: 10.1214/aos/1176346150.
- 15. Kostina V. Data Compression With Low Distortion and Finite Blocklength. IEEE Trans. Inf. Theory. 2017;63:4268–4285. doi: 10.1109/TIT.2017.2676811.
- 16. Rissanen J. Stochastic Complexity and Modeling. Ann. Stat. 1986;14:1080–1100. doi: 10.1214/aos/1176350051.
- 17. Chandola V., Banerjee A., Kumar V. Anomaly Detection: A Survey. ACM Comput. Surv. 2009;41:15. doi: 10.1145/1541880.1541882.
- 18. Ranshous S., Shen S., Koutra D., Harenberg S., Faloutsos C., Samatova N.F. Anomaly Detection in Dynamic Networks: A Survey. WIREs Comput. Stat. 2015;7:223–247. doi: 10.1002/wics.1347.
- 19. Lee Y.J., Yeh Y.R., Wang Y.C.F. Anomaly Detection via Online Oversampling Principal Component Analysis. IEEE Trans. Knowl. Data Eng. 2013;25:1460–1470. doi: 10.1109/TKDE.2012.99.
- 20. Pimentel M.A., Clifton D.A., Clifton L., Tarassenko L. A Review of Novelty Detection. Signal Process. 2014;99:215–249. doi: 10.1016/j.sigpro.2013.12.026.
- 21. Esling P., Agon C. Time-Series Data Mining. ACM Comput. Surv. (CSUR) 2012;45:12. doi: 10.1145/2379776.2379788.
- 22. Li W., Mahadevan V., Vasconcelos N. Anomaly Detection and Localization in Crowded Scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2014;36:18–32. doi: 10.1109/TPAMI.2013.111.
- 23. Jia Z., Shen C., Yi X., Chen Y., Yu T., Guan X. Big-Data Analysis of Multi-Source Logs for Anomaly Detection on Network-Based System; Proceedings of the 13th IEEE Conference on Automation Science and Engineering (CASE); Xi’an, China. 20–23 August 2017; pp. 1136–1141.
- 24. Ahmed M., Mahmood A.N., Hu J. A Survey of Network Anomaly Detection Techniques. J. Netw. Comp. Appl. 2016;60:19–31. doi: 10.1016/j.jnca.2015.11.016.
- 25. Yoon M.K., Mohan S., Choi J., Christodorescu M., Sha L. Learning Execution Contexts from System Call Distribution for Anomaly Detection in Smart Embedded System; Proceedings of the Second International Conference on Internet-of-Things Design and Implementation; Pittsburgh, PA, USA. 18–21 April 2017; pp. 191–196.
- 26. Sari A. A Review of Anomaly Detection Systems in Cloud Networks and Survey of Cloud Security Measures in Cloud Storage Applications. J. Inf. Secur. 2015;6:142. doi: 10.4236/jis.2015.62015.
- 27. Høst-Madsen A., Sabeti E., Walton C., Lim S.J. Universal Data Discovery Using Atypicality; Proceedings of the 3rd International Workshop on Pattern Mining and Application of Big Data (BigPMA 2016) at the 2016 IEEE International Conference on Big Data (Big Data 2016); Washington, DC, USA. 5–8 December 2016.
- 28. Han C., Willett P., Chen B., Abraham D. A Detection Optimal Min-Max Test for Transient Signals. IEEE Trans. Inf. Theory. 1998;44:866–869. doi: 10.1109/18.661537.
- 29. Wang Z., Willett P. A Performance Study of Some Transient Detectors. IEEE Trans. Signal Proc. 2000;48:2682–2685. doi: 10.1109/78.863080.
- 30. Wang Z., Willett P.K. All-Purpose and Plug-In Power-Law Detectors for Transient Signals. IEEE Trans. Signal Proc. 2001;49:2454–2466. doi: 10.1109/78.960393.
- 31. Wang Z.J., Willett P. A Variable Threshold Page Procedure for Detection of Transient Signals. IEEE Trans. Signal Proc. 2005;53:4397–4402. doi: 10.1109/TSP.2005.857060.
- 32. Guépié B.K., Fillatre L., Nikiforov I. Sequential Detection of Transient Changes. Seq. Anal. 2012;31:528–547. doi: 10.1080/07474946.2012.719443.
- 33. Egea-Roca D., López-Salcedo J.A., Seco-Granados G., Poor H.V. Performance Bounds for Finite Moving Average Tests in Transient Change Detection. IEEE Trans. Signal Proc. 2018;66:1594–1606. doi: 10.1109/TSP.2017.2788416.
- 34. Guépié B.K., Fillatre L., Nikiforov I. Detecting a Suddenly Arriving Dynamic Profile of Finite Duration. IEEE Trans. Inf. Theory. 2017;63:3039–3052.
- 35. Hirai S., Yamanishi K. Detecting Changes of Clustering Structures Using Normalized Maximum Likelihood Coding; Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Beijing, China. 12–16 August 2012; pp. 343–351.
- 36. Yamanishi K., Miyaguchi K. Detecting Gradual Changes from Data Stream Using MDL-Change Statistics; Proceedings of the IEEE International Conference on Big Data (Big Data); Washington, DC, USA. 5–8 December 2016; pp. 156–163.
- 37. Killick R., Fearnhead P., Eckley I.A. Optimal Detection of Changepoints with a Linear Computational Cost. J. Am. Stat. Assoc. 2012;107:1590–1598. doi: 10.1080/01621459.2012.737745.
- 38. Zou S., Fellouris G., Veeravalli V.V. Quickest Change Detection under Transient Dynamics: Theory and Asymptotic Analysis. IEEE Trans. Inf. Theory. 2018:1. doi: 10.1109/TIT.2018.2877972.
- 39. Molloy T.L., Ford J.J. Minimax Robust Quickest Change Detection in Systems and Signals with Unknown Transients. IEEE Trans. Autom. Control. 2018:1. doi: 10.1109/TAC.2018.2872198.
- 40. Veeravalli V.V., Banerjee T. Quickest Change Detection. Acad. Press Library Signal Proc. 2013;3:209–256.
- 41. Fuh C.D., Tartakovsky A.G. Asymptotic Bayesian Theory of Quickest Change Detection for Hidden Markov Models. IEEE Trans. Inf. Theory. 2019;65:511–529. doi: 10.1109/TIT.2018.2843379.
- 42. Lavielle M. Using Penalized Contrasts for the Change-Point Problem. Signal Proc. 2005;85:1501–1510. doi: 10.1016/j.sigpro.2005.01.012.
- 43. Larsen R.J., Marx M. An Introduction to Mathematical Statistics and Its Applications. Volume 2. Prentice-Hall; Englewood Cliffs, NJ, USA: 1986.
- 44. Roos T., Rissanen J. On Sequentially Normalized Maximum Likelihood Models; Proceedings of the Workshop on Information Theoretic Methods in Science and Engineering (WITMSE-08); Tampere, Finland. 18 August 2008.
- 45. Sabeti E., Host-Madsen A. Enhanced MDL with Application to Atypicality; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Aachen, Germany. 25–30 June 2017.
- 46. Scharf L.L. Statistical Signal Processing: Detection, Estimation, and Time Series Analysis. Addison-Wesley; Boston, MA, USA: 1990.
- 47. Grunwald P.D. The Minimum Description Length Principle. MIT Press; Cambridge, MA, USA: 2007.
- 48. Rissanen J. Stochastic Complexity in Statistical Inquiry. Volume 15. World Scientific; Singapore: 1998.
- 49. Forchini G. The Density of the Sufficient Statistics for a Gaussian AR(1) Model in Terms of Generalized Functions. Stat. Probab. Lett. 2000;50:237–243. doi: 10.1016/S0167-7152(00)00111-5.
- 50. Mallat S. A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press; Cambridge, MA, USA: 2008.
- 51. Vetterli M., Kovacevic J. Wavelets and Subband Coding. Volume 995. Prentice Hall; Englewood Cliffs, NJ, USA: 1995.
- 52. Vetterli M., Herley C. Wavelets and Filter Banks: Theory and Design. IEEE Trans. Signal Process. 1992;40:2207–2232. doi: 10.1109/78.157221.
- 53. Mitra S.K., Kuo Y. Digital Signal Processing: A Computer-Based Approach. Volume 2. McGraw-Hill; New York, NY, USA: 2006.
- 54. Willems F.M.J., Shtarkov Y., Tjalkens T. The Context-Tree Weighting Method: Basic Properties. IEEE Trans. Inf. Theory. 1995;41:653–664. doi: 10.1109/18.382012.
- 55. Willems F., Shtarkov Y., Tjalkens T. Reflections on “The Context Tree Weighting Method: Basic Properties”. Newslett. IEEE Inf. Theory Soc. 1997;47:1.
- 56. Sabeti E., Høst-Madsen A. How Interesting Images Are: An Atypicality Approach for Social Networks; Proceedings of the IEEE International Conference on Big Data (Big Data); Washington, DC, USA. 5–8 December 2016.
- 57. Muirhead R.J. Aspects of Multivariate Statistical Theory. Volume 197. John Wiley & Sons; Hoboken, NJ, USA: 2009.
- 58. Silver K. Master’s Thesis. University of Hawaii; Honolulu, HI, USA: Nov 12, 2014. A Passive Acoustic Automated Detector for Sei and Fin Whale Calls.
- 59. Host-Madsen A., Sabeti E. Atypical Information Theory for Real-Valued Data; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Hong Kong, China. 14–19 June 2015; pp. 666–670.
- 60. Goldberger A.L., Amaral L.A.N., Glass L., Hausdorff J.M., Ivanov P.C., Mark R.G., Mietus J.E., Moody G.B., Peng C.K., Stanley H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation. 2000;101:e215–e220. doi: 10.1161/01.CIR.101.23.e215.