Abstract
The exceedance point process approach of Hsing et al. is extended to multivariate stationary sequences and some weak convergence results are obtained. It is well known that under general mixing assumptions, high level exceedances typically have a limiting Compound Poisson structure where multiple events are caused by the clustering of exceedances. In this paper we explore (a) the precise effect of such clustering on the limit, and (b) the relationship between point process convergence and the limiting behavior of maxima. Following this, the notion of multivariate extremal index is introduced which is shown to have properties analogous to its univariate counterpart. Two examples of bivariate moving average sequences are presented for which the extremal index is calculated in some special cases.
Keywords: dependence function, exceedance, extremal index, multivariate, point process, stationary
1. Introduction
Extreme value theory for multivariate iid sequences has been studied for quite some time now but attention to the dependent case has been relatively recent. For univariate sequences it is known that local dependence causes extreme values to occur in clusters, which in turn results in a stochastically smaller distribution for the maximum than if the observations were independent. We begin with a brief review of these results, which we shall later extend to the multivariate case.
Let {ξn} be a univariate stationary sequence. Write Mn = max{ξ1,…,ξn} and for τ>0, let {un(τ)} denote a sequence satisfying limn→∞ nP{ξ1>un(τ)}=τ. Under quite general mixing assumptions there exist constants 0 ≤ θ′ ≤ θ″ ≤ 1 such that

lim supn→∞ P{Mn≤un(τ)} = e^−θ′τ and lim infn→∞ P{Mn≤un(τ)} = e^−θ″τ

for all τ. (See Ref. [1], although the idea actually dates back to Refs. [2–4].) Thus if P{Mn≤un(τ0)} converges for some τ0, then θ′=θ″ (=θ, say) and hence limn→∞ P{Mn≤un(τ)} = e^−θτ for all τ>0. The common value θ is then called the extremal index of {ξn}. We shall assume θ to be positive whenever it exists, since the case θ=0 corresponds to a degenerate limiting distribution for Mn. Note that θ=1 for iid sequences. Let {ξ̂n} be an iid sequence with ξ̂1 distributed as ξ1, called the associated iid sequence, and write M̂n = max{ξ̂1,…,ξ̂n}. If {ξn} has extremal index θ and limn→∞ P{Mn≤vn(t)} = H(t) for a suitable family of normalizing constants {vn(t)}, then it follows (upon identifying e^−θτ with H(t)) that

H(t) = Ĥ(t)^θ, where Ĥ(t) = limn→∞ P{M̂n≤vn(t)}.   (1)
The extremal index is thus a measure of the effect of dependence on the limiting distribution of Mn. The stochastically smaller limiting distribution of Mn is in fact a direct result of the clustering of extremes, as explained below. See Ref. [5] for details.
For fixed τ>0 let the exceedance point process Nn be defined by

Nn(B) = Σi=1,…,n I{ξi>un(τ)} I{i/n∈B}, B⊂[0,1],

where IA denotes the indicator function of the event A. Then for a broad class of weakly dependent sequences, the limit in distribution of Nn, if it exists, is a Compound Poisson process with intensity θτ and multiplicity distribution π on {1,2,…}. The Poisson events may in fact be regarded as the positions of "exceedance clusters" while the multiplicities correspond to cluster sizes. More explicitly, one may divide the n observations into kn blocks of roughly equal size and regard exceedances within each block as forming a single "cluster", so that the cluster sizes are given by Nn(Ji), i=1,…,kn, where Ji=((i−1)/kn, i/kn]. For a suitable choice of kn depending on the mixing rate of {ξn}, one then has

P{Mn≤un(τ)} − (P{Nn(J1)=0})^kn → 0

and

P{Nn(J1)=j | Nn(J1)>0} → π(j), j=1,2,…,

so that in particular, limn→∞ knP{Nn(J1)>0}=θτ. Hence

limn→∞ ENn[0,1] = limn→∞ knP{Nn(J1)>0} E[Nn(J1) | Nn(J1)>0] = θτ Σj≥1 jπ(j),
while on the other hand, limn→∞ ENn[0,1] = limn→∞ nP{ξ1>un(τ)} = τ. The cluster size distribution and the extremal index are therefore related by
θ = (Σj≥1 jπ(j))^−1.   (2)
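As a concrete numerical illustration of Eq. (2), consider the following minimal simulation sketch (not part of the theory above; the model, sample size, block count, and threshold level are all illustrative choices). It estimates θ for the moving-maximum sequence ξi=max{ηi−1,ηi} with iid standard exponential ηi, a sequence whose extremal index is 1/2 (cf. Example 4.1): exceedances occur in pairs, so the number of blocks containing an exceedance is roughly half the total number of exceedances.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10**6, 10**4                  # sample size and number of blocks k_n
eta = rng.standard_exponential(n + 1)
xi = np.maximum(eta[:-1], eta[1:])   # xi_i = max(eta_{i-1}, eta_i); theta = 1/2

tau = 2000.0
u = np.log(2 * n / tau)              # P{xi_1 > u} ~ 2e^{-u} = tau/n, i.e., u plays the role of u_n(tau)
exceed = xi > u

r = n // k                           # block length r_n = [n/k_n]
clusters = exceed[: k * r].reshape(k, r).any(axis=1).sum()
print(clusters / exceed.sum())       # ~ theta = 1/2, by Eq. (2)
```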
Now let {ξn=(ξn1,…,ξnd), n∈ℤ} be a multivariate stationary sequence where d≥1 is a fixed integer, and write Mn=(Mn1,…,Mnd) where Mnj = max{ξ1j,…,ξnj}, j=1,…,d. The study of multivariate extremes began in the early 1950s, focusing mainly on the limiting behavior of Mn under a linear normalization, when the observations are iid. The resulting class of limiting distributions was characterized in Ref. [6] and domains of attraction criteria were given in Ref. [7]. See also Ref. [8], Chapter 5, for an account of the literature surrounding this theory. For stationary sequences satisfying a general mixing assumption, it is known (see Refs. [9, 10], and Theorem 1.1 below) that the class of limiting distributions of Mn is the same as for iid sequences. In this paper we explore the precise effect of dependence on the limiting distribution by extending the univariate theory described above to the multivariate case. Essentially, this involves studying the inter-relationship between the two dependence structures present, one due to dependence over time and the other due to the dependence between the various components of the multivariate observations. The ideas become most transparent when presented in terms of so-called dependence functions [8]. Here we adopt the slightly modified definition found in Ref. [9]. A distribution function D on [0,1]d is called a dependence function if Dj(Dj(u))=Dj(u), u∈[0,1], j=1,…,d, where the subscript j signifies the jth marginal. The dependence function of a distribution F on IRd is defined by DF(u)=P{F1(X1)≤u1,…,Fd(Xd)≤ud}, u=(u1,…,ud)∈[0,1]d, where (X1,…,Xd) is a random vector with distribution F. More generally, any dependence function D satisfying F(x)=D(F1(x1),…,Fd(xd)) could be defined to be a dependence function of F, although the present choice is a natural one.
Write T=(0,1]d\{1} where 1=(1,…,1)∈IRd, and for t=(t1,…,td)∈T, let vn(t)=(vn1(t1),…,vnd(td)) where vnj(tj) satisfies limn→∞ nP{ξ1j>vnj(tj)} = −log tj. Let Hn denote the distribution function of Mn (i.e., Hn(x)=P{Mn≤x}), with marginals Hnj, j=1,…,d. Then (see Refs. [8, 11])
Hn(vn(t)) →w H(t), t∈T,   (3)

if and only if both the marginals and the dependence functions converge, i.e., if and only if Hnj(vnj(tj)) →w Hj(tj), j=1,…,d, and DHn →w DH.
The limiting behavior of Mn can therefore be separated into two parts, one pertaining to the convergence of the marginals (a univariate problem) and the other to the convergence of the dependence functions. Here we focus attention exclusively on the latter. It should be noted that the choice of normalizing constants does not affect the dependence function of the limit distribution H, but only alters the marginals (see Ref. [9], Lemma 3.2). Since our main interest is in the dependence function, the present choice of normalizing constants is appropriate in view of the fact that it results in Uniform[0,1] marginals for the limit distribution when {ξn} is iid, so that in particular DH=H. According to Theorem 3.3 of Ref. [9], the class of all possible limits H in Eq. (3) (for iid {ξn}) is precisely the class of extreme dependence functions, that is, those that satisfy
D(t1^(1/n),…,td^(1/n))^n = D(t1,…,td)   (4)

for each n≥1 and t=(t1,…,td)∈[0,1]d.
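For example, both D(t1,…,td)=Πj=1,…,d tj (independence) and D(t1,…,td)=min{t1,…,td} (total dependence) satisfy Eq. (4): for the latter, (minj tj^(1/n))^n = minj tj. Every extreme dependence function in fact satisfies Πj tj ≤ D(t) ≤ minj tj (cf. Ref. [8], Chapter 5).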
Theorem 1.1 below shows that the same is true also if {ξn} is a stationary sequence satisfying the following mixing condition.
For t∈T and 1≤i≤j≤n, let ℬi,j(vn(t)) denote the σ-field generated by the events {ξs≤vn(t)}, i≤s≤j, and for 1≤l≤n−1 define

αn,l = max{|P(A∩B)−P(A)P(B)| : A∈ℬ1,k(vn(t)), B∈ℬk+l,n(vn(t)), 1≤k≤n−l}.

The mixing condition Δ(vn(t)) is then said to hold if limn→∞ αn,ln = 0 for some sequence {ln} satisfying ln/n→0. This is the multivariate version of the mixing condition used in Ref. [5] and is slightly stronger than the D(un) condition in Ref. [9]. Henceforth {ξn} will be assumed to satisfy Δ(vn(t)), for some or all t, as required.
Theorem 1.1. Let {ξn} satisfy Δ(vn(t)) for all t∈T and suppose that P{Mn≤vn(t)} →w H(t), non-degenerate. Then DH is an extreme dependence function and hence, in particular, H(t^c)=H(t)^c for each t∈[0,1]d and c>0 (where t^c=(t1^c,…,td^c)).
Proof: The first part is an immediate consequence of Theorem 4.2 of Ref. [9], while the second part follows from the definition of extreme dependence functions upon noting that (by the univariate theory described above) the marginals of H are of the form Hj(tj)=tj^θj, where θj is the extremal index of {ξnj}, the jth-component sequence of {ξn}. □
In the next section we apply the exceedance point process approach to multivariate extremes and obtain some weak convergence results. The multivariate extremal index is then defined (in Sec. 3), based on the multivariate analogue of Eq. (1). It is seen to be a function of only d − 1 variables and its properties naturally extend those of the univariate extremal index. Finally in Sec. 4 we consider two examples of bivariate moving average sequences for which the computation of the extremal index is demonstrated.
2. Exceedance Point Processes
Fix t∈T, let δij=I{ξij>vnj(tj)}, j=1,…,d, and put δi=(δi1,…,δid). The multivariate exceedance point process is then defined by

Nn(B) = Σi=1,…,n δi I{i/n∈B}, B∈ℬ,   (5)

where ℬ denotes the Borel subsets of [0,1].
Assume that {ξn} satisfies Δ(vn(t)). If also Nn→d N0, then it may be shown (as in the univariate case) that the limit N0 is a point process on [0,1] which is of Compound Poisson type. More precisely, the Laplace Transform of N0 is given by
E exp{−Σj=1,…,d ∫[0,1] fj dN0j} = exp{−v ∫[0,1] (1 − Σy π(y) exp(−Σj=1,…,d yjfj(s))) ds}.   (6)
Here N0j denotes the jth component of N0, v is a positive constant, π is a probability distribution on ℤ+d\{0} (the d-vectors of non-negative integers, excluding the origin), and the fj's are non-negative functions on [0,1].
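In particular, for d=1 Eq. (6) reduces to

E exp{−∫[0,1] f dN0} = exp{−v ∫[0,1] (1 − Σj≥1 π(j)e^−jf(s)) ds},

the Laplace Transform of a Compound Poisson process on [0,1] with intensity v and multiplicity distribution π, in agreement with the univariate theory of Ref. [5].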
Let {kn} be any sequence of positive integers satisfying
kn → ∞, knln/n → 0, knαn,ln → 0 as n→∞.   (7)
Set rn=[n/kn] (the largest integer not exceeding n/kn) and put Jn1=[0,rn/n]. Define the probability distribution πn on ℤ+d\{0} by

πn(y) = P{Nn(Jn1)=y | Nn(Jn1)≠0}.
The following theorem, which gives a useful characterization of the convergence of Nn, is an immediate consequence of the results in Sec. 5 of Ref. [12].
Theorem 2.1. Nn →d N0 if and only if πn →w π and P{Mn≤vn(t)} → e^−v, and in that case the Laplace Transform of N0 is given by Eq. (6).
Next we consider the iid case in some detail and obtain an interesting connection with Theorem 5.3.1 of Ref. [8].
Proposition 2.2. Let {ξn} be iid and for fixed t∈T let Nn be defined by Eq. (5). If Nn →d N0, then the multiplicity distribution π in Eq. (6) is supported on the set S={0,1}d\{0}.
Proof: Observe that Δ(vn(t)) is trivially satisfied since αn,l ≡ 0, so that we may take ln≡1 and kn=n. Then πn(y)=P{δ1=y | δ1≠0}, which is clearly supported on S. The result is now immediate since S is a closed set and πn →w π by Theorem 2.1. □
Making the dependence on t explicit, we now write Nn=Nn(t), N0=N0(t), π=π(t), v=v(t). In addition we shall require the following notation from Ref. [8]. For 1≤k≤d, let j(k)=(j1,…,jk) denote a vector with integer-valued components 1≤j1<j2<…<jk≤d, and for x=(x1,…,xd)∈IRd write xj(k)=(xj1,…,xjk). Define the "survival function"

Gj(k)(xj(k)) = P{ξ1j1>xj1,…,ξ1jk>xjk}
and write Gj(k)(vn(t)) for Gj(k)(vnj1(tj1),…,vnjk(tjk)). For each j(k), let yj(k) denote the element in S={0,1}d\{0} whose jth component equals 1 if and only if j=ji for some i=1,…,k. (This defines a natural 1-1 correspondence between S and the j(k)'s.)
Theorem 2.3. Let {ξn} be iid. Then Nn(t) →d N0(t) for some fixed t∈T if and only if
limn→∞ nGj(k)(vn(t)) = hj(k)(t) exists (finite)   (8)
for each j(k), 1≤k≤d. In that case N0(t) has Laplace Transform given by Eq. (6) with
v(t) = Σk=1,…,d (−1)^(k+1) Σj(k) hj(k)(t)   (9)
and with π(t) determined by the relations
v(t) π(t)({y∈S : y≥yj(k)}) = hj(k)(t), for each j(k), 1≤k≤d.   (10)
Proof: Write pn = P{ξ1≰vn(t)}, so that P{Mn≤vn(t)} = (1−pn)^n. If Eq. (8) holds for each k, then by inclusion–exclusion
limn→∞ npn = Σk=1,…,d (−1)^(k+1) Σj(k) hj(k)(t) = v(t)   (11)
and hence limn→∞ P{Mn≤vn(t)} = e^−v(t).
Next observe that for each j(k), 1≤k≤d,
Gj(k)(vn(t)) = P{δ1≥yj(k)} = pn πn({y∈S : y≥yj(k)}),   (12)
where the inequality y≥yj(k) is interpreted componentwise. Moreover, this relationship is invertible in the sense that each of the probabilities πn(y), y∈S, can be expressed as a linear combination of the Gj(k)(vn(t))'s. Therefore by Eq. (8), limn→∞ πn(y) = π(t)(y) (say) exists and satisfies Eq. (10). Hence by Theorem 2.1, Nn(t) →d N0(t) where N0(t) has the specified parameters. Conversely, if Nn(t) →d N0(t), then πn and P{Mn≤vn(t)} converge (by Theorem 2.1 again), and hence Eq. (8) follows by virtue of Eqs. (11) and (12). □
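To illustrate Theorem 2.3, take d=2 and write hj=hj(t)=−log tj (j=1,2) and h12=h12(t). Then Eq. (9) gives v(t)=h1+h2−h12, and Eq. (10) inverts to

π(t)(1,1)=h12/v(t), π(t)(1,0)=(h1−h12)/v(t), π(t)(0,1)=(h2−h12)/v(t).

For instance, if ξn1=ξn2 for all n, then h12=min{h1,h2} and v(t)=max{h1,h2}=−log min{t1,t2} (so that H(t)=min{t1,t2}), and π(t) places mass min{h1,h2}/max{h1,h2} on (1,1), the remainder falling on (1,0) or (0,1) according as t1<t2 or t1>t2.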
Corollary 2.4. Let {ξn} be iid. Then Nn(t) →d N0(t) for each t∈T if and only if P{Mn≤vn(t)} →w H(t). Moreover H and {v(t),π(t)}t∈T determine each other.
Proof: (Sketch) The first part follows from Theorem 2.3 above and Theorem 5.3.1 of Ref. [8], which states that P{Mn≤vn(t)} →w H(t) if and only if Eq. (8) holds for each t∈T. Note that H(t)=e^−v(t), so that H and the v(t)'s can be obtained from each other. Also the π(t)'s can be obtained from the v(t)'s by first inverting Eq. (9) to get the hj(k)(t)'s and then inverting Eq. (10). (The inversion of Eq. (9) is carried out inductively using the fact that the weak convergence of Hn(vn(t)) implies that of all lower-dimensional marginals.) □
Analogous results for the dependent case take on a slightly different form. Let {ξn} be a stationary sequence satisfying Δ(vn(t)) for each t∈T. As before let rn=[n/kn], where {kn} is any sequence satisfying Eq. (7), and define

G*n,j(k)(t) = P{Mrn,j1>vnj1(tj1),…,Mrn,jk>vnjk(tjk)}.
Theorem 2.5. Let {ξn} be a stationary sequence satisfying Δ(vn(t)) for each t∈T. Then P{Mn≤vn(t)} →w H(t) if and only if

limn→∞ knG*n,j(k)(t) = h*j(k)(t) exists (finite)

for each j(k), 1≤k≤d and t∈T, and in that case

H(t) = exp{−Σk=1,…,d (−1)^(k+1) Σj(k) h*j(k)(t)}.
Proof: Observe that the mixing condition Δ(vn(t)) implies that {ξnj(k)} satisfies Δ(vnj(k)(t)) for each j(k) (with obvious notation). Hence it may be shown as in the univariate case (see Lemma 2.1 of Ref. [1]) that
P{Mnj(k)≤vnj(k)(t)} − (P{Mrnj(k)≤vnj(k)(t)})^kn → 0   (13)
for each j(k). The result may therefore be proved in exactly the same way as Theorem 5.3.1 of Ref. [8]. □
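For example, when d=2, Theorem 2.5 gives H(t)=exp{−[h*1(t)+h*2(t)−h*12(t)]}, where, by Eq. (13) applied to the component sequences and the univariate theory, h*j(t) = −log Hj(tj) = −θj log tj. Thus all of the information in H beyond the marginal extremal indices is carried by the bivariate term h*12(t).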
Remark: Under the hypothesis of Theorem 2.5, if Nn(t) →d N0(t), t∈T, with parameters v(t) and π(t), then H(t)=e^−v(t), as in the iid case. However it is not possible in general to recover the π(t)'s from H, since the clustering of exceedances may cause the support of π(t) to extend beyond S. References [9, 10] give sufficient conditions (analogous to the D′(un) condition of Ref. [13]) under which clustering does not occur, so that Corollary 2.4 can be extended to stationary sequences satisfying this condition.
A distribution function F on IRd is said to be independent if F(x)=Πj=1,…,d Fj(xj). If {ξn} is iid and P{Mn≤vn(t)} →w H(t), then it follows from Corollary 5.3.1 of Ref. [8] that H is independent if and only if the marginals of H are pairwise independent. The analogous result for the dependent case is stated below. The proof (which is omitted) is essentially the same as for the iid case, but uses Theorem 2.5 instead of Theorem 5.3.1 of Ref. [8].
Corollary 2.6. Let {ξn} be a stationary sequence satisfying Δ(vn(t)) for each t∈T and suppose that P{Mn≤vn(t)} →w H(t). Then H is independent if and only if limn→∞ knP{Mrn,j>vnj(tj), Mrn,l>vnl(tl)} = 0 for each 1≤j<l≤d, t∈T, i.e., if and only if h*j(2)(t)=0 for each j(2) and each t∈T.
It is shown in Ref. [14] that H is independent if H(t)=Πj=1,…,d Hj(tj) for some t∈(0,1)d. Although the result in Ref. [14] is only stated for iid sequences under a linear normalization, the proof essentially rests on the defining property of extreme dependence functions, namely Eq. (4). Consequently the result extends to the present more general situation allowing dependence and non-linear normalizations. Corollary 2.6 can therefore be improved as follows.
Corollary 2.7. Let {ξn} be as in Corollary 2.6 and suppose that P{Mn≤vn(t)}→w H(t). Then the following are equivalent:
(i) H is independent,
(ii) H(t)=Πj=1,…,d Hj(tj) for some t∈(0,1)d,
(iii) h*j(2)(t)=0 for each j(2), for some t∈(0,1)d.
It should be noted that Refs. [9, 10] give some interesting sufficient conditions for H to be independent when {ξn} is a stationary sequence. A natural question to ask in the present context is whether H is independent whenever Ĥ is. Proposition 3.4 gives a necessary and sufficient condition for this in terms of the extremal index, but the answer in general is negative and a counter-example can be found in [10]. It seems more plausible that the converse may be true, i.e., that Ĥ is independent whenever H is. In fact however, this too is not the case, as shown by an interesting counter-example in [15].
We conclude this section by stating a result which extends Theorem 5.1 of [5] and is proved similarly.
Theorem 2.8. Let {ξn} be a stationary sequence satisfying Δ(vn(t)) for each t∈T and suppose that Nn(t) →d N0(t) for some t∈T. Then Nn(t^c) →d N0(t^c) for each c>0 and furthermore, v(t^c)=cv(t) and π(t^c)=π(t) (where t^c=(t1^c,…,td^c)).
3. The Multivariate Extremal Index
Let {ξn} be a stationary sequence and {ξ̂n} the associated iid sequence. Suppose that P{Mn≤vn(t)} →w H(t) and P{M̂n≤vn(t)} →w Ĥ(t). The multivariate extremal index of {ξn} is then defined by the relation H(t)=Ĥ(t)^θ(t) (see Eq. (1)), or more explicitly,

θ(t) = log H(t) / log Ĥ(t), t∈T.
Observe that θ(t) is well defined since Ĥ has Uniform[0,1] marginals and hence 0<Ĥ(t)<1 on T. The following results describe some basic properties of the multivariate extremal index.
Proposition 3.1. Assume that {ξn} satisfies Δ(vn(t)) for each t∈T and has extremal index θ(t). Then
(i) θ(t)=θ(t^c) for each t∈T and c>0, and
(ii) for each j=1,…,d, {ξnj} has extremal index θj=θ(t), where t∈T has all coordinates equal to 1 except the jth.
(Note that by (i), θ(t) is constant along the contours Lt={t^c : c>0}, t∈T, and hence θj in (ii) is well defined.)
Proof: Recall that (by Theorem 1.1) H(t^c)=H(t)^c and Ĥ(t^c)=Ĥ(t)^c, so that (i) follows from the definition of the extremal index. Next, for t∈T with all coordinates but the jth equal to 1, P{Mn≤vn(t)}=P{Mnj≤vnj(tj)}, and hence

limn→∞ P{Mnj≤vnj(tj)} = H(t) = Hj(tj).

Therefore by Theorem 2.2 of Ref. [1], {ξnj} has extremal index θj (say), so that Hj(tj)=tj^θj. Now H(t)=Ĥ(t)^θ(t) by definition of the extremal index, and for the present choice of t this is the same as tj^θj = tj^θ(t), whence it follows that θ(t)=θj for all such t. □
For t∈T, let Ñn(t) denote the one-dimensional point process obtained from Nn(t) via the map y→I(y≠0) from {0,1}d to {0,1}, i.e., Ñn(t)(B)=Σi=1,…,n I{δi≠0} I{i/n∈B}, B∈ℬ. Thus Ñn(t) has unit mass at i/n if and only if ξi≰vn(t). Assume that {ξn} satisfies Δ(vn(t)) and, with Jn1 as in Sec. 2, let

π̃n(t)(j) = P{Ñn(t)(Jn1)=j | Ñn(t)(Jn1)>0}, j=1,2,….
Proposition 3.2. Assume that {ξn} satisfies Δ(vn(t)) for each t∈T and has extremal index θ(t). Then θ(t) = limn→∞ (Σj≥1 jπ̃n(t)(j))^−1.
Proof: Observe that

Σj≥1 jπ̃n(t)(j) = E[Ñn(t)(Jn1) | Ñn(t)(Jn1)>0] = rnP{ξ1≰vn(t)} / P{Mrn≰vn(t)}.

Now limn→∞ P{M̂n≤vn(t)} = Ĥ(t) and limn→∞ P{Mn≤vn(t)} = H(t) (by assumption), so that limn→∞ nP{ξ1≰vn(t)} = −log Ĥ(t) and (by Eq. (13)) limn→∞ knP{Mrn≰vn(t)} = −log H(t) = −θ(t) log Ĥ(t). Therefore, since knrn/n→1,

limn→∞ Σj≥1 jπ̃n(t)(j) = limn→∞ nP{ξ1≰vn(t)} / (knP{Mrn≰vn(t)}) = 1/θ(t),

as required. □
Remark: Proposition 3.2 is simply the multivariate version of Eq. (2) and shows how the extremal index is related to the clustering of "exceedances." Indeed, according to the present viewpoint, an exceedance occurs at time i if ξi≰vn(t), i.e., if ξij>vnj(tj) for at least one j. Thus Propositions 3.1 and 3.2 show that while the degree of clustering may depend on t, it is constant on each Lt. Note also the connection to Theorem 2.8.
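To illustrate Proposition 3.2 numerically, the following minimal simulation sketch (not part of the development above; the model is that of Example 4.1 below, with iid standard exponential ηi, and the choices of n, kn, and t are illustrative) estimates θ(t) as the ratio of the number of blocks containing an exceedance ξi≰vn(t) to the total number of such exceedances. Since θ(t)=θ(t^c) by Proposition 3.1 (i), the thresholds may be set at t^c for a moderately large c so that enough exceedances occur; for t1=t2 the limiting value is 2/3 (see Example 4.1).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10**6, 10**4                   # sample size and number of blocks k_n
t1 = t2 = 0.5
c = 2000.0                            # theta(t) = theta(t^c): estimate along the ray L_t

eta = rng.standard_exponential(n + 1)
xi1 = eta[1:]                         # xi_i1 = eta_i
xi2 = np.maximum(eta[:-1], eta[1:])   # xi_i2 = max(eta_{i-1}, eta_i)

# Thresholds v_nj(t_j^c): n P{xi_1j > v_j} = -c log t_j, using P{eta > v} = e^{-v}
v1 = np.log(n / (c * np.log(1 / t1)))       # P{xi_11 > v1} = e^{-v1}
v2 = np.log(2 * n / (c * np.log(1 / t2)))   # P{xi_12 > v2} ~ 2e^{-v2}

exceed = (xi1 > v1) | (xi2 > v2)      # exceedance at i iff xi_i is not <= v_n(t^c)
r = n // k                            # block length r_n = [n/k_n]
clusters = exceed[: k * r].reshape(k, r).any(axis=1).sum()
print(clusters / exceed.sum())        # ~ theta(t) = 2/3 here
```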
The next result gives the relation between the dependence functions of H and Ĥ, which is seen to involve the extremal index in an intricate manner.
Proposition 3.3. If {ξn} has extremal index θ(t), t∈T, then
DH(t) = Ĥ(s)^θ(s), s = (t1^(1/θ1),…,td^(1/θd)),   (14)
where θj is the extremal index of {ξnj}, j=1,…,d.
Proof: By definition of the dependence function, DH(t)=P{H1(X1)≤t1,…,Hd(Xd)≤td}, where (X1,…,Xd) is a random vector with distribution H. Therefore, since Hj(x)=x^θj,

DH(t) = H(t1^(1/θ1),…,td^(1/θd)) = Ĥ(s)^θ(s), s=(t1^(1/θ1),…,td^(1/θd)),

as required. □
Remarks
Note that s=t^c (for some c>0) if and only if log sj/log sd = log tj/log td = aj (say), j=1,…,d−1. Therefore we may write Lt=La where a=(a1,…,ad−1), and hence, by the remark following Proposition 3.2, θ(t)=θ(a), i.e., the extremal index is a function of only d−1 variables.
By Proposition 3.3, DH(t)=Ĥ(s)^θ(s) with s=(t1^(1/θ1),…,td^(1/θd)). Also, if s∈La then t∈La*, where a*=(a1θ1/θd,…,ad−1θd−1/θd). Thus DH is obtained by translating the values of DĤ(=Ĥ) on La onto La*.
While the above results illustrate some of the basic properties of the multivariate extremal index, they are far from complete. For instance, it would be useful to identify the set of all "admissible" θ(•) for a given Ĥ, that is, the set of all θ(•) such that DH(•) defined by Eq. (14) is a probability distribution on [0,1]d. It would also be of interest to study the properties of θ(•) when one or both of Ĥ and H are independent. In this context we have the following simple result.
Proposition 3.4. If Ĥ is independent, then H is independent if and only if

θ(t) = Σj=1,…,d θj log tj / Σj=1,…,d log tj, t∈T.
In particular, if both Ĥ and H are independent then θ(t) is a convex combination of the θj’s.
Proof: If Ĥ is independent, then Ĥ(t)=Πj=1,…,d tj. The conclusion follows immediately from Corollary 2.7 (ii) upon taking logarithms and noting that if H is independent, then H(t)=Πj=1,…,d tj^θj. □
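For example, with d=2, θ1=1, θ2=1/2, and t=(e^−1,e^−2), the relation gives θ(t)=[1·(−1)+(1/2)(−2)]/[(−1)+(−2)]=2/3, a convex combination of θ1 and θ2 with weights 1/3 and 2/3 (in general the weights are log tj/Σl=1,…,d log tl).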
The extremal index can be given the following more general formulation. Let μ̂ and μ be the probability measures on (0,1)d corresponding to Ĥ and H, respectively. Thus, for instance,

μ̂(A) = limn→∞ P{M̂n∈vn(A)},

where vn(A)={vn(s) : s∈A}, A⊂(0,1)d. We now define θ(A) via the relationship μ(A)=μ̂(A)^θ(A), or more directly,

θ(A) = log μ(A) / log μ̂(A),

for subsets A⊂(0,1)d such that 0<μ̂(A)<1 and μ(A)>0.
Note that θ((0,t])=θ(t) for t∈T, where (0,t]=(0,t1]×…×(0,td]. Thus if {θ(t) : t∈T} is known along with either of H or Ĥ, then it is possible, at least in theory, to obtain θ(•). In practice it may not be possible to obtain θ(•) in a tractable form, but frequently one is only interested in certain special sets, typically rectangles of the form (s,t]=(s1,t1]×…×(sd,td], and for such sets the computation is easy.
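For example, for d=2 and A=(s,t], the required probabilities are obtained from H (and likewise from Ĥ) by inclusion–exclusion:

μ((s,t]) = H(t1,t2) − H(s1,t2) − H(t1,s2) + H(s1,s2),

so that θ((s,t]) = log μ((s,t]) / log μ̂((s,t]).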
The definition of Mn as the vector of componentwise maxima actually corresponds to regarding ξi as an extreme observation if ξij>vnj(tj) for some j. More generally, one may define ξi to be an extreme value if ξi∈vn(A) for some A⊂(0,1)d, in which case θ(A) has an interpretation as a measure of the clustering of such extremes. Note that the original definition of extremes corresponds to letting A=((0,t1]×…×(0,td])^c. Alternately one may consider taking A=(t1,1)×…×(td,1), which corresponds to defining ξi as an extreme observation if ξij>vnj(tj) for all j. Other choices of A are clearly possible as well.
4. Examples
We conclude with two examples, both involving bivariate stationary sequences.
Example 4.1. Let {ηn} be an iid sequence with common distribution function G, and put ξn1=ηn and ξn2=max{ηn−1,ηn}. Let F denote the distribution of ξn=(ξn1,ξn2), with marginals F1 and F2. Then F1=G and F2=G^2.
If vnj(tj) satisfies limn→∞ nP{ξ1j>vnj(tj)} = −log tj, then n(1−G(vn1(t1))) → −log t1 and n(1−G(vn2(t2))) → −(1/2)log t2, so that G(vn1(t1))^n → t1 and G(vn2(t2))^n → √t2. Moreover, for all large n, vn1(t1)≥vn2(t2) according as t1≥√t2, and so

P{Mn≤vn(t)} = G(vn2(t2)) G(min{vn1(t1),vn2(t2)})^n → min{t1,√t2} = H(t).
The marginals of H are therefore H1(t1)=t1 and H2(t2)=√t2, so that θ1=1 and θ2=1/2, and the dependence function of H is DH(t)=H(t1,t2^2)=min{t1,t2}. For the associated iid sequence, on the other hand, it is easily verified that

Ĥ(t) = √t2 min{t1,√t2},
from which it follows that

θ(t) = log H(t) / log Ĥ(t) = log min{t1,√t2} / [(1/2)log t2 + log min{t1,√t2}].
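In particular, θ(t)=1/2 whenever t1≥√t2 (where H(t)=√t2 and Ĥ(t)=t2), while for t1=t2=t∈(0,1), θ(t,t) = log t/((3/2)log t) = 2/3. Setting t2=1 recovers θ(t1,1)=1=θ1, and setting t1=1 recovers θ(1,t2)=1/2=θ2, in accordance with Proposition 3.1 (ii).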
We next consider a moving average sequence studied in Ref. [16].
Example 4.2. Let {Zk=(Zk1,Zk2)′}, −∞<k<∞, be a sequence of iid random vectors in IR2. We assume the existence of a sequence of positive constants an→∞, and a measure ν on IR2 which is finite on sets of the form {x : ‖x‖>r}, r>0 (where ‖·‖ denotes the Euclidean norm in IR2), such that nP{an^−1 Z0∈•} →v ν(•). (Here '→v' denotes vague convergence of measures on IR2 with respect to the metric ρ(x1,x2)=|1/r1−1/r2|∨|θ1−θ2|, where for i=1,2, ri and θi denote the polar coordinates of xi, and a∨b=max{a,b}.) The measure ν is necessarily of the form ν({x : ‖x‖>r, θ(x)∈A}) = r^−α S(A) for r>0 and A⊂(0,2π), where S(•) is a probability measure on (0,2π) and α>0. Hence in particular [17]
ν(cA) = c^−α ν(A)   (15)
for all c>0 and all sets A with ν(A)<∞.
Define the bivariate moving average process ξn = Σj≥0 CjZn−j, where {Cj, j≥0} is a sequence of real 2×2 matrices satisfying Σj≥0 ‖Cj‖^δ < ∞ for some δ∈(0,α), δ≤1. For x=(x1,x2)′∈IR2, write Axj = {z : Cjz∈((−∞,x1)×(−∞,x2))^c}, where A^c denotes the complement of a set A⊂IR2. Then [16]
limn→∞ P{Mn≤anx} = H(x) = exp{−ν(∪j≥0 Axj)} and limn→∞ P{M̂n≤anx} = Ĥ(x) = exp{−Σj≥0 ν(Axj)},

where Mn=(max{ξ11,…,ξn1}, max{ξ12,…,ξn2}) and M̂n is defined analogously for the associated iid sequence. The extremal index is therefore

θ(x) = ν(∪j≥0 Axj) / Σj≥0 ν(Axj).

It follows from the definition of Axj and Eq. (15) that this is in fact a function of x1/x2. Note that the extremal index defined above differs from that in Sec. 3 in that it is defined on IR2 rather than [0,1]2. However, the two definitions are equivalent, as may be seen by means of a suitable transformation from IR2 to [0,1]2.
The actual calculation of θ(x) may be quite difficult in general, but possible to carry out under appropriate simplifying assumptions.
Case (i). If Cj=cjC, where C is a fixed 2×2 matrix and the cj's are non-negative constants, then Axj=(1/cj)B(x) and ∪j≥0 Axj=(1/c)B(x), where B(x)={z : Cz∈((−∞,x1)×(−∞,x2))^c} and c=max{cj : j≥0}. Therefore by Eq. (15), ν(Axj)=cj^α ν(B(x)) and ν(∪j≥0 Axj)=c^α ν(B(x)), so that θ(x)=c^α/Σj≥0 cj^α.
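For instance, if cj=φ^j for some φ∈(0,1), then c=1 and Σj≥0 cj^α = (1−φ^α)^−1, so that θ(x) ≡ 1−φ^α: the extremal index is constant in x and decreases from 1 to 0 as the serial dependence parameter φ increases from 0 to 1.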
Case (ii). If the Cj’s are diagonal, i.e., Cj=diag[Cj1,Cj2] with cji≥0, i=1,2, then Axj={z : cjiz1>x1 or cj2z2>x2} and where ci= max{cji : j≥0}, i=1,2. In particular, taking x2=∞ and using Eq. (15) as in Case (i), we have and , so that the extremal index of .
Case (iii). Let D denote the support of v. If D⊂ {z : z1=0 or z2=0} (which is the case if the coordinates of Z0 are independent), then we may write
ν({z : z1>u or z2>v}) = a1u^−α + a2v^−α, u,v>0,   (16)
for suitable constants a1≥0 and a2≥0. Once again, assuming the cj,kl's to be non-negative and writing ckl=max{cj,kl : j≥0} for k,l=1,2, we have (writing a∧b=min{a,b})

ν(Axj) = a1[(x1/cj,11)∧(x2/cj,21)]^−α + a2[(x1/cj,12)∧(x2/cj,22)]^−α

and

ν(∪j≥0 Axj) = a1[(x1/c11)∧(x2/c21)]^−α + a2[(x1/c12)∧(x2/c22)]^−α,

so that, using Eq. (16),

θ(x) = {a1[(x1/c11)∧(x2/c21)]^−α + a2[(x1/c12)∧(x2/c22)]^−α} / Σj≥0 {a1[(x1/cj,11)∧(x2/cj,21)]^−α + a2[(x1/cj,12)∧(x2/cj,22)]^−α}.
Thus putting x2=∞, we have θ1 = (a1c11^α + a2c12^α) / Σj≥0 (a1cj,11^α + a2cj,12^α), and similarly, θ2 = (a1c21^α + a2c22^α) / Σj≥0 (a1cj,21^α + a2cj,22^α).
If also cj,12=cj,21=0 for each j (that is, if the Cj's are diagonal), then, writing cji=cj,ii and ci=cii,

θ(x) = [a1(c1/x1)^α + a2(c2/x2)^α] / Σj≥0 [a1(cj1/x1)^α + a2(cj2/x2)^α],

and in particular, θ1=c1^α/Σj≥0 cj1^α and θ2=c2^α/Σj≥0 cj2^α, in agreement with Case (ii). Note that in this case the limiting distributions of Mn and M̂n are both independent, and hence (in accordance with Proposition 3.4) θ(x) is a convex combination of θ1 and θ2.
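For instance, with α=1, a1=a2=1, cj1=2^−j, and cj2=3^−j, one finds θ1=1/2, θ2=2/3, and θ(x) = (x1^−1 + x2^−1)/(2x1^−1 + (3/2)x2^−1), which equals 4/7 when x1=x2 and moves from θ1 to θ2 as x1/x2 increases from 0 to ∞.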
The non-negativeness of the Cj’s assumed abive is not crucial and may be related, although at the cost of more involved calculations.
Acknowledgments
It is a pleasure to thank Professor M. R. Leadbetter for his encouragement and guidance during the course of this research. Part of this work was carried out during a brief visit to the University of Bern (supported by the Swiss National Science Foundation), and I am very grateful to Professor J. Hüsler for several useful discussions. Research supported by the Air Force Office of Scientific Research Contract No. F49620 850 0144.
Biography
About the author: S. Nandagopalan is an Assistant Professor of Statistics at Colorado State University.
5. References
1. Leadbetter MR. Extremes and local dependence in a stationary sequence. Z Wahrsch verw Gebiete. 1983;65:291–306.
2. Newell GF. Asymptotic extremes for independent random variables. Ann Math Statist. 1964;35:1322–1325.
3. Loynes RM. Extreme values in uniformly mixing stochastic processes. Ann Math Statist. 1965;36:993–999.
4. O'Brien GL. The maximum term of uniformly mixing stationary processes. Z Wahrsch verw Gebiete. 1974;30:57–63.
5. Hsing T, Hüsler J, Leadbetter MR. On the exceedance point process for a stationary sequence. Probab Theory Rel Fields. 1988;78:97–112.
6. de Haan L, Resnick S. Limit theory for multivariate sample extremes. Z Wahrsch verw Gebiete. 1977;40:317–337.
7. Marshall AW, Olkin I. Domains of attraction of multivariate extreme value distributions. Ann Probab. 1983;11:168–177.
8. Galambos J. The asymptotic theory of extreme order statistics. Wiley; New York: 1987.
9. Hsing T. Extreme value theory for multivariate stationary sequences. J Multivariate Anal. 1989;29:274–291.
10. Hüsler J. Multivariate extreme values in stationary random sequences. Stochastic Proc Appl. 1990;35:99–108.
11. Deheuvels P. Caractérisation complète des lois extrêmes multivariées et de la convergence des types extrêmes. Publ Instit Statist Univ Paris. 1978;23:1–36.
12. Nandagopalan S, Leadbetter MR, Hüsler J. Limit theorems for multi-dimensional random measures. Technical Report 92-14, Department of Statistics, Colorado State University; 1992.
13. Leadbetter MR. On extreme values in stationary sequences. Z Wahrsch verw Gebiete. 1974;28:298–303.
14. Takahashi R. Some properties of multivariate extreme value distributions and multivariate tail equivalence. Ann Inst Statist Math. 1987;39:637–647.
15. Catkan N. University of Bern, private communication, 1993.
16. Davis RA, Marengo J, Resnick S. Extremal properties of a class of multivariate moving averages. Proceedings of the 45th Session of the I.S.I., Amsterdam, 1985, pp. 1–14.
17. Resnick S. Extreme values, regular variation, and point processes. Springer-Verlag; New York: 1987.
