Abstract
van de Geer and Lederer (Probab. Theory Related Fields 157(1-2), 225–250, 2013) introduced a new Orlicz norm, the Bernstein-Orlicz norm, which is connected to Bernstein type inequalities. Here we introduce another Orlicz norm, the Bennett-Orlicz norm, which is connected to Bennett type inequalities. The new Bennett-Orlicz norm yields inequalities for expectations of maxima which are potentially somewhat tighter than those resulting from the Bernstein-Orlicz norm when they are both applicable. We discuss cross connections between these norms, exponential inequalities of the Bernstein, Bennett, and Prokhorov types, and make comparisons with results of Talagrand (Ann. Probab., 17(4), 1546–1570, 1989, 1991), and Boucheron et al. (2013).
AMS (2000) subject classification: Primary: 60E15, 60F10; Secondary: 60G50, 33E20
Phrases: Bennett’s inequality, Exponential bound, Maximal inequality, Orlicz norm, Poisson, Prokhorov’s inequality
1 Orlicz Norms and Maximal Inequalities
Let Ψ be an increasing convex function from [0, ∞) onto [0, ∞). Such a function is called a Young-Orlicz modulus by Dudley (1999), and a Young modulus by de la Peña and Giné (1999). Let X be a random variable. The Orlicz norm ‖X‖Ψ is defined by
‖X‖Ψ ≡ inf{c > 0: E[Ψ(|X|/c)] ≤ 1},
where the infimum over the empty set is ∞. By Jensen’s inequality it is easily shown that this does define a norm on the set of random variables for which ‖X‖Ψ is finite. The most important functions Ψ for a variety of applications are those of the form Ψ(x) = exp(x^p) − 1 ≡ Ψp(x) for p ≥ 1, and in particular Ψ1 and Ψ2 corresponding to random variables which are “sub-exponential” or “sub-Gaussian” respectively. See Krasnoseľskiĭ and Rutickiĭ (1961), Dudley (1999), Arcones and Giné (1995), de la Peña and Giné (1999), & van der Vaart and Wellner (1996) for further background on Orlicz norms, and see Rao and Ren (1991), Krasnoseľskiĭ and Rutickiĭ (1961), & Hewitt and Stromberg (1975) for more information about Birnbaum-Orlicz spaces.
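As a quick numerical illustration (not part of the original development; the helper name `orlicz_norm` is our own), the defining infimum ‖X‖Ψ = inf{c > 0: E[Ψ(|X|/c)] ≤ 1} can be computed by bisection for a discrete random variable, since c ↦ E[Ψ(|X|/c)] is decreasing in c:

```python
import math

def orlicz_norm(values, probs, Psi, lo=1e-9, hi=1e6, iters=200):
    """Compute ||X||_Psi = inf{c > 0 : E Psi(|X|/c) <= 1} by bisection.

    Assumes X is discrete with support `values` and probabilities `probs`,
    and that Psi is increasing and convex, so c -> E Psi(|X|/c) decreases.
    """
    def expected(c):
        return sum(p * Psi(abs(v) / c) for v, p in zip(values, probs))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected(mid) > 1.0:
            lo = mid          # constraint violated: the norm exceeds mid
        else:
            hi = mid          # mid already satisfies the constraint
    return hi

# Psi_1(x) = e^x - 1, the "sub-exponential" modulus from the text
Psi1 = lambda x: math.exp(x) - 1.0

# X ~ Bernoulli(1/2): E Psi1(X/c) = (e^{1/c} - 1)/2 <= 1  iff  c >= 1/log 3
c = orlicz_norm([0.0, 1.0], [0.5, 0.5], Psi1)
print(c, 1.0 / math.log(3.0))
```

Here the exact value 1/log 3 is available by hand, which makes the sketch easy to sanity-check.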
The following useful lemmas are from van der Vaart and Wellner (1996), pages 95-97, and Arcones and Giné (1995) (see also de la Peña and Giné (1999), pages 188-190), respectively.
Lemma 1.1
Let Ψ be a convex, nondecreasing, nonzero function with Ψ(0) = 0 and lim sup_{x,y→∞} Ψ(x)Ψ(y)/Ψ(cxy) < ∞ for some constant c. Then, for any random variables X1, …, Xm,
‖max1≤i≤m Xi‖Ψ ≤ K Ψ⁻¹(m) max1≤i≤m ‖Xi‖Ψ,    (1.1)
where K is a constant depending only on Ψ.
Lemma 1.2
Let Ψ be a Young modulus satisfying
Then for some constant M depending only on Ψ and every sequence of random variables {Xk: k ≥ 1},
‖supk≥1 |Xk|/Ψ⁻¹(k)‖Ψ ≤ M supk≥1 ‖Xk‖Ψ.    (1.2)
The inequality (1.1) shows that if Orlicz norms for individual random variables are under control, then the Ψ–Orlicz norm of the maximum of the Xi’s is controlled by a constant times Ψ⁻¹(m) times the maximum of the individual Orlicz norms. The inequality (1.2) shows a stronger related Orlicz norm control of the supremum of an entire sequence Xk divided by Ψ⁻¹(k) if the supremum of the individual Orlicz norms is finite. Lemma 1.2 implies Lemma 1.1 for Young functions of exponential type (such as Ψp(x) = exp(x^p) − 1 with p ≥ 1), but it does not hold for power type Young functions such as Ψ(x) = x^p, p ≥ 1. These latter Young functions continue to be covered by Lemma 1.1. Arcones and Giné (1995) carefully define Young moduli Ψp(x) = exp(x^p) − 1 for all p > 0 and use Lemma 1.2 to establish laws of the iterated logarithm for U-statistics.
A general theme is that if Ψa ≤ Ψb and we have control of the individual Ψb Orlicz norms, then Lemma 1.1 or Lemma 1.2 applied with Ψ = Ψb will yield a better bound than with Ψ = Ψa, in the sense that Ψb⁻¹(m) ≤ Ψa⁻¹(m).
Here we are interested in functions Ψ of the form
Ψ(x) = exp(h(x)) − 1,    (1.3)
where h is a nondecreasing convex function with h(0) = 0 not of the form x^p. In fact, the particular functions h of interest here are (scaled versions of):
h0(x) = x²/(2(1 + x)), h1(x) = 1 + x − √(1 + 2x), h2(x) = h(1 + x) = (1 + x) log(1 + x) − x, and h4(x) = (x/2)arcsinh(x/2),
for the particular h(x) ≡ x(log x − 1) + 1. The functions h0 and h1 are related to Bernstein exponential bounds and refinements thereof due to Birgé and Massart (1998), while the function h2 is related to Bennett’s inequality (Bennett, 1962), and h4 is related to Prokhorov’s inequality (Prokhorov, 1959).
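Assuming the definitions of h0, h1, h2, and h4 as written above, the basic comparisons used repeatedly below can be spot-checked numerically (a sketch of our own, not part of the paper's argument):

```python
import math

# The four h-functions; all vanish at 0 and are convex on [0, oo).
h0 = lambda x: x * x / (2.0 * (1.0 + x))                    # Bernstein
h1 = lambda x: 1.0 + x - math.sqrt(1.0 + 2.0 * x)           # Bernstein-Orlicz
h2 = lambda x: (1.0 + x) * math.log(1.0 + x) - x            # Bennett
h4 = lambda x: (x / 2.0) * math.asinh(x / 2.0)              # Prokhorov

for x in [0.1, 0.5, 1.0, 3.0, 10.0, 100.0]:
    # h0 and h1 are equivalent up to constants
    assert h0(x) <= h1(x) <= 2.0 * h0(x)
    # h2 dominates a scaled copy of h1 (used in Example 2.1 and Section 3)
    assert h2(x) >= 9.0 * h1(x / 3.0)
    # classical Bennett-dominates-Bernstein comparison
    assert h2(x) >= x * x / (2.0 * (1.0 + x / 3.0))

# near 0:  h0, h1, h2 ~ x^2/2  while  h4 ~ x^2/4
x = 1e-4
print(h2(x) / (x * x / 2.0), h4(x) / (x * x / 4.0))
```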
van de Geer and Lederer (2013) studied the family of Orlicz norms defined in terms of scaled versions of h1, and called them Bernstein-Orlicz norms. Our primary goal here is to compare and contrast the Orlicz norms defined in terms of h0, h1, h2, and h4. We begin in the next section by reviewing the Bernstein-Orlicz norm(s) as defined by van de Geer and Lederer (2013). Section 3 gives corresponding results for what we call the Bennett-Orlicz norm(s) corresponding to the function h2. In Section 4 we give further comparisons and two applications.
2 The Bernstein-Orlicz Norm
For a given number L > 0, van de Geer and Lederer (2013) have defined the Bernstein-Orlicz norm with
Ψ1(x; L) ≡ exp((2/L²)h1(Lx)) − 1 = exp((√(1 + 2Lx) − 1)²/L²) − 1.    (2.1)
It is easily seen that Ψ1(x; L) → exp(x²) − 1 as L ↘ 0.
The following three lemmas of van de Geer and Lederer (2013) should be compared with the development on page 96 of van der Vaart and Wellner (1996).
Lemma 2.1
Let τ ≡ ‖Z‖Ψ1(·; L) < ∞. Then for all t > 0,
or, equivalently, with ,
or
| (2.2) |
Lemma 2.2
Suppose that for some τ and L > 0 we have
Equivalently, the inequality (2.2) holds. Then ‖Z‖Ψ1(·; L) ≤ √3 τ.
Example 2.1
Suppose that X ~ Poisson(ν). Then it is well known (see e.g. Boucheron et al. (2013), page 23) that
P(|X − ν| ≥ x) ≤ 2 exp(−ν h2(x/ν)) for all x > 0,
where h2(x) = h(1 + x) = (x + 1) log(x + 1) − x. Thus the inequality involving h1 holds with 9ν = 2/L² and 1/(3ν) = L/τ; that is, L = √2/(3√ν) and τ = √(2ν). We conclude from Lemma 2.2 that
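The upper-tail half of this Poisson bound can be checked directly for small ν by summing the Poisson mass function; `poisson_upper_tail` below is our own helper:

```python
import math

def poisson_upper_tail(nu, k0):
    """P(X >= k0) for X ~ Poisson(nu), by direct summation of the pmf."""
    pmf = math.exp(-nu)          # P(X = 0)
    cdf = 0.0
    for k in range(k0):
        cdf += pmf
        pmf *= nu / (k + 1)      # advance to P(X = k + 1)
    return 1.0 - cdf

h2 = lambda x: (1.0 + x) * math.log(1.0 + x) - x

nu = 4.0
for x in [2.0, 6.0, 10.0]:
    tail = poisson_upper_tail(nu, int(nu + x))   # P(X >= nu + x), integer threshold
    bound = math.exp(-nu * h2(x / nu))           # exp(-nu h2(x/nu))
    assert tail <= bound
    print(x, tail, bound)
```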
Pisier (1983) and Pollard (1990) showed how to bound the Orlicz norm of the maximum of random variables with bounded Orlicz norms; see also de la Peña and Giné (1999), section 4.3, and van der Vaart and Wellner (1996), Lemma 2.2.2, page 96. The following bound for the expectation of the maximum was given by van de Geer and Lederer (2013); also see Boucheron et al. (2013), Theorem 2.5, pages 32-33.
Lemma 2.3
Let τ and L be positive constants, and let Z1, …, Zm be random variables satisfying max1≤j≤m ‖Zj‖Ψ1(·; L) ≤ τ. Then
E[max1≤j≤m Zj] ≤ τ(√(log(1 + m)) + (L/2) log(1 + m)).    (2.3)
Corollary 2.1
For m ≥ 2
In particular when Zj ~ Poisson(ν) for 1 ≤ j ≤ m
Proof
This follows from Lemma 2.3 since for x ≥ 1. The Poisson(ν) special case then follows from Example 2.1.
It will be helpful to relate Ψ1(·; L) to several functions appearing frequently in the theory of exponential bounds as follows: for x ≥ 0, we define
| (2.4) |
It is easily shown (see e.g. Boucheron et al. (2013) Exercise 2.8, page 47) that
| (2.5) |
A trivial restatement of the inequality on the left above and some algebra and easy inequalities yield
| (2.6) |
The latter inequalities imply that the Orlicz norms based on h0 and h1 are equivalent up to constants.
One reason the functions h0 and h1 are so useful is that they both have explicit inverses: from Boucheron, Lugosi, and Massart (2013), page 29, for h1 and direct calculation for h0,
h1⁻¹(y) = y + √(2y),  h0⁻¹(y) = y + √(2y + y²).
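These closed forms are easy to verify numerically; the sketch below (our own) simply checks that composing each function with its claimed inverse returns the input:

```python
import math

h0 = lambda x: x * x / (2.0 * (1.0 + x))
h1 = lambda x: 1.0 + x - math.sqrt(1.0 + 2.0 * x)

# claimed explicit inverses
h0_inv = lambda y: y + math.sqrt(2.0 * y + y * y)
h1_inv = lambda y: y + math.sqrt(2.0 * y)

for y in [0.0, 0.1, 1.0, 5.0, 50.0]:
    assert abs(h0(h0_inv(y)) - y) < 1e-9 * (1.0 + y)
    assert abs(h1(h1_inv(y)) - y) < 1e-9 * (1.0 + y)
```

For h1 the verification is exact algebra: with x = y + √(2y), one has 1 + 2x = (1 + √(2y))², so h1(x) = y.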
To relate the inequalities in Lemmas 2.1 and 2.2 to more standard inequalities (with names) we note that
This implies immediately that the inequality in Lemma 2.2 can be rewritten as
Here is a formal statement of a proposition relating exponential tail bounds in the traditional Bernstein form in terms of h0 to tail bounds in terms of the (larger) function h1.
Proposition 2.1
Suppose that a random variable Z satisfies
| (2.7) |
for numbers A, B > 0. Then the hypothesis of Lemma 2.2 holds with L and τ given by L² = 2B²/A and τ = 2^{3/2}A^{1/2}:
| (2.8) |
| (2.9) |
Proof
This follows from Eq. 2.6 and elementary manipulations.
The classical route to proving inequalities of the form given in Eq. 2.7 for sums of independent random variables is via Bernstein’s inequality; see for example van der Vaart and Wellner (1996), Lemmas 2.2.9 and 2.2.11, pages 102 and 103, or Boucheron et al. (2013), Theorem 2.10, page 37. But recent developments of concentration inequalities via Stein’s method yield inequalities of the form given in Eq. 2.7 for many random variables Z which are not sums of independent random variables: see, for example, Ghosh and Goldstein (2011a), Ghosh and Goldstein (2011b), & Goldstein and Iṡlak (2014). The point of the previous proposition is that (up to constants) these inequalities in terms of h0 can be re-expressed in terms of the (larger) function h1.
3 Bennett’s Inequality and the Bennett-Orlicz Norm
We begin with a statement of a version of Bennett’s inequality for sums of bounded random variables; see Bennett (1962), Shorack and Wellner (1986), & Boucheron et al. (2013), page 35 (but note that their h is our h2 = h(1 + ·)). Let h(x) ≡ x(log x − 1) + 1 and h2(x) ≡ h(1 + x). As noted in Example 2.1 above, the function h also appears in exponential bounds for Poisson random variables: see Shorack and Wellner (1986), page 485, and Boucheron et al. (2013), page 23.
Proposition 3.1
(Bennett) (i) Let X1, …, Xn be independent with E(Xj) = μj, Var(Xj) = σj², and max1≤j≤n(Xj − μj) ≤ b. Let Sn ≡ X1 + ⋯ + Xn and V ≡ σ1² + ⋯ + σn². Then with ψ(x) ≡ 2h(1 + x)/x²,
P(Sn − E(Sn) ≥ z) ≤ exp(−(z²/(2V))ψ(bz/V)) = exp(−(V/b²)h2(bz/V))    (3.1)
for all z > 0.
(ii) If, in addition, max1≤j≤n |Xj − μj| ≤ b, then
P(|Sn − E(Sn)| ≥ z) ≤ 2 exp(−(V/b²)h2(bz/V)) for all z > 0.
Using the inequality h(1 + x) ≥ 9h1(x/3), it follows that
Thus an inequality of the form of that in Lemma 2.1 holds with 2/L² = 9V/b² and L/τ = b/(3V); that is, with L = √2 b/(3√V) and τ = √(2V). It follows from Lemma 2.2 that ‖Sn − E(Sn)‖Ψ1(·; L) ≤ √3 τ = √(6V). But this bound has not taken advantage of the fact that the first bound above involves the function h (or h2) rather than h1. It would seem to be of potential interest to develop an Orlicz norm based on the function h2 ≡ h(1 + ·) rather than the function h1. Motivated by the first inequality in Proposition 3.1, we define for each L > 0 a new Orlicz norm based on the function h2 as follows.
Since h2 is convex, h2(0) = 0, and h2 is increasing on [0, ∞), it follows that Ψ2(·; L) defines a valid Orlicz norm (as defined in Section 1) for each L:
Ψ2(x; L) ≡ exp((2/L²)h2(Lx)) − 1.    (3.2)
We call ‖X‖Ψ2(·; L) the Bennett-Orlicz norm of X. Note that with ψ(Lx) ≡ x⁻²(2/L²)h2(Lx), we can write Ψ2(x; L) = exp(x²ψ(Lx)) − 1.
We first relate Ψ2(·; L) to Ψ1(·; L) and to the usual Gaussian Orlicz norm defined by Ψ2(x) ≡ exp(x²) − 1.
Proposition 3.2
(i) Ψ2(x; L) ≤ exp(x²) − 1 = Ψ2(x) for all x ≥ 0.
(ii) Ψ2(x; L) ≥ Ψ1(x; L/3) for all x ≥ 0.
Proof
(i) follows since ψ(x) ≡ 2x⁻²h(1 + x) ≤ 1 for all x ≥ 0; see Shorack and Wellner (1986), Proposition 11.1.1, page 441. To show that (ii) holds, note that by Eq. 2.1
Thus the claimed inequality in (ii) is equivalent to
or equivalently
But the inequality in the last display holds in view of Eq. 2.5.
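Both parts of Proposition 3.2 amount to pointwise inequalities between exponential moduli, and can be spot-checked numerically; the sketch below (our own) uses the modulus forms Ψ1(x; L) = exp((2/L²)h1(Lx)) − 1 and Ψ2(x; L) = exp((2/L²)h2(Lx)) − 1:

```python
import math

h1 = lambda x: 1.0 + x - math.sqrt(1.0 + 2.0 * x)
h2 = lambda x: (1.0 + x) * math.log(1.0 + x) - x

def Psi1(x, L):   # Bernstein-Orlicz modulus
    return math.exp((2.0 / L ** 2) * h1(L * x)) - 1.0

def Psi2(x, L):   # Bennett-Orlicz modulus
    return math.exp((2.0 / L ** 2) * h2(L * x)) - 1.0

for L in [0.5, 1.0, 3.0]:
    for x in [0.1, 1.0, 2.0, 5.0]:
        assert Psi2(x, L) <= math.exp(x * x) - 1.0   # (i): below the Gaussian modulus
        assert Psi2(x, L) >= Psi1(x, L / 3.0)        # (ii): above Psi1 with L/3
```

Part (i) reduces to ψ(Lx) ≤ 1 and part (ii) to h2(u) ≥ 9h1(u/3), exactly as in the proof.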
Note that while h1 and Ψ1(·; L) have explicit inverses expressible in terms of square roots and logarithms, inverses of the functions h2 and Ψ2(·; L) can only be written in terms of Lambert’s function (also called the product log function) W satisfying W(z) exp(W(z)) = z; see Corless et al. (1996). But this slight difficulty is easily overcome by way of several nice inequalities for W. By use of W and the inequalities developed in the Appendix, Section 6, we obtain the following proposition concerning the inverse function Ψ2⁻¹(·; L).
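As a sketch of how W enters, here is a minimal Newton iteration for the principal branch W0 (our own implementation, adequate away from the branch point z = −1/e) together with a check of the formula h2⁻¹(y) = (y − 1)/W((y − 1)/e) − 1 from Lemma 6.1:

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch W0 of Lambert's W: W(z) exp(W(z)) = z, z >= -1/e."""
    w = math.log1p(z) if z > -0.3 else -0.5   # rough starting point
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))   # Newton step for w e^w - z
        w -= step
        if abs(step) < tol:
            break
    return w

h = lambda x: x * (math.log(x) - 1.0) + 1.0    # h on [1, oo)
h2 = lambda x: h(1.0 + x)

def h2_inv(y):
    """Inverse of h2 via W (Lemma 6.1); avoid y = 1 where the ratio is 0/0."""
    return (y - 1.0) / lambert_w((y - 1.0) / math.e) - 1.0

for y in [0.5, 1.5, 5.0, 50.0]:
    assert abs(h2(h2_inv(y)) - y) < 1e-8 * (1.0 + y)
```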
Proposition 3.3
for y ≥ 0.
- Furthermore, with W denoting the Lambert W function,
- If (L2/2) log(1 + y) ≥ 1, then
- If (L2/2) log(1 + y) ≥ 5, then
- If (L2/2) log(1 + y) ≤ 9/4, then
Proof
(i) follows immediately from Proposition 3.2. (ii) follows from the definition of Ψ2(·; L) and direct computation for the first part; the second part follows from Lemma 6.1. The inequality in (iii) follows from (ii) and Lemma 6.2. The first inequality in (iv) follows from (iii) since log(y − 1) ≥ (1/2) log y for y ≥ 4. The second inequality in (iv) follows by noting that
if L²/2 ≥ 1. (v) follows from (ii) and Lemma 6.3, part (iv).
Lemmas 2.1 and 2.2 by van de Geer and Lederer (2013) as stated in Section 2 should be compared with the development on page 96 of van der Vaart and Wellner (1996). We now show that the following analogues of Lemmas 2.1–2.3 hold for .
Lemma 3.1
Let τ ≡ ‖Z‖Ψ2(·; L) < ∞. Then for all t > 0,
P(|Z| ≥ (τ/L) h2⁻¹(L²t/2)) ≤ 2e^{−t},
where h2(x) ≡ h(1 + x) and h2⁻¹ is the inverse of h2 (so that h2(h2⁻¹(y)) = y for y ≥ 0).
Proof
Let y > 0. Since Ψ2(x; L) = exp((2/L²)h2(Lx)) − 1 = e^t − 1 implies h2(Lx) = L²t/2, it follows that for any c > τ we have
Lemma 3.2
Suppose that for some τ > 0 we have
Equivalently,
Then ‖Z‖Ψ2(·; L) ≤ √3 τ.
Proof
Let α, β > 0. We compute
Choosing this yields
Hence we conclude that .
Corollary 3.1
If X ~ Poisson(ν), then .
- If X1, …, Xn are i.i.d. Bernoulli(p), then
If X ~ N(0, 1), then for every L > 0. By taking the limit as L ↘ 0 and noting that Ψ2(z; L) → Ψ2(z) ≡ exp(z²) − 1 as L ↘ 0, this yields . In this case it is known that ‖X‖Ψ2 = √(8/3). (See van der Vaart and Wellner (1996), Exercise 2.2.1, page 105.)
Now for an inequality paralleling Lemma 2.3 for the Bernstein-Orlicz norm:
Lemma 3.3
Let τ and L be constants, and let Z1, …, Zm be random variables satisfying max1≤j≤m ‖Zj‖Ψ2(·; L) ≤ τ. Then
E[max1≤j≤m Zj] ≤ (τ/L) h2⁻¹((L²/2) log(1 + m)).
Furthermore,
for all m such that log(1 + m) ≥ 5 (or m ≥ e⁵ − 1).
Remark 3.1
The point of this last bound is that it gives an explicit trade-off between the Gaussian component (the term √(log(1 + m))) and the Poisson component (the term log(1 + m)/log log(1 + m)) governed by a Bennett type inequality. In contrast, the bounds obtained by van de Geer and Lederer (2013) yield a trade-off between the Gaussian world and the sub-exponential world governed by a Bernstein type inequality.
Proof
We write Ψ2,L ≡ Ψ2(·; L). Let c > τ. Then by Jensen’s inequality
Therefore,
| (3.3) |
| (3.4) |
The remaining claims follow from Proposition 3.3.
Here are analogues of Lemmas 4 and 5 of van de Geer and Lederer (2013).
Lemma 3.4
Let Z1, …, Zm be random variables satisfying
| (3.5) |
for some L and τ. Then, for all t > 0
Proof
For any a > 0 and t > 0, concavity of h2⁻¹ together with h2⁻¹(0) = 0 imply that
Therefore, by using a union bound and Lemma 3.1
Lemma 3.5
Let Z1, …, Zm be random variables satisfying (3.5). Then
Proof
Let
Then Lemma 3.4 implies that
Then the conclusion follows from Lemma 3.2.
4 Prokhorov’s “Arcsinh” Exponential Bound and Orlicz Norms
Another important exponential bound for sums of independent bounded random variables is due to Prokhorov (1959). As will be seen below, Prokhorov’s bound involves another function h4 (rather than h2 of Bennett’s inequality) given by
h4(x) ≡ (x/2)arcsinh(x/2) = (x/2) log(x/2 + √(1 + x²/4)).    (4.1)
Suppose that X1, …, Xn are independent random variables with E(Xj) = μj and |Xj − μj| ≤ b for some b > 0. Let Sn = X1 + ⋯ + Xn, and set V ≡ Var(X1) + ⋯ + Var(Xn). Prokhorov’s “arcsinh” exponential bound is as follows:
Proposition 4.1
(Prokhorov) If the Xj’s satisfy the above assumptions, then
P(Sn − E(Sn) ≥ z) ≤ exp(−(z/(2b)) arcsinh(bz/(2V))) for all z > 0.
Equivalently, with h4(x) ≡ (x/2)arcsinh(x/2),
P(Sn − E(Sn) ≥ z) ≤ exp(−(V/b²)h4(bz/V)).    (4.2)
See e.g. Prokhorov (1959), Stout (1974), de la Peña and Giné (1999), Johnson et al. (1985), & Kruglov (2006). Johnson et al. (1985) use Prokhorov’s inequality to control Orlicz norms for functions Ψ of the form Ψ(x) = exp(ψ(x)) with ψ(x) ≡ x log(1+x) and use the resulting inequalities to show that the optimal constants Dp in Rosenthal’s inequalities grow as p/log(p).
Kruglov (2006) gives an improvement of Prokhorov’s inequality which involves replacing h4 by
Note that Prokhorov’s inequality is of the same form as Bennett’s inequality (3.1) in Proposition 3.1, but with Bennett’s h2 replaced by Prokhorov’s h4.
Thus we want to compare Prokhorov’s inequality (and Kruglov’s improvement thereof) with Bennett’s inequality. As can be seen from the above development, this boils down to a comparison of the functions h2, h4, and h5; the following lemma makes a number of such comparisons and contrasts.
Lemma 4.1
(Comparison of h2, h4, and h5)
- (i)(a) h2(x) ≥ h5(x) ≥ h4(x) for all x ≥ 0.
- (i)(b) h2⁻¹(y) ≤ h5⁻¹(y) ≤ h4⁻¹(y) for all y ≥ 0.
- (ii)(a) h2(x) ≥ (x/2) log(1 + x) ≥ (x/2) log(1 + x/2) for all x ≥ 0.
- (ii)(b) h4(x) ≥ (x/2) log(1 + x/2) for all x ≥ 0.
- (ii)(c) h5(x) ≥ (x/2) log(1 + x/2) for all x ≥ 0.
- (iii)(a) h2(x) ~ 2⁻¹x² as x ↘ 0; h2(x) ~ x log(x) as x → ∞.
- (iii)(b) h4(x) ~ 4⁻¹x² as x ↘ 0; h4(x) ~ (1/2)x log(x) as x → ∞.
- (iii)(c) h5(x) ~ 4⁻¹x² as x ↘ 0; h5(x) ~ x log(x) as x → ∞.
- (iii)(d) h2(x) − h4(x) ~ x²/4 as x ↘ 0; h2(x) − h4(x) ~ (1/2)x log x as x → ∞.
- (iii)(e) h2(x) − h5(x) ~ x²/4 as x ↘ 0; h2(x) − h5(x) ~ log x as x → ∞.
- (iv)(a) h2(x) = 2⁻¹x²ψ2(x) where ψ2(x) ≡ 2h2(x)/x².
- (iv)(b) h4(x) = 4⁻¹x²ψ4(x) where ψ4(x) ≡ 4h4(x)/x² = (2/x)arcsinh(x/2).
- (iv)(c) h5(x) = 4⁻¹x²ψ5(x) where ψ5(x) ≡ 4h5(x)/x².
Proof
(i) We first prove that h2(x) ≥ h4(x). Let g(x) = h2(x) − h4(x); thus
Then g(0) = 0 and
also has g′(0) = 0. Note that and hence . Thus
| (4.3) |
and hence
and it suffices to show that the right side is ≥ 0 for all x. Thus we let
Let . Then and we compute
so that the numerator, j, is easily seen to be non-negative since (1 + x²)^{3/2} ≥ 1 + x² implies 2(1 + x²)^{3/2} ≥ 2(1 + x²) ≥ 1 + 2x for all x ≥ 0. Thus h2(x) ≥ h4(x).
Kruglov (2006) shows that h5(x) ≥ h4(x). Now we show that h2(x) ≥ h5(x). Note that with g(x) ≡ h2(x) − h5(x),
has g(0) = 0 and g′(x) ≥ 0 (as was shown above in (4.3)). Thus g(x) ≥ 0 for all x ≥ 0; that is, h2(x) ≥ h5(x).
-
(i)(b)
The inequalities for the inverse functions follow immediately from the inequalities for the functions themselves in (i)(a).
-
(ii)(a)To show that the first inequality holds, consider
Then g(0) = 0 and
Thus g′(0) = 0 and g″(x) = x/(2(1 + x)²) ≥ 0, so that g(x) ≥ 0 for all x ≥ 0. The second inequality in (ii)(a) is trivial. -
(ii)(b)
This follows easily from arcsinh(v) = log(v + √(1 + v²)) ≥ log(1 + v) for all v ≥ 0.
-
(ii)(c)
This follows from (i)(a) and (ii)(b).
-
(iii)(a)
This follows from ψ2(x) ≡ ψ(x) → 1 as x ↘ 0; see Proposition 11.1.1, page 441, Shorack and Wellner (1986).
-
(iii)(b)Now
with , and
with . Therefore
and -
(iii)(c)Now
where and is decreasing. Thus for some 0 ≤ x ≤ x* and we conclude that 4x−2h5(x) → 1 as x ≥ x* ↘ 0. -
(iv)(a)
The first part is a restatement of (ii)(a). The second part follows from Eq. 2.6: h2(x) = h(1 + x) ≥ 9h0(x/3) = x²/(2(1 + x/3)), and the claim follows by definition of ψ2.
-
(iv)(b)The first inequality is a restatement of (ii)(b). The second inequality follows since where is decreasing, so
To prove the third inequality, note that
holds if 1 + x2/8 ≥ c(1 + x2/4), or if 1 − c ≥ (x2/4)(c − 1/2). Then rearrange and take c = (1 − δ) for δ ∈ (0, 1/2). -
(iv)(c)The first inequality follows from (ii)(c). The second inequality follows by arguing as in (iv)(b), but now without the complicating second factor: note that
since is decreasing.
Discussion
Even though Kruglov’s inequality improves on Prokhorov’s inequality, (ia) of Lemma 4.1 shows that Bennett’s inequality dominates both Kruglov’s improvement of Prokhorov’s inequality and Prokhorov’s inequality itself: h2 ≥ h5 ≥ h4.
(ii) of Lemma 4.1 shows that all three of the inequalities, Bennett, Kruglov, and Prokhorov, are based on functions h2, h5, and h4 which are bounded below by (x/2) log(1 + x/2) for all x ≥ 0. On the other hand, (iii)(d) and (iii)(e) show that h2 and h5 are very nearly equivalent for large x, but that although h4 grows at the same x log x rate as h2 and h5, h4 is smaller by a multiplicative factor of 1/2 as x → ∞.
(iii)(a-c) of Lemma 4.1 shows that h2(x) ~ x²/2 as x ↘ 0 while hk(x) ~ x²/4 for both h5 and h4; thus h2(x) is larger near x = 0 by a factor of 2. Furthermore, the difference h2 − h4 is of order (1/2)x log x as x → ∞, while the difference h2 − h5 is only of order log x as x → ∞.
(iv) of Lemma 4.1 re-expresses the behavior of the Kruglov and Prokhorov inequalities for small values of x in terms of the corresponding ψk functions. The upshot of all of these comparisons is that Bennett’s inequality dominates both the Kruglov and Prokhorov inequalities. Figures 1–2 give graphical versions of these comparisons as well as comparisons to the Bernstein type h–functions h0 and h1.
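These comparisons are easy to confirm numerically for h2 and h4 in a short sketch of our own (h5 is omitted since Kruglov's function is not reproduced here):

```python
import math

h2 = lambda x: (1.0 + x) * math.log(1.0 + x) - x     # Bennett
h4 = lambda x: (x / 2.0) * math.asinh(x / 2.0)       # Prokhorov

for x in [0.01, 0.1, 1.0, 10.0, 1e3, 1e6]:
    assert h2(x) >= h4(x)                                  # Bennett dominates Prokhorov
    assert h4(x) >= (x / 2.0) * math.log(1.0 + x / 2.0)    # common lower bound, (ii)(b)
    assert h2(x) >= (x / 2.0) * math.log(1.0 + x)          # (ii)(a)

# at infinity h4 ~ (1/2) x log x while h2 ~ x log x, so the ratio tends to 1/2
print(h4(1e6) / h2(1e6))
```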
Figure 1.

Comparison of the functions h0, h1, h2, h4, and h5. The function h0 is plotted in magenta (tiny dashing), h1 in blue (medium dashing), h2 in red (no dashing), h4 in purple (large dashing), and h5 in black (medium dashing). For values of the argument larger than ≈ 1.4, h2 > h5 > h4 ≫ h1 > h0, while for values of the argument smaller than ≈ 1, h2 > h1 > h0 ≫ h5 > h4
Figure 2.

The plot depicts (with the same colors and dashing as in Fig. 1) the ratios x ↦ hk(x)/(x²/2) ≡ ψk(x) for k ∈ {0, 1, 2, 4, 5}. This figure illustrates our finding that the Prokhorov type h–functions are smaller by a factor of 1/2 at x = 0, while they again dominate the Bernstein type h–functions for larger values of x, with the cross-overs occurring again between 1 and 1.4
5 Comparisons with Some Results of Talagrand
Our goal in this section is to give comparisons with some results of Talagrand (1989, 1994), especially his Theorem 3.5, page 45, and Proposition 6.5, page 58.
Talagrand (1994) defines a function φL,S as follows:
Because of the square root on the log term, this can be regarded as corresponding to a “sub-Bennett” type exponential bound. One of the interesting properties of φL,S established by Talagrand (1994) is given in the following lemma:
Lemma 5.1
There is a number K(L) depending on L only such that
This is Lemma 3.6 of Talagrand (1994) page 47. Talagrand uses this Lemma to develop a Kiefer-type inequality: see also van der Vaart and Wellner (1996), Corollary A.6.3. In the basic Kiefer type inequality for Binomial random variables, van der Vaart and Wellner (1996), Corollary A.6.3, it follows that
for log(1/p) − 1 ≥ 11; i.e., for p ≤ e⁻¹².
A similar fact holds for any exponential bound of the Bennett type under a certain boundedness hypothesis. Suppose that
and that P (|Z| ≥ v) = 0 for all v ≥ C. Then, since ψ is decreasing, for z ≤ C
where the log term can be made arbitrarily large by choosing τ sufficiently small. Here the second inequality follows from the fact that
| (5.1) |
Proof of Eq. 5.1
Since ψ(x) = 2x⁻²h(1 + x), where h(x) = x(log x − 1) + 1, we can write
where both terms are clearly non-negative.
Now we consider another basic inequality due to Talagrand (1994). Suppose that
satisfies the following three properties:
C ⊂ D implies that θ(C) ≤ θ(D) for all C, D.
θ(C ∪ D) ≤ θ(C) + θ(D).
θ(C) ≤ |C| = #(C).
Then if X1, …, Xn are i.i.d. P non-atomic on and Z ≡ θ({X1, …, Xn}), for some universal constant K2 we have, for z ≥ K2E(Z),
As noted by Talagrand (1994), this follows from an isoperimetric inequality established in Talagrand (1989), but it is also a consequence of results of Talagrand (1991, 1995). Here we simply note that it can be rephrased as a Bennett type inequality: for all z ≥ K2E(Z)
This follows by simply checking that
for z ≥ K2E(Z).
Also see Ledoux (2001), Theorem 7.5, page 142 and Corollary 7.8, page 148; Massart (2000), and Boucheron et al. (2013), Theorem 6.12, page 182.
One further remark seems to be in order: Talagrand (1989), Theorem 2 and Proposition 12, shows that Orlicz norms of the Bennett type are “too large” to yield nice generalizations of the classical Hoffmann-Jørgensen inequality in the setting of sums of independent bounded sequences in a general Banach space. This follows by noting that Talagrand’s condition (2.11) fails for the Bennett-Orlicz norm Ψ2(·; L) as defined in Eq. 3.2.
Acknowledgments
I owe thanks to Evan Greene and Johannes Lederer for several helpful conversations and suggestions. Thanks are also due to Richard Nickl for a query concerning Prokhorov’s inequality.
Jon A. Wellner was supported in part by NSF Grants DMS-1104832 and DMS-1566514, and NIAID Grant 2R01 AI291968-04.
Appendix 1: Lambert’s Function W; Inverses of h and h2
Let h(x) ≡ x(log x − 1) + 1 and h2(x) ≡ h(1 + x) for x ≥ 0. The function h is convex, decreasing on [0, 1], increasing on [1, ∞), with h(1) = 0; see Shorack and Wellner (1986), page 439. The Lambert, or product log, function W (see e.g. Corless et al. (1996)) satisfies W(x)e^{W(x)} = x for x ≥ −1/e. As noted by Boucheron et al. (2013), problem 2.18, the inverse functions h⁻¹ (for the function h: [1, ∞) → [0, ∞)) and h2⁻¹ (for the function h2: [0, ∞) → [0, ∞)) can be expressed in terms of the function W. Here are some facts about W:
Fact 1
W: [−1/e, ∞) ↦ ℝ is multi-valued on [−1/e, 0) with two branches W0 and W−1, where W0(x) ≥ −1, W−1(x) ≤ −1, and W0(−1/e) = −1 = W−1(−1/e).
Fact 2
W0 is monotone increasing on [−1/e, ∞) with W0(0) = 0 and W0′(0) = 1.
See Roy and Olver (2010), section 4.13, page 111; and Corless et al. (1996).
In the following we simply write W for W0. The following lemma shows that the inverses of the functions h and h2 can be expressed in terms of W.
Lemma 6.1
(h and h2 inverses in terms of W)
- For y ≥ 0,
h⁻¹(y) = (y − 1)/W((y − 1)/e).    (6.1)
- For y ≥ 0,
h2⁻¹(y) = h⁻¹(y) − 1 = (y − 1)/W((y − 1)/e) − 1.    (6.2)
Proof
If h⁻¹ is as in the display (6.1), we have, since h(x) = x(log x − 1) + 1,
Thus Eq. 6.1 holds. Then Eq. 6.2 follows immediately.
In view of Lemma 6.1, the following lower bounds on the function W will be useful in deriving upper bounds on h−1 and .
Lemma 6.2
(A lower bound for W) For z > 0,
W(z) ≥ 2⁻¹ log(ez).    (6.3)
Proof
We first prove (6.3) for z ≥ 1/e. Since W(z) is increasing for z ≥ 0, the claimed inequality is equivalent to
for ez ≥ 1, where y ≡ (ez)^{1/2}. But then the last display is equivalent to
or
Now g(1) = 0, g(e) = 0, and g′(y) = 2y − e − e log y has g′(1) = 2 − e < 0, g′(e) = 0, and g′(y) > 0 for y > e; since g″(y) = 2 − e/y, we find that g″(e) = 2 − e/e = 1 > 0. Thus the claimed bound holds for z ≥ 1/e. For 0 < z < 1/e the bound holds trivially since W(z) ≥ 0 while 2⁻¹ log(ez) < 0.
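Lemma 6.2 can be spot-checked numerically with any implementation of W0; the Newton iteration below is our own sketch, valid for z > 0:

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch W0 of Lambert's W for z > 0, by Newton iteration."""
    w = math.log1p(z)            # rough starting point
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Lemma 6.2:  W(z) >= (1/2) log(e z)  for z > 0, with equality at z = e
for z in [0.5, 1.0, math.e, 10.0, 1e4]:
    assert lambert_w(z) >= 0.5 * math.log(math.e * z) - 1e-9

print(lambert_w(math.e))   # the equality point: W(e) = 1
```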
Combining Lemma 6.1 with the lower bound for W given in Lemma 6.2 yields the following upper bounds for h⁻¹ and h2⁻¹. The second and third parts of the following lemma are motivated by the fact that h2(x) = h(1 + x) ≡ (x²/2)ψ(x) where ψ(x) ↗ 1 as x ↘ 0; see Shorack and Wellner (1986), Proposition 11.1.1, page 441.
Lemma 6.3
(Upper bounds for h−1 and )
- For y > 1 + e,
h⁻¹(y) ≤ 2(y − 1)/log(y − 1).    (6.4)
- For y > 1 + e,
h2⁻¹(y) ≤ 2(y − 1)/log(y − 1) − 1.    (6.5)
- For 0 ≤ y ≤ 9c⁻²(c²/2 − 1)² with c > √2,
h2⁻¹(y) ≤ c√y.    (6.6)
In particular, with c = 2, the bound holds for 0 ≤ y ≤ 9/4, and with c = 2.2, the bound holds for 0 ≤ y ≤ 1 + e.
- For 0 < y < ∞,
(6.7)
Proof
- (6.4) follows from (i) of Lemma 6.1 together with Lemma 6.2. Note that g(x) ≡ x/log(x) ≥ e and g is increasing for x ≥ e.
- (6.5) follows from (ii) of Lemma 6.1 and Lemma 6.2.
- To show that Eq. 6.6 holds, note that the inequality is equivalent to h2(c√y) ≥ y, and hence, by taking x ≡ c√y, to the inequality
ψ(x) ≥ 2/c²,
where ψ(x) ≡ (2/x²)h(1 + x) ≥ 1/(1 + x/3) by Lemma 4.1 (iv)(a) (or by (10) of Proposition 11.1.1, Shorack and Wellner (1986), page 441). But then we have
ψ(x) ≥ 1/(1 + x/3) ≥ 2/c²,
where the last inequality holds if 0 ≤ x ≤ 3(c²/2 − 1). Hence the inequality in (iii) holds for 0 ≤ y ≡ x²/c² ≤ 9c⁻²(c²/2 − 1)². Finally, (iv) holds by combining the bounds in (ii) and (iii).
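Since h and h2 are increasing, the upper bounds (6.4) and (6.6) for the inverses are equivalent to forward inequalities for h and h2, which the sketch below (our own) checks numerically:

```python
import math

h = lambda x: x * (math.log(x) - 1.0) + 1.0
h2 = lambda x: h(1.0 + x)

# (6.4): h^{-1}(y) <= 2(y-1)/log(y-1) for y > 1+e; since h is increasing on
# [1, oo), this is equivalent to h(2(y-1)/log(y-1)) >= y.
for y in [1.0 + math.e + 0.01, 5.0, 20.0, 1e3]:
    x = 2.0 * (y - 1.0) / math.log(y - 1.0)
    assert h(x) >= y

# (6.6): h2^{-1}(y) <= c sqrt(y) on the stated range; with c = 2 the range is
# 0 <= y <= 9/4.  Equivalently h2(c sqrt(y)) >= y on that range.
c = 2.0
for y in [0.1, 1.0, 2.0, 9.0 / 4.0]:
    assert h2(c * math.sqrt(y)) >= y
```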
Appendix 2: General versions of Lemmas 1-5
Now consider Young functions of the form Ψ = e^ψ − 1 where ψ is assumed to be convex and nondecreasing with ψ(0) = 0. (Note that we have changed notation in this section: the functions h and hj for j ∈ {0, 1, 2, 4, 5} in Sections 1–6 are denoted here by ψ.) Our goal in this section is to give general versions of Lemmas 1–5 of van de Geer and Lederer (2013) and Section 3 above. The advantage of this formulation is that the resulting lemmas apply to all the special cases treated in Sections 2 and 3 and more.
Lemma 7.1
Suppose that τ ≡ ‖Z‖Ψ < ∞. Then for all t > 0
For the general version of Lemma 2 we consider a scaled version of Ψ as follows:
Ψ(x; L) ≡ exp((2/L²)ψ(Lx)) − 1.    (7.1)
Lemma 7.2
Suppose that for some τ > 0 and L > 0
Then ‖Z‖Ψ(·; L) ≤ √3 τ.
Lemma 7.3
Suppose that Ψ is non-decreasing, convex, with Ψ(0)=0. Suppose that Z1, …, Zm are random variables with max1≤j≤m ‖Zj‖Ψ≡τ <∞. Then
Lemma 7.4
Suppose that Ψ is non-decreasing, convex, with Ψ(0)=0. Suppose that Z1, …, Zm are random variables with max1≤j≤m ‖Zj‖Ψ≡τ <∞. Then
Lemma 7.5
Suppose that Ψ is non-decreasing, convex, with Ψ(0)=0. Suppose that Z1, …, Zm are random variables with max1≤j≤m‖Zj‖Ψ ≡τ <∞. Then
Proof of Lemma 7.1
For all c > ‖Z‖Ψ
Thus letting c ↘ τ yields
Proof of Lemma 7.2
Let α, β > 0. We compute
by choosing .
Proof of Lemma 7.3
Let c > τ. Then by Jensen’s inequality and convexity of Ψ
Letting c ↘ τ yields
Proof of Lemma 7.4
For any u > 0 and v > 0 concavity of ψ−1 implies that
Therefore, by using this with u = log(1 + m) and v = t, a union bound, and Lemma 7.1,
Proof of Lemma 7.5
By Lemma 7.4
so the hypothesis of Lemma 7.2 holds for
with appropriate choices of L and τ. Thus the conclusion of Lemma 7.2 holds for Z with these choices of L and τ.
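To see the generic lemmas in action, here is a tiny fully computable example (our own choice of distribution and modulus) of the Jensen-type bound E max1≤j≤m Zj ≤ τΨ⁻¹(m) that drives the proof of Lemma 7.3:

```python
import math
from itertools import product

# Z_1,...,Z_m i.i.d. Bernoulli(1/2), Psi(x) = e^x - 1.
# ||Z||_Psi solves (e^{1/c} - 1)/2 = 1, i.e. tau = 1/log 3.
tau = 1.0 / math.log(3.0)
m = 4

# exact E max over the 2^m equally likely outcomes
e_max = sum(max(omega) for omega in product([0, 1], repeat=m)) / 2 ** m

# tau * Psi^{-1}(m), with Psi^{-1}(t) = log(1 + t)
bound = tau * math.log(1.0 + m)

assert e_max <= bound
print(e_max, bound)
```

Here E max = 1 − 2⁻⁴ = 0.9375, comfortably below the Orlicz-norm bound.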
References
- Arcones MA, Giné E. On the law of the iterated logarithm for canonical U-statistics and processes. Stochastic Process Appl. 1995;58(2):217–245. [Google Scholar]
- Bennett G. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association. 1962;57:33–45. [Google Scholar]
- Birgé L, Massart P. Minimum contrast estimators on sieves: exponential bounds and rates of convergence. Bernoulli. 1998;4(3):329–375. [Google Scholar]
- Boucheron S, Lugosi G, Massart P. Concentration Inequalities. Oxford University Press; Oxford: 2013. [Google Scholar]
- Corless RM, Gonnet GH, Hare DEG, Jeffrey DJ, Knuth DE. On the Lambert W function. Adv Comput Math. 1996;5(4):329–359. [Google Scholar]
- De La Peña VH, Giné E. Probability and its Applications (New York) Springer-Verlag; New York: 1999. Decoupling; From dependence to independence. [Google Scholar]
- Dudley RM. Uniform Central Limit Theorems, volume 63 of Cambridge Studies in Advanced Mathematics. Cambridge University Press; Cambridge: 1999. [Google Scholar]
- Ghosh S, Goldstein L. Applications of size biased couplings for concentration of measures. Electron Commun Probab. 2011a;16:70–83. [Google Scholar]
- Ghosh S, Goldstein L. Concentration of measures via size-biased couplings. Probab Theory Related Fields. 2011b;149(1-2):271–278. [Google Scholar]
- Goldstein L, Iṡlak Ü. Concentration inequalities via zero bias couplings. Statist Probab Lett. 2014;86:17–23. [Google Scholar]
- Hewitt E, Stromberg K. Real and Abstract Analysis. Springer-Verlag; New York-Heidelberg: 1975. A modern treatment of the theory of functions of a real variable, Third printing, Graduate Texts in Mathematics, No. 25. [Google Scholar]
- Johnson WB, Schechtman G, Zinn J. Best constants in moment inequalities for linear combinations of independent and exchangeable random variables. Ann Probab. 1985;13(1):234–253. [Google Scholar]
- Krasnoseľskiĭ MA, Rutickiĭ JB. Convex Functions and Orlicz Spaces. Leo F. Boron. P. Noordhoff Ltd.; Groningen: 1961. Translated from the first Russian edition. [Google Scholar]
- Kruglov VM. Strengthening of Prokhorov’s arcsine inequality. Theor Probab Appl. 2006;50:677–684. Transl. from Strengthening the Prokhorov arcsine inequality, Teor. Veroyatn. Primen., 50, (2005). [Google Scholar]
- Ledoux M. The Concentration of Measure Phenomenon, volume 89 of Mathematical Surveys and Monographs. American Mathematical Society; Providence, RI: 2001. [Google Scholar]
- Massart P. About the constants in Talagrand’s concentration inequalities for empirical processes. Ann Probab. 2000;28(2):863–884. [Google Scholar]
- Pisier G. Banach spaces, harmonic analysis, and probability theory (Storrs, Conn., 1980/1981), volume 995 of Lecture Notes in Math. Springer; Berlin: 1983. Some applications of the metric entropy condition to harmonic analysis; pp. 123–154. [Google Scholar]
- Pollard D. Empirical Processes: Theory and Applications. Institute of Mathematical Statistics; Hayward CA: American Statistical Association; Alexandria, VA: 1990. (NSF-CBMS Regional Conference Series in Probability and Statistics, 2). [Google Scholar]
- Prokhorov YV. An extremal problem in probability theory. Theor Probability Appl. 1959;4:201–203. [Google Scholar]
- Rao MM, Ren ZD. Theory of Orlicz spaces, volume 146 of Monographs and Textbooks in Pure and Applied Mathematics. Marcel Dekker, Inc; New York: 1991. [Google Scholar]
- Roy R, Olver FWJ. NIST handbook of mathematical functions. U.S. Dept. Commerce; Washington, DC: 2010. Elementary functions; pp. 103–134. [Google Scholar]
- Shorack GR, Wellner JA. Empirical Processes with Applications to Statistics. John Wiley & Sons Inc.; New York: 1986. (Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics). [Google Scholar]
- Stout WF. Almost Sure Convergence. Vol. 24 Academic Press; New York-London: 1974. (Probability and Mathematical Statistics). [A subsidiary of Harcourt Brace Jovanovich, Publishers] [Google Scholar]
- Talagrand M. Isoperimetry and integrability of the sum of independent Banach-space valued random variables. Ann Probab. 1989;17(4):1546–1570. [Google Scholar]
- Talagrand M. Geometric aspects of functional analysis (1989–90), volume 1469 of Lecture Notes in Math. Springer; Berlin: 1991. A new isoperimetric inequality and the concentration of measure phenomenon; pp. 94–124. [Google Scholar]
- Talagrand M. Sharper bounds for Gaussian and empirical processes. Ann Probab. 1994;22(1):28–76. [Google Scholar]
- Talagrand M. Concentration of measure and isoperimetric inequalities in product spaces. Inst Hautes Études Sci Publ Math. 1995;81:73–205. [Google Scholar]
- Van De Geer S, Lederer J. The Bernstein-Orlicz norm and deviation inequalities. Probab Theory Related Fields. 2013;157(1-2):225–250. [Google Scholar]
- Van Der Vaart AW, Wellner JA. Weak Convergence and Empirical Processes. Springer-Verlag; New York: 1996. (Springer Series in Statistics). [Google Scholar]
