Skip to main content
NIHPA Author Manuscripts logoLink to NIHPA Author Manuscripts
. Author manuscript; available in PMC: 2017 Nov 1.
Published in final edited form as: Math Program. 2016 Mar 25;160(1):433–475. doi: 10.1007/s10107-016-0993-7

Approximating the Little Grothendieck Problem over the Orthogonal and Unitary Groups

Afonso S Bandeira *, Christopher Kennedy , Amit Singer
PMCID: PMC5110258  NIHMSID: NIHMS772861  PMID: 27867224

Abstract

The little Grothendieck problem consists of maximizing Σij Cijxixj for a positive semidef-inite matrix C, over binary variables xi ∈ {±1}. In this paper we focus on a natural generalization of this problem, the little Grothendieck problem over the orthogonal group. Given C ∈ ℝdn × dn a positive semidefinite matrix, the objective is to maximize ijtr(CijTOiOjT) restricting Oi to take values in the group of orthogonal matrices Od, where Cij denotes the (ij)-th d × d block of C.

We propose an approximation algorithm, which we refer to as Orthogonal-Cut, to solve the little Grothendieck problem over the group of orthogonal matrices Od and show a constant approximation ratio. Our method is based on semidefinite programming. For a given d ≥ 1, we show a constant approximation ratio of α(d)2, where α(d) is the expected average singular value of a d × d matrix with random Gaussian N(0,1d) i.i.d. entries. For d = 1 we recover the known α(1)2 = 2/π approximation guarantee for the classical little Grothendieck problem. Our algorithm and analysis naturally extends to the complex valued case also providing a constant approximation ratio for the analogous little Grothendieck problem over the Unitary Group Ud.

Orthogonal-Cut also serves as an approximation algorithm for several applications, including the Procrustes problem where it improves over the best previously known approximation ratio of 122. The little Grothendieck problem falls under the larger class of problems approximated by a recent algorithm proposed in the context of the non-commutative Grothendieck inequality. Nonetheless, our approach is simpler and provides better approximation with matching integrality gaps.

Finally, we also provide an improved approximation algorithm for the more general little Grothendieck problem over the orthogonal (or unitary) group with rank constraints, recovering, when d = 1, the sharp, known ratios.

Keywords: Approximation algorithms, Procrustes problem, Semidefinite programming

1 Introduction

The little Grothendieck problem [AN04] in combinatorial optimization is written as

maxxi{±1}i=1nj=1nCijxixj, (1)

where C is a n × n positive semidefinite matrix real matrix.

Problem (1) is known to be NP-hard. In fact, if C is a Laplacian matrix of a graph then (1) is equivalent to the Max-Cut problem. In a seminal paper in the context of the Max-Cut problem, Goemans and Williamson [GW95] provide a semidefinite relaxation for (1):

supmmaxXimXi2=1i=1nj=1nCijXiTXj. (2)

It is clear that in (2), one can take m = n. Furthermore, (2) is equivalent to a semidefinite program and can be solved, to arbitrary precision, in polynomial time [VB96]. In the same paper [GW95] it is shown that a simple rounding technique is guaranteed to produce a solution whose objective value is, in expectation, at least a multiplicative factor 2πmin0θπθ1cosθ0.878 of the optimum.

A few years later, Nesterov [Nes98] showed an approximation ratio of 2π for the general case of an arbitrary positive semidefinite C ⪰ 0 using the same relaxation as [GW95]. This implies, in particular, that the value of (1) can never be smaller than 2π times the value of (2). Interestingly, such an inequality was already known from the influential work of Grothendieck on norms of tensor products of Banach spaces [Gro96] (see [Pis11] for a survey on this).

Several more applications have since been found for the Grothendieck problem (and variants), and its semidefinite relaxation. Alon and Naor [AN04] showed applications to estimating the cut-norm of a matrix; Ben-Tal and Nemirovski [BTN02] showed applications to control theory; Briet, Buhrman, and Toner [BBT11] explored connections with quantum non-locality. For many more applications, see for example [AMMN05] (and references therein).

In this paper, we focus on a natural generalization of problem (1), the little Grothendieck problem over the orthogonal group, where the variables are now elements of the orthogonal group Od, instead of {±1}. More precisely, given C ∈ ℝdn × dn a positive semidefinite matrix, we consider the problem

maxO1,,OnOdi=1nj=1ntr(CijTOiOjT), (3)

where Cij denotes the (i, j)-th d × d block of C, and Od is the group of d × d orthogonal matrices (i.e., OOd if and only if OOT = OT O = Id×d).

We will also consider the unitary group variant, where the variables are now elements of the unitary group Ud (i.e., UUd if and only if UUH = UH U = Id×d). More precisely, given C ∈ ℂdn×dn a complex valued positive semidefinite matrix, we consider the problem

maxU1,,UnUdi=1nj=1ntr(CijHUiUjH). (4)

Since C is Hermitian positive semidefinite, the value of the objective function in (4) is always real. Note also that when d = 1, (3) reduces to (1). Also, since U1 is the multiplicative group of the complex numbers with unit norm, (4) recovers the classical complex case of the little Grothendieck problem. In fact, the work of Nesterov was extended [SZY07] to the complex plane (corresponding to U1, or equivalently, the special orthogonal group SO2) with an approximation ratio of π4 for C ⪰ 0. As we will see later, the analysis of our algorithm shares many ideas with the proofs of both [Nes98] and [SZY07] and recovers both results.

As we will see in Section 2, several problems can be written in the forms (3) and (4), such as the Procrustes problem [Sch66, Nem07, So11] and Global Registration [CKS15]. Moreover, the approximation ratio we obtain for (3) and (4) translates into the same approximation ratio for these applications, improving over the best previously known approximation ratio of 122 in the real case and 12 in the complex case, given by [NRV13] for these problems.

Problem (3) belongs to a wider class of problems considered by Nemirovski [Nem07] called QO-OC (Quadratic Optimization under Orthogonality Constraints), which itself is a subclass of QC-QP (Quadratically Constrainted Quadratic Programs). Please refer to Section 2 for a more detailed comparison with the results of Nemirovski [Nem07]. More recently, Naor et al. [NRV13] proposed an efficient rounding scheme for the non commutative Grothendieck inequality that provides an approximation algorithm for a vast set of problems involving orthogonality constraints, including problems of the form of (3) and (4). We refer to Section 1.2 for a comparison between this approach and ours.

Similarly to (2) we formulate a semidefinite relaxation we name the Orthogonal-Cut SDP:

supmmaxXiXiT=Id×dXid×mi=1nj=1ntr(CijTXiXiT). (5)

Analogously, in the unitary case, we consider the relaxation

supmmaxYiYiH=Id×dYid×mi=1nj=1ntr(CijHYiYjH). (6)

Since C is Hermitian positive semidefinite, the value of the objective function in (6) is guaranteed to be real. Note also that we can take m = dn as the Gram matrix [XiXjT]i,j does not have a rank constraint for this value of m. In fact, both problems (5) and (6) are equivalent to the semidefinite program

maxGKdn×dnGii=Id×d,G_0tr(CG), (7)

for K respectively ℝ and ℂ, which are generally known to be computationally tractable1 [VB96, Nes04, AHO98]. At first glance, one could think of problem (5) as having d2n variables and that we would have to take m = d2n for (5) to be tractable (in fact, this is the size of the SDP considered by Nemirovski [Nem07]). The savings in size (corresponding to number of variables) of our proposed SDP relaxation come from the group structure of Od (or Ud).

One of the main contributions of this paper is showing that Algorithm 3 (Section 1.1) gives a constant factor approximation to (3), and its unitary analog (4), with an optimal approximation ratio for our relaxation (Section 6). It consists of a simple generalization of the rounding in [GW95] applied to (5), or (4).

Theorem 1

Let C ⪰ 0 and real. Let V1,,VnOd be the (random) output of the orthogonal version of Algorithm 3. Then

E[i=1nj=1ntr(CijTViVjT)]α(d)2maxO1,,OnOdi=1nj=1ntr(CijTOiOjT),

where α(d) is the constant defined below.

Analogously, in the unitary case, if W1,,WnUd are the (random) output of the unitary version of Algorithm 3, then for C ⪰ 0 and complex,

E[i=1nj=1ntr(CijHWiWjH)]α(d)2maxU1,,UnUdi=1Nj=1ntr(CijHUiUjH),

where α(d) is defined below.

Definition 2

Let G ∈ ℝd×d and G ∈ ℂd×d be, respectively, a Gaussian random matrix with i.i.d real valued entries N(0,d1) and a Gaussian random matrix with i.i.d complex valued entries N(0,d1). We define

α(d):=𝔼[1dj=1dσj(G)]andα(d):=𝔼[1dj=1dσj(G)],

where σj(G) is the jth singular value of G.

Although we do not have a complete understanding of the behavior of α(d) and α(d) as functions of d, we can, for each d separately, compute a closed form expression (see Section 4). For d = 1 we recover the sharp α(1)2=2π and α(1)2=π4 results of, respectively, Nesterov [Nes98] and So et al. [SZY07]. One can also show that limdαK(d)2=(83π)2, for both K= and K=. Curiously,

α(1)2=2π<(83π)2<π4=α(1)2.

Our computations strongly suggest that α(d) is monotonically increasing while its complex analog α(d) is monotonically decreasing. We find the fact that the approximation ratio seems to get, as the dimension increases, better in the real case and worse in the complex case quite intriguing. One might naively think that the problem for a specific d can be formulated as a degenerate problem for a larger d, however this does not seem to be true, as evidenced by the fact that α2(d) is increasing. Another interesting point is that α(2) ≠ α(1) which suggests that the little Grothendieck problem over O2 is quite different from the analog in U1 (which is isomorphic to SO2). Unfortunately, we were unable to provide a proof for the monotonicity of αK(d) (Conjecture 8). Nevertheless, we can show lower bounds for both α2(d) and α2(d) that have the right asymptotics (see Section 4). In particular, we can show that our approximation ratios are uniformly bounded below by the approximation ratio given in [NRV13].

In some applications, such as the Common Lines problem [SS11] (see Section 5), one is interested in a more general version of (3) where the variables take values in the Stiefel manifold O(d,r), the set of matrices O ∈ ℝd×r such that OOT = Id×d. This motivates considering a generalized version of (3) formulated as, for r ≥ d,

maxO1,,OnO(d,r)i=1nj=1ntr(CijTOiOjT), (8)

for C ⪰ 0. The special case d = 1 was formulated and studied in [BBT11] and [BFV10] in the context of quantum non-locality and quantum XOR games. Note that in the special case r = nd, (8) reduces to (5) and is equivalent to a semidefinite program.

We propose an adaption of Algorithm 3, Algorithm 9, and show an approximation ratio of α(d, r)2, where α(d, r) is also defined as the average singular value of a Gaussian matrix (see Section 5). For d = 1 we recover the sharp results of Briet el al. [BFV10] giving a simple interpretation for the approximation ratios, as α(1, r) is simply the mean of a normalized chi-distribution with r degrees of freedom. As before, the techniques are easily extended to the complex valued case.

In order to understand the optimality of the approximation ratios α(d)2 and α(d)2 we provide an integrality gap for the relaxations (5) and (6) that matches these ratios, showing that they are tight. Our construction of an instance having this gap is an adaption of the classical construction for the d = 1 case (see, e.g., [AN04]). As it will become clear later (see Section 6), there is an extra difficulty in the d > 1 orthogonal case which can be dealt with using the Lowner-Heinz Theorem on operator convexity (see Theorem 13 and the notes [Car09]).

Besides the monotonicity of αK2(d) (Conjecture 8), there are several interesting questions raised from this work, including the hardness of approximation of the problems considered in this paper (see Section 7 for a discussion on these and other directions for future work).

Organization of the paper

The paper is organized as follows. In Section 1.1 below we present the approximation algorithm for (3) and (4). In Section 1.2, we compare our results with the ones in [NRV13]. We then describe a few applications in Section 2 and show the analysis for the approximation ratio guarantee in Section 3. In Section 4 we analyze the value of the approximation ratio constants. Section 5 is devoted to a more general, rank constrained, version of (4). We give an integrality gap for our relaxation in Section 6 and discuss open problems and future work in Section 7. Finally, we present supporting technical results in the Appendix.

1.1 Algorithm

We now present the (randomized) approximation algorithm we propose to solve (3) and (4).

Algorithm 3

Compute X1, …, Xn ∈ ℝd×nd (or Y1, …, Yn ∈ ℂd×nd) a solution to (5) (or (6)). Let R be a nd × d Gaussian random matrix whose entries are real (or complex) i.i.d. N(0,1d). The approximate solution for (3) (or (4)) is now computed as

Vi=P(XiR),

where P(X)=argminZOdZXF(orP(Y)=argminZUdZYF), for any X ∈ ℝd×d (or Y ∈ ℂd×d) and XF=tr(XXT)(YF=tr(YYH)) is the Frobenius norm.

Note that (5) and (6) can be solved with arbitrary precision in polynomial time [VB96] as they are equivalent to a semidefinite program (followed by a Cholesky decomposition) with a, respectively real and complex valued, matrix variable of size dn × dn, and d2n linear constraints. In fact, this semidefinite program has a very similar structure to the classical Max-Cut SDP. This may allow one to adapt specific methods designed to solve the Max-Cut SDP such as, for example, the row-by-row method [WGS12] (see Section 2.4 of [Ban15]).

Moreover, given X a d × d matrix (real or complex), the polar component P(X) is the orthogonal (or unitary) matrix part of the polar decomposition, that can be easily computed via the singular value decomposition of X = UΣVH as P(X)=UVH (see [FH55, Kel75, Hig86]), rendering Algorithm 3 efficient. The polar component P(X)=UVH is the analog in high dimensions of the sign in O1 and the angle in U1 and can also be written as P(X)=X(XHX)12.

1.2 Relation to non-commutative Grothendieck inequality

The approximation algorithm proposed in [NRV13] can also be used to approximate problems (3) and (4). In fact, the method in [NRV13] deals with problems of the form

supX,YONpqklMpqklXpqYkl, (9)

where M is a N × N × N × N real valued 4-tensor.

Problem (3) can be encoded in the form of (9) by taking N = dn and having the d × d block of M, obtained by having the first two indices range from (i − 1)d + 1 to id and the last two from (j − 1)d + 1 to jd, equal to Cij, and the rest of the tensor equal to zero [NRV13]. More explicitly, the nonzero entries of M are given by M(i−1)d+r,(i−1)d+r,(j−1)d+s,(j−1)d+s = [Cij]rs, for each i, j and r, s = 1, …, d. Since C is positive semidefinite, the supremum in (9) is attained at a pair (X, Y) such that X = Y.

In order to describe the relaxation one needs to first define the space of vector-valued orthogonal matrices ON(m)={XN×N×m:XXT=XTX=IN×N} where XXT and XTX are N × N matrices defined as (XXT)pq=k=1Nr=1mXpkrXqkr and (XTX)pq=k=1Nr=1mXkprXkqr.

The relaxation proposed in [NRV13] (which is equivalent to our relaxation when M is specified as above) is given by

supmsupU,VON(m)pqklMpqklUpqVkl, (10)

and there exists a rounding procedure [NRV13] that achieves an approximation ratio of 122. Analogously, in the unitary case, the relaxation is essentially the same and the approximation ratio is 12. We can show (see Section 4) that the approximation ratios we obtain are larger than these for all d ≥ 1. Interestingly, the approximation ratio of 12, for the complex case in [NRV13], is tight in the full generality of the problem considered in [NRV13], nevertheless α(d)2 is larger than this for all dimensions d.

Note also that to approximate (3) with this approach one needs to have N = dn in (10). This means that a naïve implementation of this relaxation would result in a semidefinite program with a matrix variable of size d2n2 × d2n2, while our approach is based on semidefinite programs with matrix variables of size dn × dn. It is however conceivable that when restricted to problems of the type of (3), the SDP relaxation (10) may enjoy certain symmetries or other properties that facilitate its solution.

2 Applications

Problem (3) can describe several problems of interest. As examples, we describe below how it encodes a complementary version of the orthogonal Procrustes problem and the problem of Global Registration over Euclidean Transforms. Later, in Section 5, we briefly discuss yet another problem, the Common Lines problem, that is encoded by a more general rank constrained version of (3).

2.1 Orthogonal Procrustes

Given n point clouds in ℝd of k points each, the orthogonal Procrustes problem [Sch66] consists of finding n orthogonal transformations that best simultaneously align the point clouds. If the points are represented as the columns of matrices A1, …, An, where Ai ∈ ℝd×k then the orthogonal Procrustes problem consists of solving

minO1,,OnOdi,j=1nOiTAiOjTAjF2. (11)

Since OiTAiOjTAjF2=AiF2+AjF22tr((AiAjT)TOiOjT), (11) has the same solution as the complementary version of the problem

maxO1,,OnOdi,j=1ntr((AiAjT)TOiOjT). (12)

Since C ∈ ℝdn×dn given by Cij=AiAjT is positive semidefinite, problem (12) is encoded by (3) and Algorithm 3 provides a solution with an approximation ratio guaranteed (Theorem 1) to be at least α(d)2.

The algorithm proposed in Naor et al. [NRV13] gives an approximation ratio of 122, smaller than α(d)2, for (12). As discussed above, the approach in [NRV13] is based on a semidefinite relaxation with a matrix variable of size d2n2 × d2n2 instead of dn × dn as in (5) (see Section 1.2 for more details).

Nemirovski [Nem07] proposed a different semidefinite relaxation (with a matrix variable of size d2n × d2n instead of dn × dn as in (5)) for the orthogonal Procrustes problem. In fact, his algorithm approximates the slightly different problem

maxO1,,OnOdijtr((AiAjT)TOiOjT), (13)

which is an additive constant (independent of O1, …, On) smaller than (12). The best known approximation ratio for this semidefinite relaxation, due to So [So11], is O(1log(n+k+d)). Although an approximation to (13) would technically be stronger than an approximation to (12), the two quantities are essentially the same provided that the point clouds are indeed perturbations of orthogonal transformations of the same original point cloud, as is the case in most applications (see [NRV13] for a more thorough discussion on the differences between formulations (12) and (13)).

Another important instance of this problem is when the transformations are elements of SO2 (the special orthogonal group of dimension 2, corresponding to rotations of the plane). Since SO2 is isomorphic to U1 we can encode it as an instance of problem (4), in this case we recover the previously known optimal approximation ratio of π4 [SZY07].

Note that, since all instances of problem (3) can be written as an instance of orthogonal Procrustes, the integrality gap we show (Theorem 14) guarantees that our approximation ratio is optimal for the natural semidefinite relaxation we consider for the problem.

2.2 Global Registration over Euclidean Transforms

The problem of global registration over Euclidean rigid motions is an extension of orthogonal Procrustes. In global registration, one is required to estimate the positions x1, …, xk of k points in ℝd and the unknown rigid transforms of n local coordinate systems given (perhaps noisy) measurements of the local coordinates of each point in some (though not necessarily all) of the local coordinate systems. The problem differs from orthogonal Procrustes in two aspects: First, for each local coordinate system, we need to estimate not only an orthogonal transformation but also a translation in ℝd. Second, each point may appear in only a subset of the coordinate systems. Despite those differences, it is shown in [CKS15] that global registration can also be reduced to the form (3) with a matrix C that is positive semidefinite.

More precisely, denoting by Pi the subset of points that belong to the i-th local coordinate system (i = 1 … n), and given the local coordinates

xj(i)=OiT(xlti)+ξil

of point xlPi (where Oi denotes an unknown orthogonal transformation, ti an unknown translation and ξil a noise term). The goal is to estimate the global coordinates xl. The idea is to minimize the function

ϕ=i=1nlPixl(Oixl(i)+ti)2,

over xl, ti ∈ ℝd, OiOd. It is not difficult to see that the optimal xl and ti can be written in terms of O1, …, On. Substituting them back into ε, the authors in [CKS15] reduce the previous optimization to solving

maxOiOdi=1nj=1ntr([BLBT]ijOiOjT), (14)

where L is a certain (n + k) × (n + k) Laplacian matrix, L is its pseudo inverse, and B is a (dn) × (n + k) matrix (see [CKS15]). This means that BLBT ⪰ 0, and (14) is of the form of (3).

3 Analysis of the approximation algorithm

In this Section we prove Theorem 1. As (5) and (6) are relaxations of, respectively, problem (3) and problem (4) their maximums are necessarily at least as large as the ones of, respectively, (3) and (4). This means that Theorem 1 is a direct consequence of the following theorem.

Theorem 4

Let C ⪰ 0 and real. Let X1, …, Xn be a feasible solution to (5). Let V1,,VnOd be the output of the (random) rounding procedure described in Algorithm 3. Then

𝔼[i=1nj=1ntr(CijTViVjT)]α(d)2i=1nj=1ntr(CijTXiXjT),

where α(d) is the constant in Definition 2. Analogously, if C ⪰ 0 and complex and Y1, …, Yn is a feasible solution of (6) and W1,,WnUd the output of the (random) rounding procedure described in Algorithm 3. Then

E[i=1nj=1ntr(CijHWiWjH)]α(d)2i=1nj=1ntr(CijTYiYjH),

where α(d) is the constant in Definition 2.

In Section 6 we show that these ratios are optimal (Theorem 14).

Before proving Theorem 4 we present a sketch of the proof for the case d = 1 (and real). The argument is known as the Rietz method (See [AN04])2:

Let X1, …, Xn ∈ ℝn be a feasible solution to (5), meaning that XiXiT=1. Let R ∈ ℝn×1 be a random matrix with i.i.d. standard Gaussian entries. Our objective is to compare E[i,jnCijsign(XiR)sign(XjR)] with i,jnCijXiXjT. The main observation is that although E[sign(XiR)sign(XjR)] is not a linear function of XiXjT, the expectation E[sign(XiR)XjR] is. In fact E[sign(XiR)XjR]=α(1)XiXjT=2πXiXjT — which follows readily by thinking of Xi and Xj as vectors in the two dimensional plane that they span. We use this fact (together with the positiveness of C) to show our result. The idea is to build the matrix S ⪰ 0,

Sij=(XiRπ2sign(XiR))(XjRπ2sign(XjR)).

Since both C and S are PSD, tr(CS) ≥ 0, which means that

0E[ijCij(XiRπ2sign(XiR))(XjRπ2sign(XjR))].

Combining this with the observation above and the fact that E[XiRXjR]=XiXjT, we have

Ei,jnCijsign(XiR)sign(XjR)2πi,jnCijXiXjT.

Proof

[of Theorem 4] For the sake of brevity we restrict the presentation of the proof to the real case. Nevertheless, it is easy to see that all the arguments trivially adapt to the complex case by, essentially, replacing all transposes with Hermitian adjoints and α(d) with α(d).

Let R ∈ ℝnd×d be a Gaussian random matrix with i.i.d entries N(0,1d). We want to provide a lower bound for

E[i=1nj=1ntr(CijTViVjT)]=E[i=1nj=1ntr(CijTP(UiR)P(UjR)T)].

Similarly to the d = 1 case, one of the main ingredients of the proof is the fact given by the lemma below.

Lemma 5

Let rd. Let M, N ∈ ℝd×nd such that MMT = NNT = Id×d. Let R ∈ ℝnd×d be a Gaussian random matrix with real valued i.i.d entries N(0,1d). Then

E[P(MR)(NR)T]=E[(MR)P(NR)T]=α(d)MNT,

where α(d) is constant in Definition 2.

Analogously, if M, N ∈ ℂd×nd such that MMH = NNH = Id×d, and R ∈ ℂnd×r is a Gaussian random matrix with complex valued i.i.d entries N(0,1d), then

E[P(MR)(NR)H]=E[(MR)P(NR)H]=α(d)MNH,

where α(d) is constant in Definition 2.

Before proving Lemma 5 we use it to finish the proof of Theorem 4.

Just as above, we define the positive semidefinite matrix S ∈ ℝdn×dn whose (i, j)-th block is given by

Sij=(UiRα(d)1P(UiR))(UjRα(d)1P(UjR))T

We have ESij =

=E[UiR(UjR)Tα(d)1P(UiR)(UjR)Tα(d)1UiRP(UjR)T+α(d)2P(UiR)P(UjR)T]=UiE[RRT]UjTα(d)1E[P(UiR)(UjR)T]α(d)1E[UiRP(UjR)T]+α(d)2E[ViVjT]=UiUjTUiUjTUiUjT+α(d)2E[ViVjT]=α(d)2E[ViVjT]UiUjT.

By construction S ⪰ 0. Since C ⪰ 0, tr(CS) ≥ 0, which means that

0E[tr(CS)]=tr(CE[S])=i=1nj=1ntr(CijT(α(d)2E[ViVjT]UiUjT)).

Thus,

E[i=1nj=1ntr(CijTViVjT)]α(d)2i=1nj=1ntr(CijTUiUjT).

      □

We now present and prove an auxiliary lemma, needed for the proof of Lemma 5.

Lemma 6

Let G be a d × d Gaussian random matrix with real valued i.i.d. N(0,1d) entries and let α(d) as defined in Definition 2. Then,

E(P(G)GT)=E(GP(G)T)=α(d)Id×d.

Furthermore, if G is a d × d Gaussian random matrix with complex valued i.i.d. N(0,1d) entries and α(d) the analogous constant (Definition 2), then

E(P(G)GH)=E(GP(G)H)=α(d)Id×d.

Proof

We restrict the presentation to the real case. All the arguments are equivalent to the complex case, replacing all transposes with Hermitian adjoints and α(d) with α(d).

Let G = UΣVT be the singular value decomposition of G. Since GGT = UΣ2UT is a Wishart matrix, it is well known that its eigenvalues and eigenvectors are independent and U is distributed according to the Haar measure in Od (see e.g. Lemma 2.6 in [TV04]). To resolve ambiguities, we consider Σ ordered such that Σ11 ≥ Σ22 ≥ … ≥ Σdd.

Let Y=P(G)GT. Since

P(G)=P(UVT)=UId×dVT,

we have

Y=P(UVT)(UVT)T=UId×dVTVUT=UUT.

Note that GP(G)T=UUT=Y.

Denoting u1, …, ud the rows of U, since U is distributed according to the Haar measure, we have that uj and −uj have the same distribution conditioned on Σ and ui, for any ij. This implies that if ij, Yij=uiujT is a symmetric random variable, and so EYij=0. Also, ui ~ uj implies that Yii ~ Yjj. This means that EY=cId×d for some constant c. To obtain c,

c=c1dtr(Id×d)=1dEtr(Y)=1dEtr(UUT)=1dEk=1nσk(G)=α(d),

which shows the lemma.

Proof

[of Lemma 5] We restrict the presentation of proof to the real case. Nevertheless, as before, all the arguments trivially adapt to the complex case by, essentially, replacing all transposes with Hermitian adjoints and α(d) with α(d).

Let A = [MT NT] ∈ ℝdn×2d and A = QB be the QR decomposition of A with Q ∈ ℝnd×nd an orthogonal matrix and B ∈ ℝnd×2d upper triangular with non-negative diagonal entries; note that only the first 2d rows of B are nonzero. We can write

QTA=B=[B11B120dB220d0d0d0d]dn×2d,

where B11 ∈ ℝd×d and B22 ∈ ℝd×d are upper triangular matrices with non-negative diagonal entries. Since

(QTMT)11T(QTMT)11=(QTMT)T(QTMT)=MQQTMT=MInd×ndMT=MMT=Id×d,

B11 = (QTMT)11 is an orthogonal matrix, which together with the non-negativity of the diagonal entries (and the fact that B11 is upper-triangular) forces B11 to be B11 = Id×d.

Since R is a Gaussian matrix and Q is an orthogonal matrix, QR ~ R which implies

E[P(MR)(NR)T]=E[P(MQR)(NQR)T].

Since MQ=[B11T,0d×d,,0d×d]=[Id×d,0d×d,,0d×d] and NQ=[B12T,B22T,0d×d,,0d×d],

E[P(MR)(NR)T]=E[P(R1)(B12TR1+B22TR2)T],

where R1 and R2 are the first two d × d blocks of R. Since these blocks are independent, the second term vanishes and we have

E[P(MR)(NR)T]=E[P(R1)R1T]B12.

The Lemma now follows from using Lemma 6 to obtain E[P(R1)R1T]=α(d)Id×d and noting that B12 = (QTMT) (QTNT) = MNT.

The same argument, with Q’B’ the QR decomposition of A’ = [NTMT] ∈ ℝdn×2d instead, shows

E[(MR)P(NR)T]=E[R1P(R1)T]MNT=α(d)MNT.

       □

4 The approximation ratios α(d)2 and α(d)2

The approximation ratio we obtain (Theorem 1) for Algorithm 3 is given, in the orthogonal case, by α(d)2 and, in the unitary case, by α(d)2. α(d) and α(d) are defined as the average singular value of a d × d Gaussian matrix G with, respectively real and complex valued, i.i.d N(0,1d) entries. These singular values correspond to the square root of the eigenvalues of a Wishart matrix W = GGT, which are well-studied objects (see, e.g., [She01] or [CD11]).

For d = 1, this corresponds to the expected value of the absolute value of standard Gaussian (real or complex) random variable. Hence,

α(1)=2πandα(1)=π4,

meaning that, for d = 1, we recover the approximation ratio of 2π, of Nesterov [Nes98] for the real case, and the approximation ratio of π4 of So et al. [SZY07] in the complex case.

For any d ≥ 1, the marginal distribution of an eigenvalue of the Wishart matrix W = GGT is known [LV11, CD11, Lev12] (see Section B). Denoting by pd(K) the marginal distribution for K= and K=, we have

αK(d)=1d1/20x1/2pd(K)(x)dx. (15)

In the complex valued case, pd()(x) can be written in terms of Laguerre polynomials [CD11, Lev12] and α(d) is given by

α(d)=d3/2n=0d10x1/2exLn(x)2dx, (16)

Where Ln(x) is the nth Laguerre polynomial. In Section B we give a lower bound to (16). The real case is more involved [LV11], nevertheless we are able to provide a lower bound for α(d) as well.

Theorem 7

Consider α(d) and α(d) as defined in (2). The following holds,

α(d)83π5.05dandα(d)83π9.07d.

Proof

These bounds are a direct consequence of Lemmas 20 and 21.

One can easily evaluate limdαK(d) (without using Theorem 7) by noting that the distribution of the eigenvalues of the Wishart matrix we are interested in, as d → ∞, converges in probability to the Marchenko-Pastur distribution [She01] with density

mp(x)=12πxx(4x)1[0,4],

for both K= and K=. This immediately gives,

limdαK(d)=04x12πxx(4x)dx=83π.

We note that one could also obtain lower bounds for αK2(d) from results on the rate of convergence to mp(x) [GT11]. However this approach seems to not provide bounds with explicit constants and to not be as sharp as the approach taken in Theorem 7.

For any d, the exact value of αK(d) can be computed, by (15), using Mathematica (See table below). Figure 1 plots these values for d = 1, …, 44. We also plot the bounds for the real and complex case obtained in Theorem 7, and the approximation ratios obtained in [NRV13], for comparison.

Figure 1.

Figure 1

Plot showing the computed values of αK(d)2, for d ≤ 44, the limit of αK(d)2 as d → ∞, the lower bound for αK(d)2 given by Theorem 7 as function of d, and the approximation ratio of 122 and 12 obtained in [NRV13].

d α(d) α(d) α(d) ≈ α(d)2 α(d) ≈ α(d)2

1
2π
π2
0.7979 0.6366 0.8862 0.7854
2
2214π
11π216
0.8102 0.6564 0.8617 0.7424
3
22+3π63π
107π3128
0.8188 0.6704 0.8554 0.7312
83π
83π
0.8488 0.7205 0.8488 0.7205

The following conjecture is suggested by our analysis and numerical computations.

Conjecture 8

Let α(d) and α(d) be the average singular value of a d × d matrix with random i.i.d., respectively real valued and complex valued, N(0,1d) entries (see Definition 2). Then, for all d ≥ 1,

α(d+1)α(d)andα(d+1)α(d),

5 The little Grothendieck problem over the Stiefel manifold

In this section we focus on a generalization of (3), the little Grothendieck problem over the Stiefel manifold O(d,r), the set of matrices O ∈ ℝd×r such that OOT = Id×d. In this exposition we will restrict ourselves to the real valued case but it is easy to see that the ideas in this Section easily adapt to the complex valued case.

We consider the problem

maxO1,,OnO(d,r)i=1nj=1ntr(CijTOiOjT), (17)

for C ⪰ 0. The special case d = 1 was formulated and studied in [BBT11] and [BFV10] in the context of quantum non-locality and quantum XOR games.

Note that, for r = d, problem (17) reduces to (3) and, for r = nd, it reduces to the tractable relaxation (5). As a solution to (3) can be transformed, via zero padding, into a solution to (17) with the same objective function value, Algorithm 3 automatically provides an approximation ratio for (17), however we want to understand how this approximation ratio can be improved using the extra freedom (in particular, in the case r = nd, the approximation ratio is trivially 1). Below we show an adaptation of Algorithm 3, based on the same relaxation (5), for problem (17) and show an improved approximation ratio.

Algorithm 9

Compute X1, …, Xn ∈ ℝd×nd a solution to (5). Let R be and × r Gaussian random matrix whose entries are real i.i.d. N(0,1r). The approximate solution for (17) is now computed as

Vi=P(d,r)(XiR),

where P(d,r)(X)=argminZO(d,r)ZXF, for any X ∈ ℝd×r, is a generalization of the polar component to the Stiefel manifold O(d,r).

Below we show an approximation ratio for Algorithm 9.

Theorem 10

Let C ⪰ 0. Let V1,,VnO(d,r) be the (random) output of Algorithm 9. Then,

E[i=1nj=1ntr(CijTViVjT)]α(d,r)2maxO1,,OnO(d,r)i=1nj=1ntr(CijTOiOjT),

where α(d, r) is the defined below (Definition 11).

Definition 11

Let rd and G ∈ ℝd×r be a Gaussian random matrix with i.i.d real entri N(0,1r). We define

α(d,r):=E[1dj=1dσj(G)],

where σj(G) is the jth singular value of G.

We investigate the limiting behavior of α(d, r) as r → ∞ and as r, d → ∞ at a proporitional rate in Section 6.2.

The proof of Theorem 10 follows the same line of reasoning as that of Theorem 1 (and Theorem 4). We do not provide the proof in full, but state and prove Lemmas 17 and 18 in the Appendix, which are the analogous, to this setting, of Lemmas 6 and 5.

Besides the applications, for d = 1, described in [BBT11] and [BFV10], Problem (17) is also motivated by an application in molecule imaging, the common lines problem.

5.1 The common lines problem

The common lines problem arises in three-dimensional structure determination of biological molecules using Cryo-Electron Microscopy [SS11], and can be formulated as follows. Consider n rotation matrices O1,,OnSO3. The three columns of each rotation matrix form a orthonormal basis to ℝ3. In particular, the first two columns of each rotation matrix span a two-dimensional subspace (a plane) in ℝ3. We assume that no two planes are parallel. Every pair of planes intersect at a line, called the common-line of intersection. Let bij ∈ ℝ3 be a unit vector that points in the direction of the common-line between the planes corresponding to Oi and Oj. Hence, there exist unit vectors cij and cji with vanishing third component (i.e., cij = (xij, yij, 0)T) such that Oicij = Ojcji = bij. The common lines problem consists of estimating the rotation matrices O1, …, On from (perhaps noisy) measurements of the unit vectors cij and cji. The least-squares formulation of this problem is equivalent to

maxO1,,OnSO3i,j=1ntr(cjicijTOiTOj) (18)

However, since cij has zero in the third coordinate, the common-line equations Oicij = Ojcji do not involve the third columns of the rotation matrices. The optimization problem (18) is therefore equivalent to

maxO1T,,OnTO(2,3)i,j=1ntr(Π(cji)Π(cij)TOiTOj), (19)

where Π: ℝ3 → ℝ2 is a projection discarding the third component (i.e., Π(x, y, z) = (x, y)) and OiTO(2,3). The coefficient matrix in (19), Cij = Π(cij)Π(cji)T, is not positive semidefinite. However, one can add a diagonal matrix with large enough values to it in order to make it PSD. Although this does not affect the solution of (19) it does increase its function value by a constant, meaning that the approximation ratio obtained in Theorem 10 does not directly translate into an approximation ratio for Problem (19); see Section 7 for a discussion on extending the results to the non positive semidefinite case.

5.2 The approximation ratio α(d, r)2

In this Section we attempt to understand the behavior of α(d, r)2, the approximation ratio obtained for Algorithm 9. Recall that α(d, r) is defined as the average singular value of G ∈ ℝd×r, a Gaussian random matrix with i.i.d. entries N(0,1r).

For d = 1 this simply corresponds to the average length of a Gaussian vector in ℝr with i.i.d. entries N(0,1r). This means that α(1, r) is the mean of a normalized chi-distribution,

α(1,r)=2rΓ(r+12)Γ(r2).

In fact, this corresponds to the results of Briet el al [BFV10], which are known to be sharp [BFV10].

For d > 1 we do not completely understand the behavior of α(d, r), nevertheless it is easy to provide a lower bound for it by a function approaching 1 as r → ∞.

Proposition 12

Consider α(d, r) as in Definition 11. Then,

α(d,r)1dr. (20)
Proof

Gordon’s theorem for Gaussian matrices (see Theorem 5.32 in [Ver12]) gives us

Esmin(G)1dr,

where smin(G) is the smallest singular value. The bound follows immediately from noting that the average singular value is larger than the expected value of the smallest singular value.

As we are bounding α(d, r) by the expected value of the smallest singular value of a Gaussian matrix, we do not expect (20) to be tight. In fact, for d = 1, the stronger α(1,r)1O(1r) bound holds [BFV10].

Similarly to α(d), we can describe the behavior of α(d, r) in the limit as d → ∞ and rdρ. More precisely, the singular values of G correspond to the square root of the eigenvalues of the Wishart matrix [CD11] GGT~Wd(1r,r). Let us set r = ρd, for ρ ≥ 1. The distribution of the eigenvalues of a Wishart matrix Wd(1ρd,d), as d → ∞ are known to converge to the Marchenko Pastur distribution (see [CD11]) given by

dν(x)=12π((1+λ)2x)(x(1λ)2)λx1[(1λ)2,(1+λ)2]dx,

where λ=1ρ.

Hence, we can define ε(ρ) as

ϕ(ρ):=limdα(d,ρd)=(11ρ)2(1+1ρ)2x12π((1+1ρ)2x)(x(11ρ)2)1ρxdx.

Although we do not provide a closed form solution for ε(ρ) the integral can be easily computed numerically and we plot it below. It shows how the approximation ratio improves as ρ increases.

6 Integrality Gap

In this section we provide an integrality gap for relaxation (5) that matches our approximation ratio α(d)2. For the sake of the exposition we will restrict ourselves to the real case, but it is not difficult to see that all the arguments can be adapted to the complex case.

Our construction is an adaption of the classical construction for the d = 1 case (see, e.g., [AN04]). As it will become clear below, there is an extra difficulty in the d > 1 orthogonal case. In fact, the bound on the integrality gap of (5) given by this construction is α(d)2, defined as

α(d)=maxDdiagonalDF2=d,D_0E1di=1dσi(GD), (21)

where G is a Gaussian matrix with i.i.d. real entries N(0,1d).

Fortunately, using the notion of operator concavity of a function and the Lowner-Heinz Theorem [Car09], we are able to show the following theorem.

Theorem 13

Let d ≥ 1. Also, let α(d) be as defined in Definition 2 and α(d) as defined in (21). Then,

α(d)=α(d).

Proof

We want to show that

maxDdiagonalDF2=dD_0Ei=1σi(GD)=Ei=1σi(G),

where G is a d × d matrix with i.i.d. entries N(0,1d). By taking V = D2, and recalling the definition of singular value, we obtain the following claim (which immediately implies Theorem 13)

Claim 6.1

maxVdiagonaltr(V)=dV_0E tr[(GVGT)12]=E tr[(GGT)12].

Proof

We will proceed by contradiction, suppose (6.1) does not hold. Since the optimization space is compact and the function continuous it must have a maximum that is attained by a certain VId×d. Out of all maximizers V, let V(*) be the one with smallest possible Frobenius norm. The idea will be to use concavity arguments to build an optimal V(card) with smaller Frobenius norm, arriving at a contradiction and hence showing the theorem.

Since V(*) is optimal we have

E tr[(GV()GT)12]=α(d).

Furthermore, since V(*)Id×d, it must have two different diagonal elements. Let V(**) be a matrix obtained by swapping, in V(*), two of its non-equal diagonal elements. Clearly, ‖V(**)F = ‖V(*)F and, because of the rotation invariance of the Gaussian, it is easy to see that

E tr[(GV()GT)12]=α(d).

Since V(*) ⪰ 0, these two matrices are not multiples of each other and so

V(card)=V()+V()2,

has a strictly smaller Frobenius norm than V(*). It is also clear that V(card) is a feasible solution. We conclude the proof by showing

E tr[(GV(card)GT)12]12(E tr[(GV()GT)12]+E tr[(GV()GT)12])=α(d). (22)

By linearity of expectation and construction of V(card), (22) is equivalent to

E[tr[(GV()GT+GV()GT2)12]12(tr[(GV()GT)12]+tr[(GV()GT)12])]0.

This inequality follows from the stronger statement: Given two d × d matrices A ⪰ 0 and B ⪰ 0, the following holds

(A+B2)12A12+B122_0. (23)

Finally, (23) follows from the Lowner-Heinz Theorem, which states that the square root function is operator concave (See these lecture notes [Car09] for a very nice introduction to these inequalities).

Theorem 13 guarantees the optimality of the approximation ratio obtained in Section 3. In fact, we show the theorem below.

Theorem 14

For any d ≥ 1 and any ɛ > 0, there exists n for which there exists C ∈ ℝdn×dn such that C ⪰ 0, and

maxO1,,OnOdi=1nj=1ntr(CijTOiOjT)maxXiXiT=Id×dXid×dni=1nj=1ntr(CijTXiXjT)α(d)2+ε. (24)

We will construct C randomly and show that it satisfies (24) with positive probability. Given p an integer we consider n i.i.d. matrix random variables Vk, with k = 1, …, n, where each Vk is a d × dp Gaussian matrix whose entries are N(0,1dp). We then define C as the random matrix with d × d blocks Cij=1n2ViVjT. The idea now is to understand the typical behavior of both

wr=maxXiXiT=Id×dXid×dni=1nj=1ntr(CijTXiXjT) and wc=maxO1,,OnOdi=1nj=1ntr(CijTOiOjT).

For wc, we can rewrite

wc=maxO1,,OnOd1n2i,jtr((ViVjT)TOiOjT)=maxO1,,OnOd1ni=1nOiTViF2.

If

D=1ni=1nOiTVi1ni=1nOiTViF

then wc=i=1ntr(maxOiOdOiTViDT)=i=1ntr(P(ViDT)TViDT). The idea is that, given a fixed (direction unit frobenius-norm matrix) D, i=1ntr(P(ViDT)TViDT) converges to the expected value of one of the summands and, by an ε-net argument (since the dimension of the space where D is depends only on d and p and the number of summands is n which can be made much larger than d and p) we can argue that the sum is close, for all Ds simultaneously, to that expectation. It is not hard to see that we can assume that D=1d[D0] where D is diagonal and non-negative d × d matrix with DF2=d. In that case (see (21)),

E tr(P(ViDT)TViDT)=E1pdk=1dσk(GD)dpα(d),

where G is a Gaussian matrix with i.i.d. real entries N(0,1d). This, together with Theorem 13, gives E tr(P(ViDT)TViDT)dpα(d). All of this is made precise in the following lemma

Lemma 15

For any d and ɛ > 0 there exists p0 and n0 such that, for any p > p0 and n > n0,

maxO1,,OnOd1ni=1nOiTViF2dpα(d)2+ε,

with probability strictly larger than 1/2.

Proof

Let us define

A(V):=maxO1,,OnOd1ni=1nOiTViF.

We have

A(V)=maxDd×pd:DF=1maxO1,,OnO(d)tr(1ni=1nOiTViDT)=maxDd×pd:DF=11ni=1nmaxOiO(d)tr(OiTViDT)=maxDd×pd:DF=11ni=1ntr(P(ViDT)TViDT).

For D with ‖DF = 1, we define

AD(V)=1ni=1ntr(P(ViDT)TViDT).

We proceed by understanding the behavior of AD(V) for a specific D.

Let D=UL[0]URT, where Σ is a d × d non-negative diagonal matrix, be the singular value decomposition of D. For each i = 1, …, n, we have (using rotation invariance of the Gaussian distribution):

tr(P(ViDT)TViDT)~tr(P(Vi(UL[0]UR)T)TVi(UL[0]UR)T)~tr(ULP(Vi[0])TVi[0]ULT)~tr(P(Vi[0])TVi[0])~1dptr(P(Gd)TGd),

where G is a d × d Gaussian matrix with N(0,1d) entries.

This means that

AD(V)=1ni=1nXi,

with Xi i.i.d. distributed as 1dptr(P(Gd)TGd).

Since dF2=d, by (21), we get

E tr(P(Gd)TGd)dα(d).

This, together with Theorem 13, gives

EXidpα(d). (25)

In order to give tail bounds for AD(V)=1ni=1nXi we will show that Xi is subgaussian and use Hoeffding’s inequality (see Vershynin’s notes [Ver12]). In fact,

Xi~1dptr(P(Gd)TGd)1pP(G)FGF=dpGDFdpGF.

Note that dpGF is a subgaussian random variable as ‖G‖F is smaller than the entry wise 1 norm of G which is the sum of d2 half-normals (more specifically, the absolute value of a N(0,1d) random variable). Since half-normals are subgaussian and the sum of subgaussian random variables is a subgaussian random variable with subgaussian norm at most the sum of the norms (see the Rotation invariance Lemma in [Ver12]) we get that Xi is subgaussian. Furthermore, the subgaussian norm of Xi, which we define as Xiψ2=supp1p1/2(E|X|p)1/p, is bounded by Xiψ2Cd2p=Cdp, for some universal constant C.

Hence, we can use Hoeffding’s inequality (see [Ver12]) and get, since EXidpα(d),

Prob[ADdpα(d)+t]Prob[|ADEXi|t]exp(1c2t2nXiψ22)3exp(c1t2pd2n),

where ci are universal constants.

To find an upper bound for A=maxDd×pd:DF=1AD we use a classicl ε-net argument. There exists a set N of matrices Dk ∈ ℝd×pd satisfying ‖DkF = 1, such that for any D ∈ ℝd×pd with Frobenius norm 1, there exists an element DkN such that DDkFε. N is called an ε-net, and it’s known (see [Ver12]) that there exists such a set with size

|N|(1+2ε)d2p.

By the union-bound, with probability at least

1|N|Prob[ADdpα(d)+t]1[(1+2ε)d2p3exp(c1t2nd3)],

all the Dk’s in N satisfy

ADkdpα(d)+t.

If D is not in N, there exists DkN such that ‖D − DkF ≤ ε. This means that

AD1ni=1ntr(P(ViDT)TViDT)+1ni=1ntr(P(ViDT)TVi(DTDT))1ni=1ntr(P(ViDT)TViDT)+1ni=1nP(ViDT)TViFDDFdpα(d)+t+ε(1ni=1nViF).

We can globally bound (1ni=1nViF) by Hoeffding’s inequality as well (see [Ver12]). Using the same argument as above, it is easy to see that ‖ViF has subgaussian norm bounded by d, and an explicit computation shows its mean is 1dp2Γ((d2p+1)/2)Γ(d2p/2)2d, where the inequality follows from lemma 20.

This means that by Hoeffding’s inequality (see [Ver12])

Prob[1ni=1nViF2d+t]exp(1c4t2nViFψ22)3exp(c3t2nd),

with ci universal constants.

By union-bound on the two events above, with probability at least

13exp(c3t2nd)[(1+2ε)d2p3exp(c1t2nd3)],

we have

Adpα(d)+t+ε(2d+t).

Choosing t=12p and ε=16dp we get

Adpα(d)+1p,

with probability at least

13exp(c3n4p2)[(1+12dp)d2p3exp(c1n4p2d3)]=16[(1+12dp)d2p3exp(c1n4p2d3)]

which can be made arbitrarily close to 1 by taking n large enough.

This means that

maxO1,,OnOd1ni=1nOiTViF2dpα(d)2+1p,

with high probability, proving the lemma.      □

Regarding wr, we know that it is at least the value of 1n2i,jntr((ViVjT)TXiXjT) for Xi=P(Vi). Since, for p large enough, ViViTId×d we essentially have wr1n2i,jnViVjTF2 which should approximate EViVjTF2dp. This is made precise in the following lemma:

Lemma 16

For any d and ɛ > 0 there exists p0 and n0 such that, for any p > p0 and n > n0,

1n2i,jntr((ViVjT)TP(d,dp)(Vi)P(d,dp)(Vj)T)dpε,

with probability strictly larger than 1/2.

Proof

Recall that P(d,dp)(Vi) is the d × dp matrix polar component of Vi, meaning that

tr(P(d,dp)(Vi)TVi)=k=1dσk(Vi).

Hence,

1n2i,jntr((ViVjT)TP(d,dp)(Vi)P(d,dp)(Vj)T)=1ni=1nP(Vi)TVjF21Idp×dpF2[1ni=1ntr(P(Vi)TViIdp×dp)]2=1dp[1ni=1nk=1dσk(Vi)]2.

We proceed by using a lower bound for the expected value of the smallest eigenvalue (see [Ver12]), and get

Ek=1dσk(Vi)dEσmin(Vi)=d(11p).

Since k=1dσk(Vi)dViF, it has subgaussian norm smaller than Cd, with C an universal constant (using the same argument as in Lemma 15). Therefore, by Hoeffding’s inequality (see [Ver12]),

Prob[1ni=1nk=1dσk(Vi)d(11p)t]exp(1c1t2k=1dσk(Vi)ψ22n)exp(1c2t2d2n),

where ci are universal constants.

By setting t=dp, we get

1n2i,jntr((ViVjT)TP(d,dp)(Vi)P(d,dp)(Vj)T)dp(121p)2,

with probability at least 1exp(1c21pn)=1on(1) proving the Lemma.

Theorem 14 immediately follows from these two lemmas.

We note that these techniques are quite general. It is not difficult to see that these arguments, establishing integrality gaps that match the approximation ratios obtained, can be easily adapted for both the unitary case and the rank constrained case introduced in Section 5. For the sake of exposition we omit the details in these cases.

7 Open Problems and Future Work

Besides Conjecture 8, there are several extensions of this work that the authors consider to be interesting directions for future work.

A natural extension is to consider the little Grothendieck problem (3) over other groups of matrices. One interesting extension would be to consider the special orthogonal group SOd and the special unitary group SUd, these seem more difficult since they are not described by quadratic constraints.3

In some applications, like Synchronization [BSS13, Sin11] (a similar problem to Orthogonal Procrustes) and Common Lines [SS11], the positive semidefiniteness condition is not natural. It would be useful to better understand approximation algorithms for a version of (3) where C is not assumed to be positive semidefinite. Previous work in the special case d = 1, [NRT99, CW04, AMMN05] for O1 and [SZY07] for U1, suggest that it is possible to obtain an approximation ratio for (3) depending logarithmically on the size of the problem. Moreover, for O1, the logarithmic term is known to be needed in general [AMMN05].

It would also be interesting to understand whether the techniques in [AN04] can be adapted to obtain an approximation algorithm to the bipartite Grothendieck problem over the orthogonal group; this would be closer in spirit to the non commutative Grothendieck inequality [NRV13].

Another interesting question is whether the approximation ratios obtained in this paper correspond to the hardness of approximation of the problem (perhaps conditioned on the Unique-Games conjecture [Kho10]). Our optimality conditions are restricted to the particular relaxation we consider and do not exclude the existence of an efficient algorithm, not relying on the same relaxation, that approximates (3) with a better approximation ratio. Nevertheless, Raghavendra [Rag08] results on hardness for a host of problems matching the integrality gap of natural SDP relaxations suggest that our approximation ratios might be optimal (see also the recent results in [BRS15]).

Figure 2.

Figure 2

Plot of ε(ρ) = limd → ∞ α(d, ρd) for ρ ∈ [1, 5].

Acknowledgments

The authors would like to thank Moses Charikar for valuable guidance in context of this work and Jop Briet, Alexander Iriza, Yuehaw Khoo, Dustin Mixon, Oded Regev, and Zhizhen Zhao for insightful discussions on the topic of this paper. Special thanks to Johannes Trost for a very useful answer to a Mathoverflow question posed by the first author. Finally, we would like to thank the reviewers for numerous suggestions that helped to greatly improve the quality of this paper.

A. S. Bandeira was supported by AFOSR Grant No. FA9550-12-1-0317. A. Singer was partially supported by Award Number FA9550-12-1-0317 and FA9550-13-1-0076 from AFOSR, by Award Number R01GM090200 from the NIGMS, and by Award Number LTR DTD 06-05-2012 from the Simons Foundation. Parts of this work have appeared in C. Kennedy’s senior thesis at Princeton University.

A Technical proofs – analysis of algorithm for the Stiefel Manifold setting

Lemma 17

Let rd. Let G be a d × r Gaussian random matrix with real valued i.i.d. N(0,1r) entries and let α(d, r) as defined in Definition 11. Then,

E(Pd,r(G)GT)=E(GPd,r(G)T)=α(d,r)Id×d.

Furthermore, if G is a d × r Gaussian random matrix with complex valued i.i.d. N(0,1r) entries and α(d, r) the analogous constant (Definition 11), then

E(Pd,r(G)GH)=E(GPd,r(G)H)=α(d,r)Id×d.

The proof of this Lemma is a simple adaptation of the proof of Lemma 6.

Proof

We restrict the presentation to the real case. As before, all the arguments are equivalent to the complex case, replacing all transposes with Hermitian adjoints and α(d, r) with α(d, r).

Let G = U[Σ 0]VT be the singular value decomposition of G. Since GGT = UΣ2UT is a Wishart matrix, it is well known that its eigenvalues and eigenvectors are independent and U is distributed according to the Haar measure in Od (see e.g. Lemma 2.6 in [TV04]). To resolve ambiguities, we consider Σ ordered such that Σ11 ≥ Σ22 ≥ … ≥ Σdd.

Let Y=P(d,r)(G)GT. Since

P(d,r)(G)=P(d,r)(UVT)=UId×dVT,

we have

Y=P(d,r)(UVT)(UVT)T=UId×dVTVUT=UUT.

Note that GP(d,r)(G)T=UUT=Y.

Since Yij=uiujT, where u1, …, ud are the rows of U, and U is distributed according to the Haar measure, we have that uj and −uj have the same distribution conditioned on any ui, for ij, and Σ. This implies that, if ij, Yij=uiujT is a symmetric random variable, and so EYij=0. Also, ui ~ uj implies that Yii ~ Yjj. This means that EY=cId×d for some constant c. To obtain c,

c=c1dtr(Id×d)=1dE tr(Y) =1dE tr(UUT)=1dEk=1nσk(G)=α(d,r),

which shows the lemma.      □

Lemma 18

Let rd. Let M, N ∈d×nd such that M MT = N NT = Id×d. Let R ∈nd×r be a Gaussian random matrix with real valued i.i.d. entries N(0,1r). Then

E[P(d,r)(MR)(N R)T]=E[(MR)P(d,r)(NR)T]=α(d,r)MNT,

where α(d, r) is the constant in Definition 11.

Analogously, if M, N ∈ ℂd×nd such that MMH=N NH = Id×d, and R ∈ ℂnd×r is a Gaussian random matrix with complex valued i.i.d. entries N(0,1r), then

E[P(d,r)(MR)(NR)H]=E[(MR)P(d,r)(NR)H]=α(d,r)MNH,

where α(d, r) is the constant in Definition 11.

Similarly to above, the proof of this Lemma is a simple adaptation of the proof of Lemma 5.

Proof

We restrict the presentation of proof to the real case. Nevertheless, all the arguments trivially adapt to the complex case by, essentially, replacing all transposes with Hermitian adjoints and α(d) and α(d, r) with α(d) and α(d, r).

Let A = [MT NT] ∈ ℝdn×2d and A = QB be the QR decomposition of A with Q ∈ ℝnd×nd an orthogonal matrix and B ∈ ℝnd×2d upper triangular with non-negative diagonal entries; note that only the first 2d rows of B are nonzero. We can write

QTA=B=[B11B120dB220d0d0d0d]dn×2d,

where B11 ∈ ℝd×d and B22 ∈ ℝd×d are upper triangular matrices with non-negative diagonal entries. Since

(QTMT)11T(QTMT)11=(QTMT)T(QTMT)=MQQTMT=MInd×ndMT=MMT=Id×d,

B11 = (QT MT)11 is an orthogonal matrix, which together with the non-negativity of the diagonal entries (and the fact that B11 is upper-triangular) forces B11 to be B11 = Id×d.

Since R is a Gaussian matrix and Q is an orthogonal matrix, QR ~ R which implies

E[P(d,r)(MR)(NR)T]=E[P(d,r)(MQR)(NQR)T].

Since MQ=[B11T,0d×d,,0d×d]=[Id×d,0d×d,,0d×d] and NQ=[B12T,B22T,0d×d,,0d×d],

E[P(d,r)(MR)(NR)T]=E[P(d,r)(R1)(B12TR1+B22TR2)T],

where R1 and R2 are the first two d × r blocks of R. Since these blocks are independent, the second term vanishes and we have

E[P(d,r)(MR)(NR)T]=E[P(d,r)(R1)R1T]B12.

The Lemma now follows from using Lemma 17 to obtain E[P(d,r)(R1)R1T]=α(d,r)Id×d and nothing that B12=(QTMT)T(QTNT)=MNT.

The same argument, with Q′B′ the QR decomposition of A′ = [NTMT] ∈ ℝdn×2d instead, shows

E[(MR)P(d,r)(NR)T]=E[R1P(d,r)(R1)T]MNT=α(d,r)MNT.

B Bounds for the average singular value

Lemma 19

Let Gd×d be a Gaussian random matrix with i.i.d. complex valued N(0,d1) entries and define α(d):=E[1dj=1dσj(G)]. We have the following bound

α(d)83π5.05d.

Proof

We express α(d) as sums and products of Gamma functions and then use classical bounds to obtain our result.

Recall that from equation (16),

α(d)=d3/2n=0d1Tn, (26)

where

Tn=0x1/2exLn(x)2dx,

and Ln(x) is the nth Laguerre polynomial,

Ln(x)=k=0n(nk)(1)kk!xk.

This integral can be expressed as (see [GR94] section 7.414 equation 4(1))

Tn=Γ(n+3/2)Γ(n+1)m=0n(12)m(n)m(m!)2(n12)m, (27)

where (x)m is the Pochhammer symbol

(x)m=Γ(x+m)Γ(x).

The next lemma states a couple basic facts about the Gamma function that we will need in the subsequent computations.

Lemma 20

The Gamma function satisfies the following inequalities:

1nΓ(n)Γ(n+1/2)1n1/2nΓ(n+1)Γ(n+1/2)1n+1/2.

Proof

See [AS64] page 255.

We want to bound the summation in (27), which we rewrite as

m=0n(12)m(n)m(m!)2(n12)m=m=0(12)m2(m!)2m=n+1(12)m2(m!)2m=0n(12)m2(m!)2(1(n)m(n12)m).

For simplicity define

(I):=m=0(12)m2(m!)2(II):=m=n+1(12)m2(m!)2(III):=m=0n(12)m2(m!)2(1(n)m(n12)m),

so that (27) becomes

Tn=Γ(n+3/2)Γ(n+1)((I)+(II)+(III)).

The first term we can compute explicitly (see [GR94]) as

(I)=4π.

For the second term we use the fact that (12)m=Γ(m1/2)/Γ(1/2) to get

(II)=m=n+11Γ(1/2)2Γ(m1/2)2Γ(m+1)2=14πm=n+1Γ(m1/2)2Γ(m+1)2.

Using the first inequality in Lemma 20 and the multiplication formula for the Gamma function,

Γ(m1/2)Γ(m+1)=1m1/2Γ(m+1/2)Γ(m+1)1(m1/2)m

so we have

(II)14πm=n+11(m1/2)2m14πn1/21x3dx=12π(2n1)2.

For the third term, we use the formula (x)m=Γ(x+n)Γ(x) to deduce

(III)=m=0n(12)m2(m!)2(1(n)m(n12)m)=14πm=0nΓ(m1/2)2Γ(m+1)2(1Γ(n+1)Γ(nm+3/2)Γ(n+3/2)Γ(nm+1))=Γ(n+1)Γ(n+3/2)14πm=0nΓ(m1/2)2Γ(m+1)2(Γ(n+3/2)Γ(n+1)Γ(nm+3/2)Γ(nm+1)).

Using the second bound in Lemma 20,

Γ(nm+3/2)Γ(nm+1)nm+1/2,

and also

Γ(n+3/2)Γ(n+1)n+1,

so that

(III)14πn+1/2m=0n(1(m1/2)m+1/2)2(n+1nm+1/2).

If we multiply top and bottom by n+1+nm+1/2 and use the fact that

m+1/2n+1+nm+1/2m+1/2n+1,

then

(III)14πn+1/2m=0n1(m1/2)2(m+1/2)m+1/2n+112π(n+1)m=0n1(m1/2)21n+18+π22π3n+1.

Combining our bounds for (I), (II) and (III),

Tn=Γ(n+3/2)Γ(n+1)[(I)(II)(III)]Γ(n+3/2)Γ(n+1)(4π12π(2n1)23n+1)n+1/2(4π12π(2n1)23n+1),

and by (26),

α(d)1d3/2n=1d1n+1/2(4π12π(2n1)23n+1).

The term 1d3/2n=1d14n+1/2/π is the main term and can be bounded below by

1d3/2n=1d14n+1/2π1d3/283π((d1/2)3/2(1/2)3/2)(1(2d)1)83π(2d)3/283π(83π+12)d1.

The other error terms are at most

d3/2n=1d1n+1/2(12π(2n1)2+3n+1)1d3/2n=1d14π(n+1)n+1/21d3/2n=1d14π(n+1)1/24πd3/22d+1.

Combining the main and error term bounds, the lemma follows.

Lemma 21

For $G_{\mathbb{K}} \in \mathbb{K}^{d\times d}$ a Gaussian random matrix with i.i.d. $\mathbb{K}$-valued $\mathcal{N}(0, d^{-1})$ entries, where $\mathbb{K}$ is $\mathbb{R}$ or $\mathbb{C}$, define $\alpha_{\mathbb{K}}(d) := \mathbb{E}\left[\frac{1}{d}\sum_{j=1}^{d}\sigma_j(G_{\mathbb{K}})\right]$. The following holds

\[
\alpha_{\mathbb{R}}(d) \ge \alpha_{\mathbb{C}}(d) - 4.02\,d^{-1}.
\]

Proof

To find an explicit formula for $\alpha_{\mathbb{R}}(d)$, we need an expression for the spectral density of the Wishart matrix $dGG^T$, which we call $p_d(x)$, given by equation (16) in [LV11]:

\[
p_d(x) = \frac{1}{2d}\left(2R_d(x) - \frac{\Gamma\left(\frac{d}{2}+\frac12\right)}{\Gamma\left(\frac{d}{2}\right)}\,L_{d-1}(x)\,\{\psi_1(x)-\psi_2(x)\}\right),
\]

where

\[
\psi_1(x) = e^{-x}\sum_{k=0}^{(\kappa+d-2)/2}\delta_k\,L_{2k+1-\kappa}(x), \qquad
\psi_2(x) = \left(\frac{x}{2}\right)^{-1/2}e^{-\frac{x}{2}}\left[(1-\kappa)\,\frac{2\,\Gamma\left(\frac12,\frac{x}{2}\right)}{\Gamma\left(\frac12\right)}+2\kappa-1\right],
\]
\[
R_d(x) = e^{-x}\sum_{m=0}^{d-1}\left(L_m(x)\right)^2, \qquad
\delta_k = \frac{\Gamma\left(k+1-\frac{\kappa}{2}\right)}{\Gamma\left(k+\frac32-\frac{\kappa}{2}\right)},
\]

$\kappa = d \bmod 2$, and $\Gamma(a,y) = \int_y^\infty t^{a-1}e^{-t}\,dt$ is the incomplete Gamma function.

This means that

\[
\alpha_{\mathbb{R}}(d) = d^{-1/2}\int_0^\infty x^{1/2}\,p_d(x)\,dx
= \frac{1}{d^{3/2}}\int_0^\infty x^{1/2}R_d(x)\,dx
- \frac{1}{2d^{3/2}}\int_0^\infty x^{1/2}\,\frac{\Gamma\left(\frac{d}{2}+\frac12\right)}{\Gamma\left(\frac{d}{2}\right)}\,L_{d-1}(x)\,\{\psi_1(x)-\psi_2(x)\}\,dx.
\]

Recall that (see section 5)

\[
\alpha_{\mathbb{C}}(d) = d^{-3/2}\sum_{n=0}^{d-1}\int_0^\infty x^{1/2}e^{-x}L_n(x)^2\,dx,
\]

which implies

\[
\alpha_{\mathbb{R}}(d) = \alpha_{\mathbb{C}}(d) - \frac{1}{2d^{3/2}}\int_0^\infty x^{1/2}\,\frac{\Gamma\left(\frac{d}{2}+\frac12\right)}{\Gamma\left(\frac{d}{2}\right)}\,L_{d-1}(x)\,\{\psi_1(x)-\psi_2(x)\}\,dx. \tag{28}
\]

We are especially interested in the following terms, which appear in the full expression for $\alpha_{\mathbb{R}}(d)$:

\[
Q(m,k) = \int_0^\infty x^{1/2}e^{-x}L_m(x)L_k(x)\,dx. \tag{29}
\]

From [GR94], section 7.414, equation 4, we have

\[
Q(m,k) = \frac{1}{4\pi}\sum_{i=0}^{\min\{m,k\}}\frac{\Gamma\left(i+\frac32\right)}{\Gamma(i+1)}\,\frac{\Gamma\left(m-i-\frac12\right)}{\Gamma(m-i+1)}\,\frac{\Gamma\left(k-i-\frac12\right)}{\Gamma(k-i+1)}.
\]
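This closed form is straightforward to validate numerically; the sketch below (not from the paper) compares it against direct quadrature for a few index pairs.

```python
# Numerical check of the closed form for Q(m, k) -- a sketch, not the paper's code.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre, gamma

def Q_quad(m, k):
    return quad(lambda x: np.sqrt(x) * np.exp(-x)
                * eval_laguerre(m, x) * eval_laguerre(k, x), 0, np.inf)[0]

def Q_closed(m, k):
    return sum(gamma(i + 1.5) / gamma(i + 1)
               * gamma(m - i - 0.5) / gamma(m - i + 1)
               * gamma(k - i - 0.5) / gamma(k - i + 1)
               for i in range(min(m, k) + 1)) / (4 * np.pi)

for m, k in [(0, 0), (2, 0), (2, 2), (5, 3)]:
    print((m, k), Q_quad(m, k), Q_closed(m, k))   # columns should agree
```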

The following lemma deals with bounds on sums involving Q(m, k) terms.

Lemma 22

For Q(m, k) as defined in (29) we have the following bounds

\[
\sum_{k=0}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,Q(2m,2k) \le 2.8, \tag{30}
\]
\[
\sum_{k=1}^{m}\frac{\Gamma(k)}{\Gamma\left(k+\frac12\right)}\,Q(2m-1,2k-1) \le 5.6. \tag{31}
\]

Proof

Note that in (30),

\[
Q(2m,2k) = \frac{1}{4\pi}\sum_{i=0}^{2k}\frac{\Gamma\left(i+\frac32\right)}{\Gamma(i+1)}\,\frac{\Gamma\left(2m-i-\frac12\right)}{\Gamma(2m-i+1)}\,\frac{\Gamma\left(2k-i-\frac12\right)}{\Gamma(2k-i+1)}
\]

since m ≥ k.

For 0 < i < 2k − 1, the ith term in the summation of Q(2m, 2k) can be bounded above by

\[
\frac{\Gamma\left(i+\frac32\right)}{\Gamma(i+1)}\,\frac{\Gamma\left(2m-i-\frac12\right)}{\Gamma(2m-i+1)}\,\frac{\Gamma\left(2k-i-\frac12\right)}{\Gamma(2k-i+1)}
\le \sqrt{i+1}\cdot\frac{1}{(2k-i)\sqrt{2k-i-1}}\cdot\frac{1}{(2m-i)\sqrt{2m-i-1}}
\le \frac{\sqrt{i+1}}{(2k-i-1)^{3/2}\,(2m-i-1)^{3/2}}.
\]

This means that

\[
Q(2m,2k) \le \frac{1}{8\sqrt{\pi}}\,\frac{\Gamma\left(2m-\frac12\right)}{\Gamma(2m+1)}\,\frac{\Gamma\left(2k-\frac12\right)}{\Gamma(2k+1)}
+ \frac{1}{4\pi}\sum_{i=1}^{2k-2}\frac{\sqrt{i+1}}{(2k-i-1)^{3/2}(2m-i-1)^{3/2}}
+ \frac{\sqrt{\pi}}{4\pi}\,\frac{\Gamma\left(2k+\frac12\right)}{\Gamma(2k)}\,\frac{\Gamma\left(2m-2k+\frac12\right)}{\Gamma(2m-2k+2)}
+ \frac{1}{4\pi}\max\left(\frac{\Gamma\left(2k+\frac32\right)}{\Gamma(2k+1)}\,\frac{\Gamma\left(2m-2k-\frac12\right)}{\Gamma(2m-2k+1)}\left(-2\sqrt{\pi}\right),\,0\right).
\]

We bound the sum from i = 1 to 2k − 3 by

\[
\frac{1}{4\pi}\sum_{i=1}^{2k-3}\frac{\sqrt{i+1}}{(2k-i-1)^{3/2}(2m-i-1)^{3/2}}
\le \frac{1}{4\pi(2m-2k+1)^{3/2}}\sum_{i=0}^{2k-3}\frac{\sqrt{i+1}}{(2k-i-1)^{3/2}}
\le \frac{1}{4\pi(2m-2k+1)^{3/2}}\int_0^{2k-2}\frac{\sqrt{x+1}}{(2k-x-1)^{3/2}}\,dx
\le \frac{1}{4\pi(2m-2k+1)^{3/2}}\left(8\sqrt{k}+\frac{4k}{\sqrt{2k-1}}\right),
\]

so that for k ≥ 1,

\[
Q(2m,2k) \le \frac{1}{8\sqrt{\pi}}\,\frac{\Gamma\left(2m-\frac12\right)}{\Gamma(2m+1)}\,\frac{\Gamma\left(2k-\frac12\right)}{\Gamma(2k+1)}
+ \frac{1}{4\pi(2m-2k+1)^{3/2}}\left(8\sqrt{k}+\frac{4k}{\sqrt{2k-1}}\right)
+ \frac{1}{4\pi}\,\frac{\sqrt{2k-1}}{(2m-2k+3)^{3/2}}
+ \frac{\sqrt{\pi}}{4\pi}\,\frac{\Gamma\left(2k+\frac12\right)}{\Gamma(2k)}\,\frac{\Gamma\left(2m-2k+\frac12\right)}{\Gamma(2m-2k+2)}
+ \frac{1}{4\pi}\max\left(\frac{\Gamma\left(2k+\frac32\right)}{\Gamma(2k+1)}\,\frac{\Gamma\left(2m-2k-\frac12\right)}{\Gamma(2m-2k+1)}\left(-2\sqrt{\pi}\right),\,0\right).
\]

For k = 0, Q(2m, 0) < 0 except for the term $Q(0,0) = \sqrt{\pi}/2$, which also becomes negative in the full sum, so we ignore these terms.

We now turn our attention to the full sum $\sum_{k=0}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}Q(2m,2k)$. As before, we define for clarity

\[
\begin{aligned}
(\mathrm{I}) &:= \frac{1}{8\sqrt{\pi}}\sum_{k=1}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,\frac{\Gamma\left(2m-\frac12\right)}{\Gamma(2m+1)}\,\frac{\Gamma\left(2k-\frac12\right)}{\Gamma(2k+1)},\\
(\mathrm{II}) &:= \frac{1}{4\pi}\sum_{k=1}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,\frac{1}{(2m-2k+1)^{3/2}}\left(8\sqrt{k}+\frac{4k}{\sqrt{2k-1}}\right),\\
(\mathrm{III}) &:= \frac{1}{4\pi}\sum_{k=1}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,\frac{\sqrt{2k-1}}{(2m-2k+3)^{3/2}},\\
(\mathrm{IV}) &:= \frac{\sqrt{\pi}}{4\pi}\sum_{k=1}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,\frac{\Gamma\left(2k+\frac12\right)}{\Gamma(2k)}\,\frac{\Gamma\left(2m-2k+\frac12\right)}{\Gamma(2m-2k+2)},\\
(\mathrm{V}) &:= \frac{1}{4\pi}\sum_{k=1}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,\max\left(\frac{\Gamma\left(2k+\frac32\right)}{\Gamma(2k+1)}\,\frac{\Gamma\left(2m-2k-\frac12\right)}{\Gamma(2m-2k+1)}\left(-2\sqrt{\pi}\right),\,0\right).
\end{aligned}
\]

Using the bounds in Lemma 20,

\[
\begin{aligned}
(\mathrm{I}) &\le \sum_{k=1}^{m}\frac{1}{32\pi}\,\frac{1}{k^{1/2}}\,\frac{1}{mk\,\sqrt{(2m-1)(2k-1)}} \le \frac{1}{32\pi},\\
(\mathrm{II}) &\le \frac{1}{4\pi}\sum_{k=1}^{m}\frac{1}{k^{1/2}\,(2m-2k+1)^{3/2}}\left(4\sqrt{k}\right) \le \frac{1}{\pi}\left(1-\frac{1}{\sqrt{2m-1}}\right) \le \frac{1}{\pi},\\
(\mathrm{III}) &\le \frac{1}{4\pi}\sum_{k=1}^{m}\frac{1}{k^{1/2}}\,\frac{\sqrt{2k-1}}{(2m-2k+3)^{3/2}} \le \frac{1}{24\pi},\\
(\mathrm{IV}) &\le \frac{1}{4\pi}\left(\sum_{k=1}^{m-1}\frac{(2k)^{1/2}}{k^{1/2}}\,\frac{1}{(2m-2k+1)\sqrt{2m-2k}}+\pi\right) \le \frac{1}{2\pi}+\frac12,\\
(\mathrm{V}) &= \frac{1}{4\pi}\cdot 4\pi\,\frac{\Gamma\left(2m+\frac32\right)}{\Gamma(2m+1)}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m+1)} \le \frac{\sqrt{2m+1}}{\sqrt{m}} \le \sqrt{3}.
\end{aligned}
\]

Finally,

\[
\sum_{k=0}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,Q(2m,2k) \le (\mathrm{I})+(\mathrm{II})+(\mathrm{III})+(\mathrm{IV})+(\mathrm{V}) \le 2.8.
\]

To deduce the inequality (31), we use the previously derived bounds to show that

\[
Q(2m-1,2k-1) \le \frac{1}{4\pi}\sum_{i=1}^{2k-3}\frac{\sqrt{i+1}}{(2k-i-2)^{3/2}(2m-i)^{3/2}}
+ \frac{\sqrt{\pi}}{4\pi}\,\frac{\Gamma\left(2k-\frac12\right)}{\Gamma(2k-1)}\,\frac{\Gamma\left(2m-2k+\frac12\right)}{\Gamma(2m-2k+2)}
+ \frac{1}{4\pi}\max\left(\frac{\Gamma\left(2k+\frac12\right)}{\Gamma(2k)}\,\frac{\Gamma\left(2m-2k+\frac12\right)}{\Gamma(2m-2k+2)}\left(-2\sqrt{\pi}\right),\,0\right),
\]

so that $Q(2m-1, 2k-1) \le Q(2m, 2k)$. Now it suffices to note that, in the full sum, $\frac{\Gamma(k)}{\Gamma(k+\frac12)} \le 2\,\frac{\Gamma(k+\frac12)}{\Gamma(k+1)}$ for $k \ge 1$, and we get

\[
\sum_{k=1}^{m}\frac{\Gamma(k)}{\Gamma\left(k+\frac12\right)}\,Q(2m-1,2k-1)
\le 2\sum_{k=1}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,Q(2m,2k) \le 5.6.
\]
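The constants 2.8 and 5.6 can likewise be probed numerically. The sketch below, not from the paper, evaluates both sums via the closed form for Q(m, k); the tested range of m is illustrative.

```python
# Numerical probe of the bounds (30) and (31) -- a sketch, not the paper's code.
import numpy as np
from scipy.special import gamma

def Q_closed(m, k):
    return sum(gamma(i + 1.5) / gamma(i + 1)
               * gamma(m - i - 0.5) / gamma(m - i + 1)
               * gamma(k - i - 0.5) / gamma(k - i + 1)
               for i in range(min(m, k) + 1)) / (4 * np.pi)

for m in range(1, 25):
    s30 = sum(gamma(k + 0.5) / gamma(k + 1) * Q_closed(2 * m, 2 * k)
              for k in range(m + 1))
    s31 = sum(gamma(k) / gamma(k + 0.5) * Q_closed(2 * m - 1, 2 * k - 1)
              for k in range(1, m + 1))
    assert s30 <= 2.8 and s31 <= 5.6, (m, s30, s31)
print("bounds (30) and (31) hold for all tested m")
```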

We now return our focus to finding a bound on the expression for $\alpha_{\mathbb{R}}(d)$ given in (28). Since $\psi_1$, $\psi_2$ depend on the parity of d, we split into two cases.

Odd d = 2m + 1

From [GR94], section 7.414, equation 6,

\[
\int_0^\infty e^{-x/2}L_{2m}(x)\,dx = 2,
\]

thus equation (28) becomes

\[
\alpha_{\mathbb{R}}(2m+1)-\alpha_{\mathbb{C}}(2m+1)
= -\frac{1}{(2m+1)^{3/2}}\,\frac{\Gamma(m+1)}{\Gamma\left(m+\frac12\right)}\left(\frac12\sum_{k=0}^{m}\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}\,Q(2m,2k)-2^{1/2}\right),
\]

and using the first bound in Lemma 22,

\[
\alpha_{\mathbb{R}}(2m+1)-\alpha_{\mathbb{C}}(2m+1) \ge -2.8\,\frac{\sqrt{m+\frac12}}{(2m+1)^{3/2}} \ge -\frac{1}{m}.
\]

Even d = 2m

For d = 2m, we have

\[
\alpha_{\mathbb{R}}(2m) = \alpha_{\mathbb{C}}(2m) - \frac{1}{2(2m)^{3/2}}\int_0^\infty x^{1/2}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\,L_{2m-1}(x)\,\{\psi_1(x)-\psi_2(x)\}\,dx.
\]

We split the integral into two parts,

\[
(\mathrm{I}) := \frac{1}{2(2m)^{3/2}}\int_0^\infty x^{1/2}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\,L_{2m-1}(x)\,\psi_1(x)\,dx, \qquad
(\mathrm{II}) := \frac{1}{2(2m)^{3/2}}\int_0^\infty x^{1/2}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\,L_{2m-1}(x)\,\psi_2(x)\,dx.
\]

Expanding from the definition of ψ1 above, we have

\[
(\mathrm{I}) = \frac{1}{2(2m)^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\int_0^\infty x^{1/2}\,L_{2m-1}(x)\,e^{-x}\sum_{k=0}^{m-1}\frac{\Gamma(k+1)}{\Gamma\left(k+\frac32\right)}\,L_{2k+1}(x)\,dx
= \frac{1}{2(2m)^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\sum_{k=1}^{m}\frac{\Gamma(k)}{\Gamma\left(k+\frac12\right)}\,Q(2m-1,2k-1),
\]

so by Lemma 22,

\[
(\mathrm{I}) \le \frac{1}{2(2m)^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\cdot 5.6 \le \frac{1}{m}.
\]

The other part of the integral is

\[
(\mathrm{II}) = \frac{1}{2(2m)^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\int_0^\infty x^{1/2}\,L_{2m-1}(x)\left(\frac{x}{2}\right)^{-1/2}e^{-x/2}\left[\frac{2\,\Gamma\left(\frac12,\frac{x}{2}\right)}{\Gamma\left(\frac12\right)}-1\right]dx
= \frac{1}{4m^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}\int_0^\infty L_{2m-1}(x)\,e^{-x/2}\,\frac{2\,\Gamma\left(\frac12,\frac{x}{2}\right)}{\Gamma\left(\frac12\right)}\,dx + \frac{1}{2m^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)},
\]

where we use the fact that for the odd index 2m − 1 (see [GR94], section 7.414, equation 6),

\[
\int_0^\infty L_{2m-1}(x)\,e^{-x/2}\,dx = -2.
\]

We can bound the first integral in the expression for (II), using the Cauchy–Schwarz inequality, by

\[
\left|\int_0^\infty L_{2m-1}(x)\,e^{-x/2}\,\Gamma\left(\tfrac12,\tfrac{x}{2}\right)dx\right|
\le \left(\int_0^\infty e^{-x}L_{2m-1}(x)^2\,dx\right)^{1/2}\left(\int_0^\infty \Gamma\left(\tfrac12,\tfrac{x}{2}\right)^2 dx\right)^{1/2}
= \sqrt{2}\left[\int_0^\infty\left(\int_x^\infty t^{-1/2}e^{-t}\,dt\right)^2 dx\right]^{1/2}
= \sqrt{2}\left[\int_0^1\left(\int_x^\infty t^{-1/2}e^{-t}\,dt\right)^2 dx+\int_1^\infty\left(\int_x^\infty t^{-1/2}e^{-t}\,dt\right)^2 dx\right]^{1/2}
\le \sqrt{2}\left(\Gamma\left(\tfrac12\right)^2+\int_1^\infty\left(e^{-x}\right)^2 dx\right)^{1/2}
= \sqrt{2}\left(\pi+\tfrac12 e^{-2}\right)^{1/2},
\]

so finally

\[
(\mathrm{II}) \ge -\frac{\left(\pi+\frac12 e^{-2}\right)^{1/2}}{\sqrt{2\pi}\,m^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)}+\frac{1}{2m^{3/2}}\,\frac{\Gamma\left(m+\frac12\right)}{\Gamma(m)} \ge -\frac{1.01}{m}.
\]

Combining the above bounds we see that in the case of even d = 2m,

\[
\alpha_{\mathbb{R}}(2m)-\alpha_{\mathbb{C}}(2m) = -(\mathrm{I})+(\mathrm{II}) \ge -\frac{2.01}{m}.
\]
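As with Lemma 19, the conclusion admits a quick simulation check. The sketch below (not from the paper) compares Monte Carlo estimates of $\alpha_{\mathbb{R}}(d)$ and $\alpha_{\mathbb{C}}(d)$ against the claimed gap of at most $4.02/d$; sample sizes are illustrative.

```python
# Monte Carlo check of Lemma 21 -- a sketch, not the paper's code.
import numpy as np

def avg_sv(G):
    # Average singular value of a square matrix.
    return np.linalg.svd(G, compute_uv=False).mean()

rng = np.random.default_rng(2)
for d in [2, 4, 8]:
    trials = 2000
    a_R = np.mean([avg_sv(rng.standard_normal((d, d)) / np.sqrt(d))
                   for _ in range(trials)])
    a_C = np.mean([avg_sv((rng.standard_normal((d, d))
                           + 1j * rng.standard_normal((d, d))) / np.sqrt(2 * d))
                   for _ in range(trials)])
    print(d, a_R, a_C, a_R >= a_C - 4.02 / d)
```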

Footnotes

1. We also note that these semidefinite programs satisfy Slater's condition, as the identity matrix is a feasible point. This ensures strong duality, which can be exploited by many semidefinite programming solvers.

2. These ideas also play a major role in the unidimensional complex case treated by So et al. [SZY07].

3. The additional constraint that forces a matrix to be in the special orthogonal or unitary group is having determinant equal to 1, which is not a quadratic constraint.

Contributor Information

Afonso S. Bandeira, Email: bandeira@mit.edu.

Christopher Kennedy, Email: ckennedy@math.utexas.edu.

Amit Singer, Email: amits@math.princeton.edu.

References

  • [AHO98] Alizadeh F, Haeberly JPA, Overton ML. Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results. SIAM Journal on Optimization. 1998;8(3):746–768.
  • [AMMN05] Alon N, Makarychev K, Makarychev Y, Naor A. Quadratic forms on graphs. Invent Math. 2005;163:486–493.
  • [AN04] Alon N, Naor A. Approximating the cut-norm via Grothendieck's inequality. In: Proceedings of the 36th ACM STOC. ACM Press; 2004. pp. 72–80.
  • [AS64] Abramowitz M, Stegun IA. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover; New York: 1964.
  • [Ban15] Bandeira AS. Convex relaxations for certain inverse problems on graphs. PhD thesis, Program in Applied and Computational Mathematics, Princeton University; 2015.
  • [BBT11] Briet J, Buhrman H, Toner B. A generalized Grothendieck inequality and nonlocal correlations that require high entanglement. Communications in Mathematical Physics. 2011;305(3):827–843.
  • [BFV10] Briet J, Filho FMO, Vallentin F. The positive semidefinite Grothendieck problem with rank constraint. In: Automata, Languages and Programming, volume 6198 of Lecture Notes in Computer Science. Springer; Berlin Heidelberg: 2010. pp. 31–42.
  • [BRS15] Briet J, Regev O, Saket R. Tight hardness of the non-commutative Grothendieck problem. FOCS 2015, to appear. 2015.
  • [BSS13] Bandeira AS, Singer A, Spielman DA. A Cheeger inequality for the graph connection Laplacian. SIAM J Matrix Anal Appl. 2013;34(4):1611–1630.
  • [BTN02] Ben-Tal A, Nemirovski A. On tractable approximations of uncertain linear matrix inequalities affected by interval uncertainty. SIAM Journal on Optimization. 2002;12:811–833.
  • [Car09] Carlen EA. Trace inequalities and quantum entropy: an introductory course. 2009. Available at http://www.ueltschi.org/azschool/notes/ericcarlen.pdf.
  • [CD11] Couillet R, Debbah M. Random Matrix Methods for Wireless Communications. Cambridge University Press; New York, NY, USA: 2011.
  • [CKS15] Chaudhury KN, Khoo Y, Singer A. Global registration of multiple point clouds using semidefinite programming. SIAM Journal on Optimization. 2015;25(1):126–185.
  • [CW04] Charikar M, Wirth A. Maximizing quadratic programs: extending Grothendieck's inequality. In: Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, FOCS '04. IEEE Computer Society; Washington, DC, USA: 2004. pp. 54–60.
  • [FH55] Fan K, Hoffman AJ. Some metric inequalities in the space of matrices. Proceedings of the American Mathematical Society. 1955;6(1):111–116.
  • [GR94] Gradshteyn IS, Ryzhik IM. Table of Integrals, Series, and Products. 5th edition. Academic Press; 1994.
  • [Gro96] Grothendieck A. Résumé de la théorie métrique des produits tensoriels topologiques (French). Reprint of Bol Soc Mat São Paulo. 1996:1–79.
  • [GT11] Götze F, Tikhomirov A. On the rate of convergence to the Marchenko–Pastur distribution. 2011. arXiv:1110.1284 [math.PR].
  • [GW95] Goemans MX, Williamson DP. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the Association for Computing Machinery. 1995;42:1115–1145.
  • [Hig86] Higham NJ. Computing the polar decomposition – with applications. SIAM J Sci Stat Comput. 1986;7:1160–1174.
  • [Kel75] Keller JB. Closest unitary, orthogonal and hermitian operators to a given operator. Mathematics Magazine. 1975;48(4):192–197.
  • [Kho10] Khot S. On the unique games conjecture (invited survey). In: Proceedings of the 2010 IEEE 25th Annual Conference on Computational Complexity, CCC '10. IEEE Computer Society; Washington, DC, USA: 2010. pp. 99–121.
  • [Lev12] Leveque O. Random matrices and communication systems: Wishart random matrices: marginal eigenvalue distribution. 2012. Available at http://ipg.epfl.ch/~leveque/Matrix/
  • [LV11] Livan G, Vivo P. Moments of Wishart-Laguerre and Jacobi ensembles of random matrices: application to the quantum transport problem in chaotic cavities. Acta Physica Polonica B. 2011;42:1081.
  • [Nem07] Nemirovski A. Sums of random symmetric matrices and quadratic optimization under orthogonality constraints. Math Program. 2007;109(2–3):283–317.
  • [Nes98] Nesterov Y. Semidefinite relaxation and nonconvex quadratic optimization. Optimization Methods and Software. 1998;9(1–3):141–160.
  • [Nes04] Nesterov Y. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Springer; 2004.
  • [NRT99] Nemirovski A, Roos C, Terlaky T. On maximization of quadratic form over intersection of ellipsoids with common center. Mathematical Programming. 1999;86(3):463–473.
  • [NRV13] Naor A, Regev O, Vidick T. Efficient rounding for the noncommutative Grothendieck inequality. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing, STOC '13. ACM; New York, NY, USA: 2013. pp. 71–80.
  • [Pis11] Pisier G. Grothendieck's theorem, past and present. Bull Amer Math Soc. 2011;49:237–323.
  • [Rag08] Raghavendra P. Optimal algorithms and inapproximability results for every CSP. In: Proceedings of the 40th ACM STOC. 2008. pp. 245–254.
  • [Sch66] Schonemann PH. A generalized solution of the orthogonal Procrustes problem. Psychometrika. 1966;31(1):1–10.
  • [She01] Shen J. On the singular values of Gaussian random matrices. Linear Algebra and its Applications. 2001;326(1–3):1–14.
  • [Sin11] Singer A. Angular synchronization by eigenvectors and semidefinite programming. Appl Comput Harmon Anal. 2011;30(1):20–36. doi: 10.1016/j.acha.2010.02.001.
  • [So11] So AC. Moment inequalities for sums of random matrices and their applications in optimization. Mathematical Programming. 2011;130(1):125–151.
  • [SS11] Singer A, Shkolnisky Y. Three-dimensional structure determination from common lines in cryo-EM by eigenvectors and semidefinite programming. SIAM J Imaging Sciences. 2011;4(2):543–572. doi: 10.1137/090767777.
  • [SZY07] So A, Zhang J, Ye Y. On approximating complex quadratic optimization problems via semidefinite programming relaxations. Math Program Ser B. 2007.
  • [TV04] Tulino AM, Verdú S. Random matrix theory and wireless communications. Commun Inf Theory. 2004;1(1):1–182.
  • [VB96] Vandenberghe L, Boyd S. Semidefinite programming. SIAM Review. 1996;38:49–95.
  • [Ver12] Vershynin R. Introduction to the non-asymptotic analysis of random matrices. In: Eldar Y, Kutyniok G, editors. Compressed Sensing: Theory and Applications. Cambridge University Press; 2012.
  • [WGS12] Wen Z, Goldfarb D, Scheinberg K. Block coordinate descent methods for semidefinite programming. In: Handbook on Semidefinite, Conic and Polynomial Optimization, volume 166 of International Series in Operations Research & Management Science. Springer; US: 2012. pp. 533–564.
