Journal of Inequalities and Applications. 2018 Oct 11;2018(1):282. doi: 10.1186/s13660-018-1861-1

New estimations on the upper bounds for the nuclear norm of a tensor

Xu Kong, Jicheng Li, Xiaolong Wang

Abstract

Using the orthogonal rank of a tensor, a new method for estimating upper bounds on the nuclear norm is presented, and some new tight upper bounds on the nuclear norm are established. Taking the structure information of the tensor into account, an important factor affecting the upper bounds is discussed, and some corresponding properties of the nuclear norm are given. Meanwhile, some new upper bounds on the nuclear norm are obtained.

Keywords: Tensor, Nuclear norm, Orthogonal rank, Upper bound

Introduction

A tensor is a multidimensional array that provides a natural and convenient way to represent multidimensional data such as discrete forms of multivariate functions, images, video sequences, and so on [1, 2, 10, 14]. With the successful and widespread use of the matrix nuclear norm (the sum of the singular values) in information recovery, the nuclear norm of a tensor (see Definition 2.3 in Sect. 2) has become a hot topic in both theory and applications [5–8].

A natural problem is how to compute the nuclear norm of a tensor. Unfortunately, unlike the matrix nuclear norm, the nuclear norm of a tensor depends on the underlying number field, and its computation is NP-hard [5]. Thus, exploring simple polynomial-time computable upper bounds on the nuclear norm is very important.

Concerning the nuclear norm of a tensor, Friedland and Lim [5] established the following upper bound in terms of the Frobenius norm of the tensor.

Theorem 1.1

([5])

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. Then

$$\|\mathcal{X}\|_* \le \sqrt{\prod_{i=1}^{D} n_i}\;\|\mathcal{X}\|_F.$$

In [8], Hu established a tighter upper bound.

Theorem 1.2

(Lemma 5.1 in [8])

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. Then

$$\|\mathcal{X}\|_* \le \sqrt{\frac{\prod_{i=1}^{D} n_i}{\max\{n_1,\dots,n_D\}}}\;\|\mathcal{X}\|_F. \tag{1}$$

Furthermore, Hu established another upper bound on the nuclear norm of a tensor through the nuclear norms of its unfolding matrices.

Theorem 1.3

(Theorem 5.2 in [8])

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. Then

$$\|\mathcal{X}\|_* \le \sqrt{\frac{\prod_{i=2}^{D} n_i}{\max\{n_2,\dots,n_D\}}}\;\|\mathbf{X}_{(1)}\|_*. \tag{2}$$
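For concreteness, both classical bounds are cheap to evaluate numerically. The following Python sketch (an illustration of ours, not code from the paper; all names are our own) computes (1) and (2) for a random third order tensor, using the fact that the matrix nuclear norm is available as `np.linalg.norm(·, ord='nuc')`.

```python
import numpy as np

# Hypothetical numerical check of the bounds (1) and (2) for a random
# third order tensor; function-free, minimal sketch.
rng = np.random.default_rng(0)
n1, n2, n3 = 3, 4, 5
X = rng.standard_normal((n1, n2, n3))

fro = np.linalg.norm(X)                  # Frobenius norm ||X||_F
X1 = X.reshape(n1, n2 * n3)              # a mode-1 unfolding X_(1)
nuc1 = np.linalg.norm(X1, ord='nuc')     # matrix nuclear norm ||X_(1)||_*

dims = (n1, n2, n3)
bound_1 = np.sqrt(np.prod(dims) / max(dims)) * fro   # bound (1)
bound_2 = np.sqrt(n2 * n3 / max(n2, n3)) * nuc1      # bound (2)
print(f"bound (1): {bound_1:.4f}, bound (2): {bound_2:.4f}")
```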

In this paper, we present some new upper bounds on the nuclear norm by using the orthogonal rank of the tensor [9, 12]. Furthermore, taking the structure information of the tensor into account, some new upper bound results are obtained.

Since the spectral norm and the nuclear norm of a tensor are closely related to the underlying number field [6], for the sake of simplicity we always assume that the tensors discussed are nonzero real tensors and, unless mentioned otherwise, we discuss the spectral and nuclear norms over the real field. The notation is as follows: tensors are denoted by calligraphic letters (e.g., $\mathcal{X}$), scalars by plain letters, and matrices and vectors by bold letters (e.g., $\mathbf{X}$ and $\mathbf{x}$).

The rest of the paper is organized as follows. In Sect. 2, we recall some definitions and related results which are needed for the subsequent sections. In Sect. 3, we present the upper bounds on the nuclear norms of general tensors and discuss the factor affecting the upper bounds. Finally, some conclusions are made in Sect. 4.

Notations and preliminaries

This section reviews some concepts and results related to tensors which are needed in the following sections.

Firstly, we discuss the unfolding matrix or matrix representation of a tensor.

Let $\mathcal{X} = (x_{i_1\cdots i_D})\in\mathbb{R}^{n_1\times\cdots\times n_D}$. By grouping several indices of $\mathcal{X}$ as the row index and the remaining indices as the column index, the tensor $\mathcal{X}$ can be reshaped into a matrix [13]. In particular, if a single index is used for the rows, we get the mode-$d$ matricization $\mathbf{X}_{(d)}$, whose columns are the mode-$d$ fibers of the tensor (obtained by fixing every coordinate except the $d$th, $[x_{i_1\cdots i_{d-1}1 i_{d+1}\cdots i_D},\dots,x_{i_1\cdots i_{d-1}n_d i_{d+1}\cdots i_D}]^{\mathrm{T}}$), arranged in a cyclic ordering; see [3] for details. Conversely, a matrix can be reshaped back into a tensor by the inverse operation.
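As an illustration (our own helper, not from the paper), the mode-$d$ matricization can be realized in NumPy as follows; any fixed column ordering is acceptable here, since a column permutation does not change the singular values of the unfolding.

```python
import numpy as np

def unfold(X: np.ndarray, d: int) -> np.ndarray:
    """Mode-d matricization: rows indexed by i_d, columns by the remaining
    indices in one fixed convention (column order does not affect the
    singular values)."""
    return np.moveaxis(X, d, 0).reshape(X.shape[d], -1)

X = np.arange(24).reshape(2, 3, 4)
print(unfold(X, 1).shape)   # (3, 8): mode-2 unfolding of a 2x3x4 tensor
```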

In what follows, we review some definitions of the tensor norms.

Definition 2.1

([3])

Let $\mathcal{X} = (x_{i_1\cdots i_D})\in\mathbb{R}^{n_1\times\cdots\times n_D}$. The Frobenius norm (or Hilbert–Schmidt norm) of the tensor $\mathcal{X}$ is defined as

$$\|\mathcal{X}\|_F = \sqrt{\langle\mathcal{X},\mathcal{X}\rangle} = \Bigl(\sum_{i_1=1}^{n_1}\cdots\sum_{i_D=1}^{n_D} x_{i_1\cdots i_D}^2\Bigr)^{1/2}.$$

Definition 2.2

([6])

Let “∘” denote the outer product operation. The spectral norm of $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$ is defined as

$$\|\mathcal{X}\|_2 = \max\bigl\{\langle\mathcal{X},\, x^{(1)}\circ\cdots\circ x^{(D)}\rangle : x^{(d)}\in\mathbb{R}^{n_d},\ \|x^{(d)}\|_2 = 1,\ 1\le d\le D\bigr\}.$$

Furthermore, $\|\mathcal{X}\|_2$ is equal to the Frobenius norm of the best rank-one approximation of the tensor $\mathcal{X}$.

Similar to the matrix case, the nuclear norm can be defined through the dual norm of the spectral norm.

Definition 2.3

([6])

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. The nuclear norm of $\mathcal{X}$ is defined as the dual norm of the spectral norm. That is,

$$\|\mathcal{X}\|_* := \max\bigl\{\langle\mathcal{X},\mathcal{Y}\rangle : \mathcal{Y}\in\mathbb{R}^{n_1\times\cdots\times n_D},\ \|\mathcal{Y}\|_2 = 1\bigr\}. \tag{3}$$

For the nuclear norm defined by (3), it can be shown that

$$\|\mathcal{X}\|_* = \min\Bigl\{\sum_{p=1}^{P}|\lambda_p| : \mathcal{X} = \sum_{p=1}^{P}\lambda_p\, x_p^{(1)}\circ\cdots\circ x_p^{(D)},\ \|x_p^{(d)}\|_2 = 1,\ x_p^{(d)}\in\mathbb{R}^{n_d},\ \lambda_p\in\mathbb{R},\ P\in\mathbb{N}\Bigr\}.$$
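The spectral norm in Definition 2.2 can be estimated by an alternating rank-one (higher-order power) iteration, and (3) then yields the lower bound $\langle\mathcal{X},\mathcal{X}\rangle/\|\mathcal{X}\|_2 \le \|\mathcal{X}\|_*$, the device used in Example 3.1 below. The sketch below is heuristic (it may return only a local maximizer) and the function name is ours.

```python
import numpy as np

def spectral_norm_est(X, iters=200, seed=0):
    """Heuristic estimate of ||X||_2 for a third order tensor by
    alternating updates of the rank-one factors (may find a local max)."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(n) for n in X.shape)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', X, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', X, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', X, u, v); w /= np.linalg.norm(w)
    return np.einsum('ijk,i,j,k->', X, u, v, w)

X = np.random.default_rng(1).standard_normal((3, 3, 2))
sigma = spectral_norm_est(X)
print('estimated ||X||_2:', sigma)
print('nuclear norm lower bound <X,X>/||X||_2:', np.sum(X * X) / sigma)
```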

Another important operation involving matrices and tensors is the mode-$d$ multiplication.

Definition 2.4

([3])

Let $\mathcal{X} = (x_{i_1\cdots i_D})\in\mathbb{R}^{n_1\times\cdots\times n_D}$. Then the mode-$d$ multiplication of $\mathcal{X}$ by a matrix $\mathbf{U} = (u_{i_d' i_d})\in\mathbb{R}^{n_d'\times n_d}$ is defined by

$$(\mathcal{X}\times_d \mathbf{U})_{i_1\cdots i_{d-1} i_d' i_{d+1}\cdots i_D} = \sum_{i_d=1}^{n_d} x_{i_1\cdots i_{d-1} i_d i_{d+1}\cdots i_D}\, u_{i_d' i_d},\qquad 1\le d\le D.$$

It should be mentioned that the mode-$d$ multiplication is also available for $n_d' = 1$.

Furthermore, let

$$(\mathbf{W}^{(1)},\dots,\mathbf{W}^{(D)})\cdot\mathcal{X} = \mathcal{X}\times_1\mathbf{W}^{(1)}\times_2\cdots\times_D\mathbf{W}^{(D)}.$$

If for all $1\le d\le D$ the matrices $\mathbf{W}^{(d)}$ are orthogonal ($\mathbf{W}^{(d)}\mathbf{W}^{(d)\mathrm{T}}$ is an identity matrix), then $(\mathbf{W}^{(1)},\dots,\mathbf{W}^{(D)})\cdot\mathcal{X}$ is called a multi-linear orthogonal transformation of the tensor $\mathcal{X}$.
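In code, the mode-$d$ product of Definition 2.4 is a single `tensordot`, and a multi-linear orthogonal transformation is a chain of such products. The sketch below (our naming; random orthogonal factors obtained via QR) also checks the invariance of the Frobenius norm used later in the paper.

```python
import numpy as np

def mode_mult(X, U, d):
    """Mode-d product X x_d U: contracts the d-th index of X with the
    second index of U, as in Definition 2.4."""
    return np.moveaxis(np.tensordot(U, X, axes=(1, d)), 0, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
# Random orthogonal factors via QR decompositions.
Ws = [np.linalg.qr(rng.standard_normal((n, n)))[0] for n in X.shape]
Y = X
for d, W in enumerate(Ws):
    Y = mode_mult(Y, W, d)
# The Frobenius norm is invariant under multi-linear orthogonal transforms.
print(np.linalg.norm(X), np.linalg.norm(Y))
```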

Finally in this section, we introduce the tool used in the paper for the estimation of the upper bounds.

Definition 2.5

([9])

The orthogonal rank of $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$ is defined as the smallest number $R$ such that

$$\mathcal{X} = \sum_{r=1}^{R}\mathcal{U}_r, \tag{4}$$

where the $\mathcal{U}_r$ ($1\le r\le R$) are rank-one tensors such that $\langle\mathcal{U}_{r_1},\mathcal{U}_{r_2}\rangle = 0$ for $r_1\ne r_2$, $1\le r_1, r_2\le R$.

The decomposition of X given by (4) is also called the orthogonal decomposition of the tensor X.

For the orthogonal rank of a tensor, the following conclusion is true.

Theorem 2.1

([11])

Let $n_1\le\cdots\le n_D$. Then, for any $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$, it holds that

$$r_\perp(\mathcal{X}) \le \prod_{i=1}^{D-1} n_i,$$

where $r_\perp(\mathcal{X})$ denotes the orthogonal rank of $\mathcal{X}$.

Noting the fact that relabeling the indices does not change the orthogonal rank of a tensor, Theorem 2.1 implies that, for any $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$, it holds that

$$r_\perp(\mathcal{X}) \le \frac{\prod_{i=1}^{D} n_i}{\max\{n_1,\dots,n_D\}}. \tag{5}$$

In particular, for third order tensors, the following result was established in [11].

Lemma 2.1

([11])

Let $n\ge 2$. Then, for any $\mathcal{X}\in\mathbb{R}^{n\times n\times 2}$, the following holds:

$$r_\perp(\mathcal{X}) \le \begin{cases} 2n-1, & \text{if } n \text{ is odd};\\ 2n, & \text{if } n \text{ is even}.\end{cases}$$

Upper bounds of the nuclear norm

In this section, we discuss the upper bounds on the nuclear norm of a tensor. Meanwhile, some properties and polynomial-time computable bounds related to the nuclear norm will be given.

Upper bounds given by the Frobenius norm

In this subsection, we use the orthogonal rank of a general tensor to establish the upper bounds on the nuclear norm through the Frobenius norm of this tensor.

Theorem 3.1

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. Suppose that

$$R = \max_{\mathcal{Y}\in\mathbb{R}^{n_1\times\cdots\times n_D}}\{r_\perp(\mathcal{Y})\}.$$

Then

$$\|\mathcal{X}\|_* \le \sqrt{R}\;\|\mathcal{X}\|_F. \tag{6}$$

Proof

Let $\mathcal{Y}\in\mathbb{R}^{n_1\times\cdots\times n_D}$ be an arbitrary nonzero tensor with orthogonal rank $R_y$, and suppose that

$$\mathcal{Y} = \sum_{r=1}^{R_y}\mathcal{U}_r$$

is an orthogonal rank decomposition of the tensor $\mathcal{Y}$, where $R_y\le R$.

Then, according to the properties of the orthogonal rank decomposition and of the best rank-one approximation, we have

$$\|\mathcal{Y}\|_2 \ge \max_{1\le r\le R_y}\{\|\mathcal{U}_r\|_F\} \tag{7}$$

and

$$\|\mathcal{Y}\|_F^2 = \sum_{r=1}^{R_y}\|\mathcal{U}_r\|_F^2. \tag{8}$$

Indeed, (7) holds because $\|\mathcal{Y}\|_2 \ge \langle\mathcal{Y},\mathcal{U}_r/\|\mathcal{U}_r\|_F\rangle = \|\mathcal{U}_r\|_F$ for each $r$, by the orthogonality of the $\mathcal{U}_r$.

Without loss of generality, suppose that

$$\|\mathcal{U}_1\|_F = \max_{1\le r\le R_y}\{\|\mathcal{U}_r\|_F\}.$$

Then it follows from (7) and (8) that

$$\frac{\langle\mathcal{X},\mathcal{Y}\rangle}{\|\mathcal{Y}\|_2} \le \frac{\|\mathcal{X}\|_F\,\|\mathcal{Y}\|_F}{\|\mathcal{Y}\|_2} \le \frac{\|\mathcal{X}\|_F\sqrt{\sum_{r=1}^{R_y}\|\mathcal{U}_r\|_F^2}}{\|\mathcal{U}_1\|_F} \le \frac{\|\mathcal{X}\|_F\sqrt{R_y\|\mathcal{U}_1\|_F^2}}{\|\mathcal{U}_1\|_F} = \sqrt{R_y}\,\|\mathcal{X}\|_F \le \sqrt{R}\,\|\mathcal{X}\|_F. \tag{9}$$

Thus, according to the arbitrariness of $\mathcal{Y}$ and (9), we get

$$\max_{\mathcal{Y}\in\mathbb{R}^{n_1\times\cdots\times n_D}}\Bigl\{\frac{\langle\mathcal{X},\mathcal{Y}\rangle}{\|\mathcal{Y}\|_2}\Bigr\} \le \sqrt{R}\,\|\mathcal{X}\|_F.$$

Noting the definition of the nuclear norm (Definition 2.3), the conclusion is established. □

Remark 3.1

Comparing the upper bound given by (6) with the upper bound given by (1), which is obtained in [8], the new bound (6) is tighter.

Indeed, it follows from (5) that the upper bound given by (6) improves the upper bound given by (1).

More specifically, we present a simple example showing that the upper bound given by Theorem 3.1 not only can be tighter than the bound (1) but is also sharp.

Example 3.1

Let

$$\mathcal{A} = \left[\begin{array}{ccc|ccc} 0&1&0&-1&0&0\\ 1&0&0&0&1&0\\ 0&0&0&0&0&1 \end{array}\right]\in\mathbb{R}^{3\times3\times2},$$

where the two $3\times3$ blocks are the frontal slices $\mathcal{A}(:,:,1)$ and $\mathcal{A}(:,:,2)$.

By Theorem 3.1 and Lemma 2.1, we get

$$\|\mathcal{A}\|_* \le \sqrt{5}\,\sqrt{1^2+1^2+(-1)^2+1^2+1^2} = 5 < \sqrt{\frac{3\times3\times2}{3}}\,\sqrt{5} = \sqrt{30}.$$

This means that the upper bound given by (6) is tighter than the upper bound given by (1).

Furthermore, by a simple computation, we get $\|\mathcal{A}\|_2 = 1$. Then it follows from the definition of the nuclear norm that

$$\|\mathcal{A}\|_* \ge \frac{\langle\mathcal{A},\mathcal{A}\rangle}{\|\mathcal{A}\|_2} = \frac{\|\mathcal{A}\|_F^2}{\|\mathcal{A}\|_2} = \frac{5}{1} = 5.$$

Thus, it holds that

$$\|\mathcal{A}\|_* = 5.$$

Actually,

$$\mathcal{A} = e_{1;3}\circ e_{2;3}\circ e_{1;2} + e_{2;3}\circ e_{1;3}\circ e_{1;2} + (-e_{1;3})\circ e_{1;3}\circ e_{2;2} + e_{2;3}\circ e_{2;3}\circ e_{2;2} + e_{3;3}\circ e_{3;3}\circ e_{2;2} \tag{10}$$

is a nuclear decomposition of $\mathcal{A}$, where $e_{1;3} = [1,0,0]^{\mathrm{T}}$, $e_{2;3} = [0,1,0]^{\mathrm{T}}$, $e_{3;3} = [0,0,1]^{\mathrm{T}}$, $e_{1;2} = [1,0]^{\mathrm{T}}$, and $e_{2;2} = [0,1]^{\mathrm{T}}$. Since

$$\langle\mathcal{A}, e_{1;3}\circ e_{2;3}\circ e_{1;2}\rangle = \langle\mathcal{A}, e_{2;3}\circ e_{1;3}\circ e_{1;2}\rangle = \langle\mathcal{A}, (-e_{1;3})\circ e_{1;3}\circ e_{2;2}\rangle = \langle\mathcal{A}, e_{2;3}\circ e_{2;3}\circ e_{2;2}\rangle = \langle\mathcal{A}, e_{3;3}\circ e_{3;3}\circ e_{2;2}\rangle,$$

then, according to the necessary and sufficient conditions for a nuclear decomposition obtained in [6], we get that (10) is a nuclear decomposition of $\mathcal{A}$.

This also means that the upper bound given by Theorem 3.1 is a sharp upper bound of the nuclear norm.
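The quantities in Example 3.1 are easy to confirm numerically; the following sketch (ours, not from the paper) rebuilds $\mathcal{A}$, checks $\|\mathcal{A}\|_F^2 = 5$, and estimates $\|\mathcal{A}\|_2$ by a coarse random search over rank-one tensors.

```python
import numpy as np

# Rebuild the tensor A of Example 3.1 (frontal slices A[:,:,0], A[:,:,1]).
A = np.zeros((3, 3, 2))
A[:, :, 0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
A[:, :, 1] = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]
print('||A||_F^2 =', np.sum(A * A))          # expected: 5

# Coarse random search for ||A||_2 = max <A, u o v o w> over unit vectors.
rng = np.random.default_rng(0)
best = 0.0
for _ in range(20000):
    u, v, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(2)
    val = abs(np.einsum('ijk,i,j,k->', A, u, v, w))
    val /= np.linalg.norm(u) * np.linalg.norm(v) * np.linalg.norm(w)
    best = max(best, val)
print('search estimate of ||A||_2:', best)    # expected: close to 1
```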

Upper bounds given by nuclear norms of the unfolding matrices of a tensor

In this subsection, we present a new way to establish the upper bounds on the nuclear norm of a tensor through the nuclear norms of the unfolding matrices of this tensor.

Theorem 3.2

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. Suppose that

$$\tilde{R} = \max_{\mathcal{Y}\in\mathbb{R}^{n_2\times\cdots\times n_D}}\{r_\perp(\mathcal{Y})\}.$$

Then

$$\|\mathcal{X}\|_* \le \sqrt{\tilde{R}}\;\|\mathbf{X}_{(1)}\|_*. \tag{11}$$

Proof

Let the singular value decomposition of the matrix $\mathbf{X}_{(1)}$ be

$$\mathbf{X}_{(1)} = \sigma_1 u_1 v_1^{\mathrm{T}} + \cdots + \sigma_S u_S v_S^{\mathrm{T}}, \tag{12}$$

where $u_s\in\mathbb{R}^{n_1}$, $v_s\in\mathbb{R}^{n_2\cdots n_D}$, $\|u_s\|_2 = 1$, $\|v_s\|_2 = 1$, and $\sigma_s > 0$ for $1\le s\le S$.

Then equality (12) can be expressed in the following form:

$$\mathcal{X} = \sigma_1\, u_1\circ\mathcal{V}_1 + \cdots + \sigma_S\, u_S\circ\mathcal{V}_S, \tag{13}$$

where $\mathcal{V}_s\in\mathbb{R}^{n_2\times\cdots\times n_D}$ is obtained by reordering the vector $v_s$ into a $(D-1)$th order tensor in a fixed order, $1\le s\le S$. Suppose that the orthogonal rank decomposition of $\mathcal{V}_s$ is

$$\mathcal{V}_s = v_1^{(1,s)}\circ\cdots\circ v_1^{(D-1,s)} + \cdots + v_{P_s}^{(1,s)}\circ\cdots\circ v_{P_s}^{(D-1,s)},\qquad 1\le s\le S.$$

Then, substituting the expressions of $\mathcal{V}_s$ into the right-hand side of (13), we get

$$\begin{aligned}\mathcal{X} &= \sigma_1\sum_{i=1}^{P_1} u_1\circ v_i^{(1,1)}\circ\cdots\circ v_i^{(D-1,1)} + \cdots + \sigma_S\sum_{i=1}^{P_S} u_S\circ v_i^{(1,S)}\circ\cdots\circ v_i^{(D-1,S)}\\
&= \sigma_1\sum_{i=1}^{P_1}\bigl(\|v_i^{(1,1)}\|_2\cdots\|v_i^{(D-1,1)}\|_2\bigr)\, u_1\circ\frac{v_i^{(1,1)}}{\|v_i^{(1,1)}\|_2}\circ\cdots\circ\frac{v_i^{(D-1,1)}}{\|v_i^{(D-1,1)}\|_2} + \cdots\\
&\quad + \sigma_S\sum_{i=1}^{P_S}\bigl(\|v_i^{(1,S)}\|_2\cdots\|v_i^{(D-1,S)}\|_2\bigr)\, u_S\circ\frac{v_i^{(1,S)}}{\|v_i^{(1,S)}\|_2}\circ\cdots\circ\frac{v_i^{(D-1,S)}}{\|v_i^{(D-1,S)}\|_2}. \end{aligned}\tag{14}$$

Noting that

$$\|\mathcal{V}_s\|_F^2 = \sum_{i=1}^{P_s}\|v_i^{(1,s)}\circ\cdots\circ v_i^{(D-1,s)}\|_F^2 = \sum_{i=1}^{P_s}\|v_i^{(1,s)}\|_2^2\cdots\|v_i^{(D-1,s)}\|_2^2 = \|v_s\|_2^2 = 1, \tag{15}$$

where $P_s\le\tilde{R}$ for $1\le s\le S$, it follows from the definition of the nuclear norm and (14) that

$$\begin{aligned}\|\mathcal{X}\|_* &\le \sigma_1\sum_{i=1}^{P_1}\|v_i^{(1,1)}\|_2\cdots\|v_i^{(D-1,1)}\|_2 + \cdots + \sigma_S\sum_{i=1}^{P_S}\|v_i^{(1,S)}\|_2\cdots\|v_i^{(D-1,S)}\|_2\\
&\le \sigma_1\sqrt{P_1} + \cdots + \sigma_S\sqrt{P_S}\qquad\text{(by (15) and the Cauchy–Schwarz inequality)}\\
&\le \sigma_1\sqrt{\tilde{R}} + \cdots + \sigma_S\sqrt{\tilde{R}} = \sqrt{\tilde{R}}\,\|\mathbf{X}_{(1)}\|_*.\end{aligned}$$

 □

Remark 3.2

Comparing the upper bound given by (11) with the upper bound given by (2), which is obtained in [8], the new bound (11) is smaller.

Indeed, it follows from inequality (5) that the upper bound given by (11) improves the upper bound given by (2).

Similar to the discussion of Hu [8], analogous upper bounds can be obtained from other unfoldings, and they can be further improved by taking into account the multi-linear ranks of the tensor (the ranks of its unfolding matrices).

Corollary 3.1

Let $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$ and $r_d = \operatorname{rank}(\mathbf{X}_{(d)})$, $1\le d\le 3$. Then

$$\|\mathcal{X}\|_* \le \frac{\sqrt{\min\{r_2,r_3\}}\,\|\mathbf{X}_{(1)}\|_* + \sqrt{\min\{r_3,r_1\}}\,\|\mathbf{X}_{(2)}\|_* + \sqrt{\min\{r_1,r_2\}}\,\|\mathbf{X}_{(3)}\|_*}{3}.$$

Proof

According to the conditions of the corollary and the higher order singular value decomposition of the tensor [3], the tensor $\mathcal{X}$ can be expressed as

$$\mathcal{X} = (\mathbf{W}^{(1)},\mathbf{W}^{(2)},\mathbf{W}^{(3)})\cdot\tilde{\mathcal{X}},$$

where $\tilde{\mathcal{X}}\in\mathbb{R}^{r_1\times r_2\times r_3}$ and the $\mathbf{W}^{(d)}\in\mathbb{R}^{n_d\times r_d}$ satisfy that $\mathbf{W}^{(d)\mathrm{T}}\mathbf{W}^{(d)}$ is an identity matrix for all $1\le d\le 3$.

By the definition of the tensor nuclear norm (Definition 2.3), one can easily verify the following conclusions:

$$\|\tilde{\mathcal{X}}\|_* = \|\mathcal{X}\|_* \tag{16}$$

and

$$\|\tilde{\mathbf{X}}_{(1)}\|_* = \|\mathbf{X}_{(1)}\|_*. \tag{17}$$

It follows from Theorem 3.2 and (17) that

$$\|\tilde{\mathcal{X}}\|_* \le \sqrt{\max_{\mathbf{Y}\in\mathbb{R}^{r_2\times r_3}}\{r_\perp(\mathbf{Y})\}}\;\|\tilde{\mathbf{X}}_{(1)}\|_* \le \sqrt{\min\{r_2,r_3\}}\,\|\mathbf{X}_{(1)}\|_*,$$

where the last inequality holds since, for a matrix, the orthogonal rank coincides with the usual rank, which is at most $\min\{r_2,r_3\}$.

Noting (16), we get

$$\|\mathcal{X}\|_* \le \sqrt{\min\{r_2,r_3\}}\,\|\mathbf{X}_{(1)}\|_*.$$

Similarly, we have

$$\|\mathcal{X}\|_* \le \sqrt{\min\{r_3,r_1\}}\,\|\mathbf{X}_{(2)}\|_*$$

and

$$\|\mathcal{X}\|_* \le \sqrt{\min\{r_1,r_2\}}\,\|\mathbf{X}_{(3)}\|_*.$$

Averaging the three inequalities, the conclusion is obtained. □
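A sketch of Corollary 3.1 in code (our own implementation under the stated assumptions, not code from the paper): the multi-linear ranks are the matrix ranks of the three unfoldings, and the bound averages the three unfolding-based estimates.

```python
import numpy as np

def unfold(X, d):
    return np.moveaxis(X, d, 0).reshape(X.shape[d], -1)

def corollary_3_1_bound(X):
    """Upper bound of Corollary 3.1 for a third order tensor."""
    r = [np.linalg.matrix_rank(unfold(X, d)) for d in range(3)]
    nucs = [np.linalg.norm(unfold(X, d), ord='nuc') for d in range(3)]
    pairs = [min(r[1], r[2]), min(r[2], r[0]), min(r[0], r[1])]
    return sum(np.sqrt(p) * s for p, s in zip(pairs, nucs)) / 3.0

X = np.random.default_rng(0).standard_normal((3, 4, 5))
print('Corollary 3.1 bound:', corollary_3_1_bound(X))
```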

Factors affecting the upper bounds on the nuclear norm and further results

In this subsection, we discuss the factors affecting the nuclear norm of a tensor, focusing on the structure of the tensor. Based on this discussion, some new upper bounds on the nuclear norm are presented.

Firstly, we give a simple example to illustrate that the nuclear norm of a tensor is closely related to the structure of this tensor.

Example 3.2

Let

$$\mathcal{A} = \left[\begin{array}{cc|cc} 0&1&-1&0\\ 1&0&0&1 \end{array}\right]\in\mathbb{R}^{2\times2\times2}, \tag{18}$$

where the two $2\times2$ blocks are the frontal slices.

Similar to the discussion of Example 3.1, since

$$\mathcal{A} = e_{1;2}\circ e_{2;2}\circ e_{1;2} + e_{2;2}\circ e_{1;2}\circ e_{1;2} + (-e_{1;2})\circ e_{1;2}\circ e_{2;2} + e_{2;2}\circ e_{2;2}\circ e_{2;2}$$

and

$$\langle\mathcal{A}, e_{1;2}\circ e_{2;2}\circ e_{1;2}\rangle = \langle\mathcal{A}, e_{2;2}\circ e_{1;2}\circ e_{1;2}\rangle = \langle\mathcal{A}, (-e_{1;2})\circ e_{1;2}\circ e_{2;2}\rangle = \langle\mathcal{A}, e_{2;2}\circ e_{2;2}\circ e_{2;2}\rangle,$$

then, according to the necessary and sufficient conditions for a nuclear decomposition obtained in [6], we get

$$\|\mathcal{A}\|_* = 4.$$

It is well known that the nuclear norm of a tensor is closely related to the number field [6]. Actually, the tensor $\mathcal{A}$ can be expressed in the following form:

$$\mathcal{A} = \frac{1}{2}\begin{bmatrix}1\\ i\end{bmatrix}\circ\begin{bmatrix}1\\ i\end{bmatrix}\circ\begin{bmatrix}-i\\ -1\end{bmatrix} + \frac{1}{2}\begin{bmatrix}1\\ -i\end{bmatrix}\circ\begin{bmatrix}1\\ -i\end{bmatrix}\circ\begin{bmatrix}i\\ -1\end{bmatrix}.$$

Let

$$\mathbf{W}^{(1)} = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & -i\\ 1 & i\end{bmatrix},\qquad \mathbf{W}^{(2)} = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & -i\\ 1 & i\end{bmatrix},\qquad \mathbf{W}^{(3)} = \frac{1}{\sqrt{2}}\begin{bmatrix}i & -1\\ -i & -1\end{bmatrix}.$$

Then it holds that

$$(\mathbf{W}^{(1)},\mathbf{W}^{(2)},\mathbf{W}^{(3)})\cdot\mathcal{A} = \left[\begin{array}{cc|cc} \sqrt{2}&0&0&0\\ 0&0&0&\sqrt{2} \end{array}\right]. \tag{19}$$

For the sake of convenience, let $\hat{\mathcal{A}} = (\mathbf{W}^{(1)},\mathbf{W}^{(2)},\mathbf{W}^{(3)})\cdot\mathcal{A}$. Then, using the same method as above, we have

$$\langle\hat{\mathcal{A}}, e_{1;2}\circ e_{1;2}\circ e_{1;2}\rangle = \langle\hat{\mathcal{A}}, e_{2;2}\circ e_{2;2}\circ e_{2;2}\rangle.$$

Thus $\|\hat{\mathcal{A}}\|_* = 2\sqrt{2}$. Since all three matrices $\mathbf{W}^{(k)}$ ($1\le k\le 3$) are unitary, based on the invariance of the nuclear norm of a tensor under multi-linear unitary transformations, we get

$$\|\mathcal{A}\|_{*,\mathbb{C}} = \|\hat{\mathcal{A}}\|_{*,\mathbb{C}} = 2\sqrt{2},$$

where $\|\cdot\|_{*,\mathbb{C}}$ denotes the nuclear norm over the complex field.
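The unitary diagonalization (19) can be verified mechanically. The sketch below (ours; the signs of the factors follow the reconstruction above) applies the three mode products to $\mathcal{A}$ and prints the resulting diagonal tensor.

```python
import numpy as np

def mode_mult(X, U, d):
    # Mode-d product X x_d U as in Definition 2.4.
    return np.moveaxis(np.tensordot(U, X, axes=(1, d)), 0, d)

# The tensor A of Example 3.2 and the unitary factors reconstructed above.
A = np.zeros((2, 2, 2), dtype=complex)
A[:, :, 0] = [[0, 1], [1, 0]]
A[:, :, 1] = [[-1, 0], [0, 1]]
i = 1j
W1 = np.array([[1, -i], [1, i]]) / np.sqrt(2)
W2 = W1.copy()
W3 = np.array([[i, -1], [-i, -1]]) / np.sqrt(2)

Ahat = mode_mult(mode_mult(mode_mult(A, W1, 0), W2, 1), W3, 2)
# Expected: sqrt(2) at positions (0,0,0) and (1,1,1), zeros elsewhere.
print(np.round(Ahat, 10))
```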

Noting the structures of the tensors given by (18) and (19), the above derivation shows that the nuclear norm of a tensor is closely related to the structure of the tensor. In what follows, we discuss block diagonal tensors, which are illustrated by Fig. 1. A block diagonal tensor can be expressed via the direct sum operation “⊕” [4], which is defined as follows:

Figure 1. The block diagonal tensor with three diagonal blocks.

Let $\mathcal{A} = (a_{i_1\cdots i_D})\in\mathbb{R}^{n_1\times\cdots\times n_D}$ and $\mathcal{B} = (b_{j_1\cdots j_D})\in\mathbb{R}^{n_1'\times\cdots\times n_D'}$. Then the direct sum of $\mathcal{A}$ and $\mathcal{B}$ is the order-$D$ tensor $\mathcal{C} = (c_{i_1\cdots i_D}) = \mathcal{A}\oplus\mathcal{B}\in\mathbb{R}^{(n_1+n_1')\times\cdots\times(n_D+n_D')}$ defined by

$$c_{i_1\cdots i_D} = \begin{cases} a_{i_1\cdots i_D}, & \text{if } 1\le i_\alpha\le n_\alpha,\ \alpha = 1,2,\dots,D;\\ b_{i_1-n_1,\dots,i_D-n_D}, & \text{if } n_\alpha+1\le i_\alpha\le n_\alpha+n_\alpha',\ \alpha = 1,2,\dots,D;\\ 0, & \text{otherwise}.\end{cases}$$
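A minimal sketch of the direct sum in NumPy (our helper, with a hypothetical name):

```python
import numpy as np

def direct_sum(A, B):
    """Direct sum of two tensors of the same order: A occupies the leading
    corner, B the trailing corner, zeros elsewhere."""
    shape = tuple(a + b for a, b in zip(A.shape, B.shape))
    C = np.zeros(shape, dtype=np.result_type(A, B))
    C[tuple(slice(0, n) for n in A.shape)] = A
    C[tuple(slice(n, None) for n in A.shape)] = B
    return C

A = np.ones((2, 2, 2))
B = 2 * np.ones((1, 3, 2))
print(direct_sum(A, B).shape)   # (3, 5, 4)
```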

Based on the discussion above, we present some properties of the spectral norm and nuclear norm of the tensor.

Lemma 3.1

Let $\mathcal{X}^{(l)}\in\mathbb{R}^{n_1^{(l)}\times\cdots\times n_D^{(l)}}$, $1\le l\le L$, and

$$\mathcal{X} = \mathcal{X}^{(1)}\oplus\cdots\oplus\mathcal{X}^{(L)}\in\mathbb{R}^{(\sum_{l=1}^{L} n_1^{(l)})\times\cdots\times(\sum_{l=1}^{L} n_D^{(l)})}.$$

Then

$$\|\mathcal{X}\|_2 = \max_{1\le l\le L}\{\|\mathcal{X}^{(l)}\|_2\}. \tag{20}$$

Proof

According to the definition of the spectral norm of a tensor (Definition 2.2), it is easy to get

$$\|\mathcal{X}\|_2 \ge \max_{1\le l\le L}\{\|\mathcal{X}^{(l)}\|_2\}.$$

Thus, it remains to show that

$$\|\mathcal{X}\|_2 \le \max_{1\le l\le L}\{\|\mathcal{X}^{(l)}\|_2\}. \tag{21}$$

Firstly, we consider the case of the third order tensors.

Suppose that $\mathcal{X}^{(l)}\in\mathbb{R}^{n_1^{(l)}\times n_2^{(l)}\times n_3^{(l)}}$ ($1\le l\le L$),

$$\mathcal{X} = \mathcal{X}^{(1)}\oplus\cdots\oplus\mathcal{X}^{(L)}\in\mathbb{R}^{(\sum_{l=1}^{L} n_1^{(l)})\times(\sum_{l=1}^{L} n_2^{(l)})\times(\sum_{l=1}^{L} n_3^{(l)})},$$

and that $\sigma\, u\circ v\circ w$ is the best rank-one approximation of $\mathcal{X}$, where $\sigma = \|\mathcal{X}\|_2$, $u\in\mathbb{R}^{\sum_{l=1}^{L} n_1^{(l)}}$, $v\in\mathbb{R}^{\sum_{l=1}^{L} n_2^{(l)}}$, $w = [w_1^{\mathrm{T}},\dots,w_L^{\mathrm{T}}]^{\mathrm{T}}\in\mathbb{R}^{\sum_{l=1}^{L} n_3^{(l)}}$ with $w_l\in\mathbb{R}^{n_3^{(l)}}$ ($1\le l\le L$), and $\|u\|_2 = \|v\|_2 = \|w\|_2 = 1$. Then the matrix

$$\mathcal{X}\times_3 w^{\mathrm{T}} = \begin{bmatrix}\mathcal{X}^{(1)}\times_3 w_1^{\mathrm{T}} & & \\ & \ddots & \\ & & \mathcal{X}^{(L)}\times_3 w_L^{\mathrm{T}}\end{bmatrix}$$

is a block diagonal matrix. It follows that

$$\|\mathcal{X}\|_2 = \|\mathcal{X}\times_3 w^{\mathrm{T}}\|_2 = \max_{1\le l\le L}\{\|\mathcal{X}^{(l)}\times_3 w_l^{\mathrm{T}}\|_2\} \le \max_{1\le l\le L}\{\|\mathcal{X}^{(l)}\|_2\},$$

where the last inequality holds because $\|w_l\|_2\le 1$ for every $l$.

Hence, inequality (21) is proved. This also implies that equality (20) is true for third order tensors.

Secondly, for tensors of order four or higher, the same result can be established by recursion on the order.

In all, the conclusion is true. □
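For matrices ($D = 2$) the spectral norm is computable exactly, so equality (20) can be sanity-checked directly; a small sketch of ours, assuming SciPy is available for `block_diag`:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((n, n + 1)) for n in (2, 3, 4)]
X = block_diag(*blocks)
# Spectral norm of the direct sum equals the max over the blocks, eq. (20).
lhs = np.linalg.norm(X, ord=2)
rhs = max(np.linalg.norm(B, ord=2) for B in blocks)
print(lhs, rhs)   # equal up to floating point
```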

Then, based on Lemma 3.1, the following two results related to the nuclear norms of tensors can be established.

Lemma 3.2

Let $\mathcal{O}\in\mathbb{R}^{n_1^{(1)}\times\cdots\times n_D^{(1)}}$ be a zero tensor, and let $\mathcal{X}\in\mathbb{R}^{n_1^{(2)}\times\cdots\times n_D^{(2)}}$. Then

$$\|\mathcal{O}\oplus\mathcal{X}\|_* = \|\mathcal{X}\oplus\mathcal{O}\|_* = \|\mathcal{X}\|_*.$$

Proof

Suppose that

$$\|\mathcal{X}\|_* = \sum_{p=1}^{P}|\sigma_p|$$

and

$$\mathcal{X} = \sum_{p=1}^{P}\sigma_p\, x_p^{(1)}\circ\cdots\circ x_p^{(D)},$$

where for all $1\le p\le P$ the $x_p^{(1)}\circ\cdots\circ x_p^{(D)}$ are rank-one tensors with $\|x_p^{(1)}\|_2 = \cdots = \|x_p^{(D)}\|_2 = 1$. Then

$$\mathcal{O}\oplus\mathcal{X} = \sum_{p=1}^{P}\sigma_p\begin{bmatrix}0\\ x_p^{(1)}\end{bmatrix}\circ\cdots\circ\begin{bmatrix}0\\ x_p^{(D)}\end{bmatrix},$$

where the $0$s denote zero vectors of suitable dimensions. This implies

$$\|\mathcal{O}\oplus\mathcal{X}\|_* \le \|\mathcal{X}\|_*.$$

Furthermore, assume that

$$\|\mathcal{X}\|_* = \langle\mathcal{X},\mathcal{Y}\rangle,$$

where $\mathcal{Y}\in\mathbb{R}^{n_1^{(2)}\times\cdots\times n_D^{(2)}}$ and $\|\mathcal{Y}\|_2 = 1$.

Then, by Lemma 3.1, we have

$$\|\mathcal{O}\oplus\mathcal{Y}\|_2 = \|\mathcal{Y}\|_2 = 1.$$

It follows from Definition 2.3 that

$$\|\mathcal{O}\oplus\mathcal{X}\|_* \ge \langle\mathcal{O}\oplus\mathcal{X},\, \mathcal{O}\oplus\mathcal{Y}\rangle = \langle\mathcal{X},\mathcal{Y}\rangle = \|\mathcal{X}\|_*.$$

Thus it holds that $\|\mathcal{O}\oplus\mathcal{X}\|_* = \|\mathcal{X}\|_*$.

Using the same method, the equality $\|\mathcal{X}\oplus\mathcal{O}\|_* = \|\mathcal{X}\|_*$ can be proved. □

Lemma 3.3

Let $\mathcal{X}^{(l)}\in\mathbb{R}^{n_1^{(l)}\times\cdots\times n_D^{(l)}}$, $1\le l\le L$, and

$$\mathcal{X} = \mathcal{X}^{(1)}\oplus\cdots\oplus\mathcal{X}^{(L)}\in\mathbb{R}^{(\sum_{l=1}^{L} n_1^{(l)})\times\cdots\times(\sum_{l=1}^{L} n_D^{(l)})}.$$

Then

$$\|\mathcal{X}\|_* = \sum_{l=1}^{L}\|\mathcal{X}^{(l)}\|_*.$$

Proof

We just need to prove the case of L=2. For the general case, the conclusion can be obtained in a recursive way.

Let

$$\tilde{\mathcal{X}}_1 = \mathcal{X}^{(1)}\oplus\mathcal{O}^{(1)},\qquad \tilde{\mathcal{X}}_2 = \mathcal{O}^{(2)}\oplus\mathcal{X}^{(2)},$$

where $\mathcal{O}^{(1)}\in\mathbb{R}^{n_1^{(2)}\times\cdots\times n_D^{(2)}}$ and $\mathcal{O}^{(2)}\in\mathbb{R}^{n_1^{(1)}\times\cdots\times n_D^{(1)}}$ are both zero tensors.

Then, by using Lemma 3.2, we get

$$\|\mathcal{X}\|_* = \|\tilde{\mathcal{X}}_1 + \tilde{\mathcal{X}}_2\|_* \le \|\tilde{\mathcal{X}}_1\|_* + \|\tilde{\mathcal{X}}_2\|_* = \|\mathcal{X}^{(1)}\|_* + \|\mathcal{X}^{(2)}\|_*. \tag{22}$$

Suppose that

$$\|\mathcal{X}^{(l)}\|_* = \langle\mathcal{X}^{(l)},\mathcal{Y}^{(l)}\rangle,\quad\text{where } \mathcal{Y}^{(l)}\in\mathbb{R}^{n_1^{(l)}\times\cdots\times n_D^{(l)}},\ \|\mathcal{Y}^{(l)}\|_2 = 1,\ l = 1, 2.$$

Then, by Lemma 3.1, we get

$$\|\mathcal{Y}^{(1)}\oplus\mathcal{Y}^{(2)}\|_2 = 1.$$

Thus, according to Definition 2.3, we have

$$\|\mathcal{X}\|_* = \max_{\substack{\mathcal{Y}\in\mathbb{R}^{(n_1^{(1)}+n_1^{(2)})\times\cdots\times(n_D^{(1)}+n_D^{(2)})}\\ \|\mathcal{Y}\|_2 = 1}}\{\langle\mathcal{X},\mathcal{Y}\rangle\} \ge \langle\mathcal{X}^{(1)}\oplus\mathcal{X}^{(2)},\, \mathcal{Y}^{(1)}\oplus\mathcal{Y}^{(2)}\rangle = \|\mathcal{X}^{(1)}\|_* + \|\mathcal{X}^{(2)}\|_*. \tag{23}$$

Combining (22) with (23), the result is obtained. □

Based on the fact that the nuclear norm of a tensor is also invariant under multi-linear orthogonal transformations, we get the following result.

Corollary 3.2

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$. If the tensor $\mathcal{X}$ admits a diagonal structure under multi-linear orthogonal transformations, then

$$\|\mathcal{X}\|_* = \sum_{p=1}^{P}|\sigma_p|,$$

where the $\sigma_p$ ($1\le p\le P$) are the diagonal elements and $P\le n_1$.

The case presented by Corollary 3.2 is consistent with the definition of the nuclear norm in the matrix case, and in this case the nuclear norm of the tensor can be computed exactly.

Taking the structure information of the tensor into account, some new upper bounds on the nuclear norm can be obtained. For ease of comparison, we present upper bounds expressed through the dimensions of the tensor only, without considering the orthogonal rank.

Theorem 3.3

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$, and let $L$ be the maximum number of diagonal blocks that the tensor $\mathcal{X}$ can attain under multi-linear orthogonal transformations. Suppose that the size of the $l$th diagonal block is $n_1^{(l)}\times\cdots\times n_D^{(l)}$ and

$$\tilde{n}_l = \frac{\prod_{i=1}^{D} n_i^{(l)}}{\max\{n_1^{(l)},\dots,n_D^{(l)}\}},\qquad 1\le l\le L.$$

Then it holds that

$$\|\mathcal{X}\|_* \le \sqrt{\sum_{l=1}^{L}\tilde{n}_l}\;\|\mathcal{X}\|_F. \tag{24}$$

Proof

Assume that

$$\mathcal{X} = \mathcal{D}(\mathcal{X})\times_1\mathbf{W}^{(1)}\times_2\cdots\times_D\mathbf{W}^{(D)},$$

where

$$\mathcal{D}(\mathcal{X}) = \mathcal{D}^{(1)}\oplus\cdots\oplus\mathcal{D}^{(L)},$$

the $\mathbf{W}^{(d)}\in\mathbb{R}^{n_d\times n_d}$ ($1\le d\le D$) are orthogonal matrices, and $\mathcal{D}^{(l)}\in\mathbb{R}^{n_1^{(l)}\times\cdots\times n_D^{(l)}}$, $1\le l\le L$.

Then it follows from the invariance of the Frobenius norm of a tensor under multi-linear orthogonal transformations that

$$\|\mathcal{X}\|_F^2 = \sum_{l=1}^{L}\|\mathcal{D}^{(l)}\|_F^2. \tag{25}$$

Furthermore, since the nuclear norm of a tensor is also invariant under multi-linear orthogonal transformations, we get

$$\|\mathcal{X}\|_* = \|\mathcal{D}(\mathcal{X})\|_*.$$

Hence, by Lemma 3.3, Theorem 1.2 applied to each diagonal block, and (25), we get

$$\|\mathcal{X}\|_* = \|\mathcal{D}(\mathcal{X})\|_* = \sum_{l=1}^{L}\|\mathcal{D}^{(l)}\|_* \le \sum_{l=1}^{L}\sqrt{\tilde{n}_l}\,\|\mathcal{D}^{(l)}\|_F \le \sqrt{\sum_{l=1}^{L}\tilde{n}_l}\,\sqrt{\sum_{l=1}^{L}\|\mathcal{D}^{(l)}\|_F^2}\quad\text{(by the Cauchy–Schwarz inequality)}\quad = \sqrt{\sum_{l=1}^{L}\tilde{n}_l}\;\|\mathcal{X}\|_F.$$

 □

Without loss of generality, suppose that

$$n_D = \max\{n_1,\dots,n_D\}.$$

Since

$$n_1 = \sum_{l=1}^{L} n_1^{(l)},\quad\dots,\quad n_{D-1} = \sum_{l=1}^{L} n_{D-1}^{(l)},$$

it is easy to get

$$\frac{\prod_{i=1}^{D} n_i}{\max\{n_1,\dots,n_D\}} = \prod_{i=1}^{D-1} n_i = \Bigl(\sum_{l=1}^{L} n_1^{(l)}\Bigr)\cdots\Bigl(\sum_{l=1}^{L} n_{D-1}^{(l)}\Bigr) \ge \sum_{l=1}^{L}\tilde{n}_l.$$

Thus, the upper bound given by (24) improves (1). Theorem 3.3 also shows that the upper bound on the nuclear norm can be improved by using the structural information of the tensor.
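The improvement of (24) over (1) is visible already at the level of the coefficients; the sketch below (our construction, not from the paper) compares the two for a direct sum of $L$ equal $m\times m\times m$ blocks.

```python
import numpy as np

# Compare the coefficients in bounds (1) and (24) for a direct sum of
# L blocks of size m x m x m (the full tensor is Lm x Lm x Lm).
L, m = 4, 3
dims = (L * m,) * 3
coeff_1 = np.sqrt(np.prod(dims) / max(dims))   # bound (1): sqrt(Lm * Lm)
n_tilde = m * m * m / m                        # per-block factor in (24)
coeff_24 = np.sqrt(L * n_tilde)                # bound (24): sqrt(L * m^2)
print(coeff_1, coeff_24)                       # 12.0 vs 6.0 for L = 4, m = 3
```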

Similarly, the following upper bound can also be obtained.

Theorem 3.4

Let $\mathcal{X}\in\mathbb{R}^{n_1\times\cdots\times n_D}$, and let $L$ be the maximum number of diagonal blocks that the tensor $\mathcal{X}$ can attain under multi-linear orthogonal transformations. Suppose that the size of the $l$th diagonal block is $n_1^{(l)}\times\cdots\times n_D^{(l)}$,

$$\tilde{n}_l = \frac{\prod_{i=2}^{D} n_i^{(l)}}{\max\{n_2^{(l)},\dots,n_D^{(l)}\}},\qquad 1\le l\le L,$$

and

$$\tilde{n} = \max_{1\le l\le L}\{\tilde{n}_l\}.$$

Then it holds that

$$\|\mathcal{X}\|_* \le \sqrt{\tilde{n}}\;\|\mathbf{X}_{(1)}\|_*. \tag{26}$$

Proof

Similar to the proof of Theorem 3.3, assume that

$$\mathcal{X} = \mathcal{D}(\mathcal{X})\times_1\mathbf{W}^{(1)}\times_2\cdots\times_D\mathbf{W}^{(D)},$$

where

$$\mathcal{D}(\mathcal{X}) = \mathcal{D}^{(1)}\oplus\cdots\oplus\mathcal{D}^{(L)}$$

and the $\mathbf{W}^{(d)}\in\mathbb{R}^{n_d\times n_d}$ ($1\le d\le D$) are orthogonal matrices.

Then, by Lemma 3.3 and Theorem 1.3 applied to each diagonal block, it holds that

$$\|\mathcal{X}\|_* = \|\mathcal{D}(\mathcal{X})\|_* = \sum_{l=1}^{L}\|\mathcal{D}^{(l)}\|_* \le \sum_{l=1}^{L}\sqrt{\tilde{n}_l}\,\|\mathbf{D}^{(l)}_{(1)}\|_* \le \sqrt{\tilde{n}}\sum_{l=1}^{L}\|\mathbf{D}^{(l)}_{(1)}\|_* = \sqrt{\tilde{n}}\,\|\mathbf{D}(\mathcal{X})_{(1)}\|_* = \sqrt{\tilde{n}}\;\|\mathbf{X}_{(1)}\|_*,$$

where the last equality holds since the nuclear norm of the unfolding matrix is invariant under the multi-linear orthogonal transformations.

 □

Theorem 3.4 implies that the upper bound given by the nuclear norms of the unfolding matrices is more closely related to the structure of the tensor. For the sake of clarity, we give a simple example as an illustration.

Example 3.3

Let $\mathcal{A}$ be defined as in Example 3.2, and let

$$\mathcal{B} = \mathcal{A}\oplus\mathcal{A} = \left[\begin{array}{cccc|cccc|cccc|cccc}
0&1&0&0&-1&0&0&0&0&0&0&0&0&0&0&0\\
1&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&1&0&0&-1&0\\
0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&1
\end{array}\right]\in\mathbb{R}^{4\times4\times4},$$

where the four $4\times4$ blocks are the frontal slices.

Then, by Theorem 1.2 (noting that $\|\mathcal{B}\|_F = 2\sqrt{2}$), we get

$$\|\mathcal{B}\|_* \le \sqrt{4\times4}\;\|\mathcal{B}\|_F = 8\sqrt{2}.$$

It follows from Theorem 3.3, applied over the complex field (under the unitary transformations of Example 3.2, $\mathcal{B}$ attains $L = 4$ diagonal blocks of size $1\times1\times1$, so that each $\tilde{n}_l = 1$), that

$$\|\mathcal{B}\|_{*,\mathbb{C}} \le \sqrt{2\times2}\;\|\mathcal{B}\|_F = 4\sqrt{2}.$$

This represents a marked improvement in the upper bound on the nuclear norm.

Conclusions

In this paper, we provide a new method for estimating upper bounds on the nuclear norm and obtain some new upper bounds. Moreover, we find that the upper bounds on the nuclear norm are related not only to the dimensions of the tensor but also to its structure. Taking the structure information of the tensor into consideration, the upper bounds on the nuclear norm can be improved.

Authors’ contributions

All three authors contributed equally to this work. All authors read and approved the final manuscript.

Funding

This research work was supported by the Natural Science Foundation of China (NSFC) (Nos. 11401286, 11671318, 11401472), the Natural Science Foundation of Shaanxi Province (No. 2014JM1029), and the Scientific Research Foundation of Liaocheng University.

Competing interests

All three authors declare that they have no competing interests.


Contributor Information

Xu Kong, Email: xu.kong@hotmail.com.

Jicheng Li, Email: jcli@mail.xjtu.edu.cn.

Xiaolong Wang, Email: xlwang@nwpu.edu.cn.

References

1. Che M.L., Cichocki A., Wei Y.M. Neural networks for computing best rank-one approximations of tensors and its applications. Neurocomputing. 2017;267:114–133. doi: 10.1016/j.neucom.2017.04.058.
2. Cichocki A., Mandic D., Phan A.-H., Caiafa C., Zhou G., Zhao Q., De Lathauwer L. Tensor decompositions for signal processing applications: from two-way to multiway component analysis. IEEE Signal Process. Mag. 2015;32(2):145–163. doi: 10.1109/MSP.2013.2297439.
3. De Lathauwer L., De Moor B., Vandewalle J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000;21(4):1253–1278. doi: 10.1137/S0895479896305696.
4. De Silva V., Lim L.-H. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl. 2008;30(3):1084–1127. doi: 10.1137/06066518X.
5. Friedland S., Lim L.-H. The computational complexity of duality. SIAM J. Optim. 2016;26(4):2378–2393. doi: 10.1137/16M105887X.
6. Friedland S., Lim L.-H. Nuclear norm of higher-order tensors. Math. Comput. 2018;87(311):1255–1281. doi: 10.1090/mcom/3239.
7. Gandy S., Recht B., Yamada I. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 2011;27(2):025010. doi: 10.1088/0266-5611/27/2/025010.
8. Hu S.L. Relations of the nuclear norm of a tensor and its matrix flattenings. Linear Algebra Appl. 2015;478:188–199. doi: 10.1016/j.laa.2015.04.003.
9. Kolda T.G. Orthogonal tensor decompositions. SIAM J. Matrix Anal. Appl. 2001;23(1):243–255. doi: 10.1137/S0895479800368354.
10. Kolda T.G., Bader B.W. Tensor decompositions and applications. SIAM Rev. 2009;51(3):455–500. doi: 10.1137/07070111X.
11. Kong X., Meng D.Y. The bounds for the best rank-1 approximation ratio of a finite dimensional tensor space. Pac. J. Optim. 2015;11:323–337.
12. Li Z., Nakatsukasa Y., Soma T., Uschmajew A. On orthogonal tensors and best rank-one approximation ratio. SIAM J. Matrix Anal. Appl. 2018;39(1):400–425. doi: 10.1137/17M1144349.
13. Oseledets I.V., Tyrtyshnikov E.E. Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 2009;31(5):3744–3759. doi: 10.1137/090748330.
14. Qi L., Luo Z. Tensor Analysis: Spectral Theory and Special Tensors. Philadelphia: SIAM; 2017.
