Skip to main content
NIHPA Author Manuscripts logoLink to NIHPA Author Manuscripts
. Author manuscript; available in PMC: 2019 May 24.
Published in final edited form as: Adv Neural Inf Process Syst. 2018 Dec;2018:9052–9060.

Learning Compressed Transforms with Low Displacement Rank

Anna T Thomas †,#, Albert Gu †,#, Tri Dao , Atri Rudra , Christopher Ré
PMCID: PMC6534145  NIHMSID: NIHMS993802  PMID: 31130799

Abstract

The low displacement rank (LDR) framework for structured matrices represents a matrix through two displacement operators and a low-rank residual. Existing use of LDR matrices in deep learning has applied fixed displacement operators encoding forms of shift invariance akin to convolutions. We introduce a rich class of LDR matrices with more general displacement operators, and explicitly learn over both the operators and the low-rank component. This class generalizes several previous constructions while preserving compression and efficient computation. We prove bounds on the VC dimension of multi-layer neural networks with structured weight matrices and show empirically that our compact parameterization can reduce the sample complexity of learning. When replacing weight layers in fully-connected, convolutional, and recurrent neural networks for image classification and language modeling tasks, our new classes exceed the accuracy of existing compression approaches, and on some tasks even outperform general unstructured layers while using more than 20X fewer parameters.

1. Introduction

Recent years have seen a surge of interest in structured representations for deep learning, motivated by achieving compression and acceleration while maintaining generalization properties. A popular approach for learning compact models involves constraining the weight matrices to exhibit some form of dense but compressible structure and learning directly over the parameterization of this structure. Examples of structures explored for the weight matrices of deep learning pipelines include low-rank matrices [15, 39], low-distortion projections [46], (block-)circulant matrices [8, 17], Toeplitz-like matrices [31, 42], and constructions derived from Fourier-related transforms [34]. Though they confer significant storage and computation benefits, these constructions tend to underperform general fully-connected layers in deep learning. This raises the question of whether broader classes of structured matrices can achieve superior downstream performance while retaining compression guarantees.

Our approach leverages the low displacement rank (LDR) framework (Section 2), which encodes structure through two sparse displacement operators and a low-rank residual term [24]. Previous work studying neural networks with LDR weight matrices assumes fixed displacement operators and learns only over the residual [42, 47]. The only case attempted in practice that explicitly employs the LDR framework uses fixed operators encoding shift invariance, producing weight matrices which were found to achieve superior downstream quality than several other compression approaches [42]. Unlike previous work, we consider learning the displacement operators jointly with the low-rank residual. Building upon recent progress on structured dense matrix-vector multiplication [14], we introduce a much more general class of LDR matrices and develop practical algorithms for using these matrices in deep learning architectures. We show that the resulting class of matrices subsumes many previously used structured layers, including constructions that did not explicitly use the LDR framework [17, 34]. When compressing weight matrices in fully-connected, convolutional, and recurrent neural networks, we empirically demonstrate improved accuracy over existing approaches. Furthermore, on several tasks our constructions achieve higher accuracy than general unstructured layers while using an order of magnitude fewer parameters.

To shed light on the empirical success of LDR matrices in machine learning, we draw connections to recent work on learning equivariant representations, and hope to motivate further investigations of this link. Notably, many successful previous methods for compression apply classes of structured matrices related to convolutions [8, 17, 42]; while their explicit aim is to accelerate training and reduce memory costs, this constraint implicitly encodes a shift-invariant structure that is well-suited for image and audio data. We observe that the LDR construction enforces a natural notion of approximate equivariance to transformations governed by the displacement operators, suggesting that, in contrast, our approach of learning the operators allows for modeling and learning more general latent structures in data that may not be precisely known in advance.

Despite their increased expressiveness, our new classes retain the storage and computational benefits of conventional structured representations. Our construction provides guaranteed compression (from quadratic to linear parameters) and matrix-vector multiplication algorithms that are quasi-linear in the number of parameters. We additionally provide the first analysis of the sample complexity of learning neural networks with LDR weight matrices, which extends to low-rank, Toeplitz-like and other previously explored fixed classes of LDR matrices. More generally, our analysis applies to structured matrices whose parameters can interact multiplicatively with high degree. We prove that the class of neural networks constructed from these matrices retains VC dimension almost linear in the number of parameters, which implies that LDR matrices with learned displacement operators are still efficiently recoverable from data. This is consistent with our empirical results, which suggest that constraining weight layers to our broad class of LDR matrices can reduce the sample complexity of learning compared to unstructured weights.

We provide a detailed review of previous work and connections to our approach in Appendix B.

Summary of contributions

  • We introduce a rich class of LDR matrices where the displacement operators are explicitly learned from data, and provide multiplication algorithms implemented in PyTorch (Section 3).1

  • We prove that the VC dimension of multi-layer neural networks with LDR weight matrices, which encompasses a broad class of previously explored approaches including the low-rank and Toeplitz-like classes, is quasi-linear in the number of parameters (Section 4).

  • We empirically demonstrate that our construction improves downstream quality when compressing weight layers in fully-connected, convolutional, and recurrent neural networks compared to previous compression approaches, and on some tasks can even outperform general unstructured layers (Section 5).

2. Background: displacement rank

The generic term structured matrix refers to an m × n matrix that can be represented in much fewer than mn parameters, and admits fast operations such as matrix-vector multiplication. The displacement rank approach represents a structured matrix MRm×n through displacement operators (ARm×m, BRn×n) defining a linear map ∇A,B : MAMMB on matrices, and a residual R, so that if

AMMB=R (1)

then M can be manipulated solely through the compressed representation (A, B, R). We assume that A and B have disjoint eigenvalues, which guarantees that M can be recovered from A, B, R (c.f. Theorem 4.3.2, Pan [37]). The rank of R (also denoted ∇A,B[M]) is called the displacement rank of M w.r.t. (A, B).2

The displacement approach was originally introduced to describe the Toeplitz-like matrices, which are not perfectly Toeplitz but still have shift-invariant structure [24]. These matrices have LDR with respect to shift/cycle operators. A standard formulation uses A = Z1, B = Z−1, where Zf=[01×(n1)fIn10(n1)×1] denotes the matrix with 1 on the subdiagonal and f in the top-right corner. The Toeplitz-like matrices have previously been applied in deep learning and kernel approximation, and in several cases have performed significantly better than competing compressed approaches [10, 31, 42]. Figure 1 illustrates the displacement (1) for a Toeplitz matrix, showing how the shift invariant structure of the matrix leads to a residual of rank at most 2.

Figure 1:

Figure 1:

Displacement equation for a Toeplitz matrix with respect to shift operators Z1, Z−1. Empty entries are zero.

A few distinct classes of useful matrices are known to satisfy a displacement property: the classic types are the Toeplitz-, Hankel-, Vandermonde-, and Cauchy-like matrices (Appendix C, Table 5), which are ubiquitous in other disciplines [37]. These classes have fixed operators consisting of diagonal or shift matrices, and LDR properties have traditionally been analyzed in detail only for these special cases. Nonetheless, a few elegant properties hold for generic operators, stating that certain combinations of (and operations on) LDR matrices preserve low displacement rank. We call these closure properties, and introduce an additional block closure property that is related to convolutional filter channels (Section 5.2).

We use the notation DA,Br to refer to the matrices of displacement rank ≤ r with respect to (A, B).

Proposition 1.

LDR matrices are closed under the following operations:

  1. Transpose/Inverse If MDA,Br, then MTDBT,ATr and M1DB,Ar.

  2. Sum If MDA,Br and NDA,Bs, then M+NDA,Br+s.

  3. Product If MDA,Br and NDB,Cs, then MNDA,Cr+s.

  4. Block Let Mij satisfy MijDAi,Bjr for i = 1 … k, j = 1 … . Then the k × ℓ block matrix (Mij)ij has displacement rank rkℓ.

Proposition 1 is proved in Appendix C.

3. Learning displacement operators

We consider two classes of new displacement operators. These operators are fixed to be matrices with particular sparsity patterns, where the entries are treated as learnable parameters.

The first operator class consists of subdiagonal (plus corner) matrices: Ai+1,i, along with the corner A0,n−1, are the only possible non-zero entries. As Zf is a special case matching this sparsity pattern, this class is the most direct generalization of Toeplitz-like matrices with learnable operators.

The second class of operators are tridiagonal (plus corner) matrices: with the exception of the outer corners A0,n−1 and An−1,0, Ai,j can only be non-zero if |ij| ≤ 1. Figure 2 shows the displacement operators for the Toeplitz-like class and our more general operators. We henceforth let LDR-SD and LDR-TD denote the classes of matrices with low displacement rank with respect to subdiagonal and tridiagonal operators, respectively. Note that LDR-TD contains LDR-SD.

Figure 2:

Figure 2:

The Zf operator (left), and our learnable subdiagonal (center) and tridiagonal (right) operators, corresponding to our proposed LDR-SD and LDR-TD classes.

Expressiveness

The matrices we introduce can model rich structure and subsume many types of linear transformations used in machine learning. We list some of the structured matrices that have LDR with respect to tridiagonal displacement operators:

Proposition 2.

The LDR-TD matrices contain:

  1. Toeplitz-like matrices, which themselves include many Toeplitz and circulant variants (including standard convolutional filters - see Section 5.2 and Appendix C, Corollary 1) [8, 17, 42].

  2. low-rank matrices.

  3. the other classic displacement structures: Hankel-like, Vandermonde-like, and Cauchy-like matrices.

  4. orthogonal polynomial transforms, including the Discrete Fourier and Cosine Transforms.

  5. combinations and derivatives of these classes via the closure properties (Proposition 1), including structured classes previously used in machine learning such as ACDC [34] and block circulant layers [17].

These reductions are stated more formally and proved in Appendix C.1. We also include a diagram of the structured matrix classes included by the proposed LDR-TD class in Figure 5 in Appendix C.1.

Our parameterization

Given the parameters A, B, R, the operation that must ultimately be performed is matrix-vector multiplication by M=A,B1[R]. Several schemes for explicitly reconstructing M from its displacement parameters are known for specific cases [38, 41], but do not always apply to our general operators. Instead, we use A, B, R to implicitly construct a slightly different matrix with at most double the displacement rank, which is simpler to work with.

Proposition 3.

Let K(A,v) denote the n × n Krylov matrix, defined to have i-th column Aiv. For any vectors g1,,gr,h1,,hrRn, then the matrix

Σi=1rK(A,gi)K(BT,hi)T (2)

has displacement rank at most 2r with respect to A−1, B.

Thus our representation stores the parameters A, B, G, H, where A, B are either subdiagonal or tridiagonal operators (containing n or 3n parameters), and G,HRn×r. These parameters implicitly define the matrix (2), which is the LDR weight layer we use.

Algorithms for LDR-SD

Generic and near-linear time algorithms for matrix-vector multiplication by LDR matrices with even more general operators, including both the LDR-TD and LDR-SD classes, were recently shown to exist [14]. However, complete algorithms were not provided, as they relied on theoretical results such as the transposition principle [6] that only imply the existence of algorithms. Additionally, the recursive polynomial-based algorithms are difficult to implement efficiently. For LDR-SD, we provide explicit and complete near-linear time algorithms for multiplication by (2), as well as substantially simplify them to be useful in practical settings and implementable with standard library operations. We empirically compare the efficiency of our implementation and unstructured matrix-vector multiplication in Figure 8 and Table 14 in Appendix E, showing that LDR-SD accelerates inference by 3.34-46.06X for n ≥ 4096. We also show results for the low-rank and Toeplitz-like classes, which have a lower computational cost. For LDR-TD, we explicitly construct the K(A,gi) and K(BT,hi) matrices for i = 1, …, r from Proposition 3 and then apply the standard O(n2) matrix-vector multiplication algorithm. Efficient implementations of near-linear time algorithms for LDR-TD are an interesting area of future work.

Theorem 1.

Define the simultaneous computation of k Fast Fourier Transforms (FFT), each with size m, to be a batched FFT with total size km.

Consider any subdiagonal matrix ARn×n and vectors g,hRn. Then K(A,g)T or K(A,g) can be multiplied by any vector x by computing 8log2(n) batched FFTs, each of total size 2n. The total number of computations is O(n log2 n).

These algorithms are also automatically differentiable, which we use to compute the gradients when learning. More complete descriptions of these algorithms are presented in Appendix C.

4. Theoretical properties of structured matrices

Complexity of LDR neural networks

The matrices we use (2) are unusual in that the parameters interact multiplicatively (namely in Ai, Bi) to implicitly define the actual layer. In contrast, fully-connected layers are linear and other structured layers, such as Fastfood and ACDC [28, 34, 46], are constant degree in their parameters. However, we can prove that this does not significantly change the learnability of our classes:

Theorem 2.

Let F denote the class of neural networks with L LDR layers, W total parameters, and piecewise linear activations. Let sign F denote the corresponding classification functions, i.e. {xsignf(x):fF}. The VC dimension of this class is

VCdim(signF)=O(LWlogW).

Theorem 2 matches the standard bound for unconstrained weight matrices [4, 22]. This immediately implies a standard PAC-learnable guarantee [44]. Theorem 2 holds for even more general activations and matrices that for example include the broad classes of [14]. The proof is in Appendix D, and we empirically validate the generalization and sample complexity properties of our class in Section 5.3.

Displacement rank and equivariance

We observe that displacement rank is related to a line of work outside the resource-constrained learning community, specifically on building equivariant (also called covariant in some contexts [5, 32]) feature representations that transform in predictable ways when the input is transformed. An equivariant feature map Φ satisfies

Φ(B(x))=A(Φ(x)) (3)

for transformations A, B (invariance is the special case when A is the identity) [16, 30, 40]. This means that perturbing the input by a transformation B before passing through the map Φ is equivalent to first finding the features Φ then transforming by A.

Intuitively, LDR matrices are a suitable choice for modeling approximately equivariant linear maps, since the residual ΦB of (3) has low complexity. Furthermore, approximately equivariant maps should retain the compositional properties of equivariance, which LDR satisfies via Proposition 1. For example, Proposition 1(c) formalizes the notion that the composition of two approximately equivariant maps is still approximately equivariant. Using this intuition, the displacement representation (1) of a matrix decomposes into two parts: the displacement operators A, B define the transformations to which the model is approximately equivariant, and the low complexity residual R controls standard model capacity.

Equivariance (3) has been used in several ways in the context of machine learning. One formulation, used for example to model ego-motions, supposes that (3) holds only approximately, and uses a fixed transformation B along with pairs of data points for both sides of (3) to learn an appropriate A [1, 30]. Another line of work uses the representation theory formalization of equivariant maps [12, 25]. We describe this formulation in more detail and show that LDR satisfies this definition as well in Proposition 7 in Appendix C.3. In contrast to previous settings, which hold one or both of A, B fixed, our formulation stipulates that Φ can be uniquely determined from A, B, and learns the latter as part of an end-to-end model. In Section 5.4 we include a visual example of latent structure that our displacement operators learn, where they recover centering information about objects from a 2D image dataset.

5. Empirical evaluation

Overview

In Section 5.1 we consider a standard setting of compressing a single hidden layer (SHL) neural network and the fully-connected (FC) layer of a CNN for image classification tasks. Following previous work [7, 42], we test on two challenging MNIST variants [27], and include two additional datasets with more realistic objects (CIFAR-10 [26] and NORB [29]). Since SHL models take a single channel as input, we converted CIFAR-10 to grayscale for this task. Our classes and the structured baselines are tested across different parameter budgets in order to show tradeoffs between compression and accuracy. As shown in Table 1, in the SHL model, our methods consistently have higher test accuracy than baselines for compressed training and inference, by 3.14, 2.70, 3.55, and 3.37 accuracy points on MNIST-bg-rot, MNIST-noise, CIFAR-10, and NORB respectively. In the CNN model, as shown in Table 1 in Appendix E, we found improvements of 5.56, 0.95, and 1.98 accuracy points over baselines on MNIST-bg-rot, MNIST-noise, and NORB respectively. Additionally, to explore whether learning the displacement operators can facilitate adaptation to other domains, we replace the input-hidden weights in an LSTM for a language modeling task, and show improvements of 0.81-30.47 perplexity points compared to baselines at several parameter budgets.

Table 1:

Test accuracy when replacing the hidden layer with structured classes. Where applicable, rank (r) is in parentheses, and the number of parameters in the architecture is in italics below each method. Comparisons to previously unexplored classic LDR types as well as additional structured baselines are included, with the ranks adjusted to match the parameter count of LDR-TD where possible. The Fastfood [46] and Circulant [8] methods do not have rank parameters, and the parameter count for these methods cannot be exactly controlled. Additional results when replacing the FC layer of a CNN are in Appendix E. Details for all experiments are in Appendix F.

Method MNIST-bg-rot MNIST-noise CIFAR-10 NORB
Unstructured 44.08
622506
65.15
622506
46.03
1058826
59.83
1054726
LDR-TD (r =1) 45.81
14122
78.45
14122
45.33
18442
62.75
14342
Toeplitz-like [42] (r = 4) 42.67
14122
75.75
14122
41.78
18442
59.38
14342
Hankel-like (r = 4) 42.23
14122
73.65
14122
41.40
18442
60.09
14342
Vandermonde-like (r = 4) 37.14
14122
59.80
14122
33.93
18442
48.98
14342
Low-rank [15] (r = 4) 35.67
14122
52.25
14122
32.28
18442
43.66
14342
Fastfood [46] 38.13
10202
63.55
10202
39.64
13322
59.02
9222
Circulant [8] 34.46
8634
65.35
8634
34.28
11274
46.45
7174

In addition to experiments on replacing fully-connected layers, in Section 5.2 we also replace the convolutional layer of a simple CNN while preserving performance within 1.05 accuracy points on CIFAR-10. In Section 5.3, we consider the effect of a higher parameter budget. By increasing the rank to just 16, the LDR-SD class meets or exceeds the accuracy of the unstructured FC layer in all datasets we tested on, for both SHL and CNN.3 Appendix F includes more experimental details and protocols. Our PyTorch code is publicly available at https://github.com/HazyResearch/structured-nets.

5.1. Compressing fully-connected layers

Image classification

Sindhwani et al. [42] showed that for a fixed parameter budget, the Toeplitz-like class significantly outperforms several other compression approaches, including Random Edge Removal [11], Low Rank Decomposition [15], Dark Knowledge [23], HashedNets [7], and HashedNets with Dark Knowledge. Following previous experimental settings [7, 42], Table 1 compares our proposed classes to several baselines using dense structured matrices for compressed training and inference when compressing the hidden layer of a single hidden layer neural network. In addition to Toeplitz-like, we implement and compare to other classic LDR types, Hankel-like and Vandermonde-like, which were previously indicated as an unexplored possibility [42, 47]. We also show results when compressing the FC layer of a 7-layer CNN based on LeNet in Appendix E, Table 7. In Appendix E, we show comparisons to additional baselines at multiple budgets, including network pruning [21] and a baseline used in [7], in which the number of hidden units is adjusted to meet the parameter budget.

At rank one (the most compressed setting), our classes with learned operators achieve higher accuracy than the fixed operator classes, and on the MNIST-bg-rot, MNIST-noise, and NORB datasets even improve on FC layers of the same dimensions, by 1.73, 13.30, and 2.92 accuracy points respectively on the SHL task, as shown in Table 1. On the CNN task, our classes improve upon unstructured fully-connected layers by 0.85 and 2.25 accuracy points on the MNIST-bg-rot and MNIST-noise datasets (shown in Table 7 in Appendix E). As noted above, at higher ranks our classes meet or improve upon the accuracy of FC layers on all datasets in both the SHL and CNN architectures.

Additionally, in Figure 3 we evaluate the performance of LDR-SD at higher ranks. Note that the ratio of parameters between LDR-SD and the Toeplitz-like or low-rank is r+1r, which becomes negligible at higher ranks. Figure 3 shows that at just rank 16, the LDR-SD class meets or exceeds the performance of the FC layer on all four datasets, by 5.87, 15.05, 0.74, and 6.86 accuracy points on MNIST-bg-rot, MNIST-noise, CIFAR-10, and NORB respectively, while still maintaining at least 20X fewer parameters.

Figure 3:

Figure 3:

Test accuracy vs. rank for unstructured, LDR-SD, Toeplitz-like, low-rank classes. On each dataset, LDR-SD meets or exceeds the accuracy of the unstructured fully-connected baseline at higher ranks. At rank 16, the compression ratio of an LDR-SD layer compared to the unstructured layer ranges from 23 to 30. Shaded regions represent two standard deviations from the mean, computed over five trials with randomly initialized weights.

Of particular note is the poor performance of low-rank matrices. As mentioned in Section 2, every fixed-operator class has the same parameterization (a low-rank matrix). We hypothesize that the main contribution to their marked performance difference is the effect of the learned displacement operator modeling latent invariances in the data, and that the improvement in the displacement rank classes—from low-rank to Toeplitz-like to our learned operators—comes from more accurate representations of these invariances. As shown in Figure 3, broadening the operator class (from Toeplitz-like at r = 1 to LDR-SD at r = 1) is consistently a more effective use of parameters than increasing the displacement rank (from Toeplitz-like at r =1 to r = 2). Note that LDR-SD (r = 1) and Toeplitz-like (r = 2) have the same parameter count.

For the rest of our experiments outside Section 5.1 we implemented the algorithms in Appendix C specifically for LDR-SD matrices, and focus on further evaluation of this class on more expensive models.

Language modeling

Here, we replace the input-hidden weights in a single layer long short-term memory network (LSTM) for a language modeling task. We evaluate on the WikiText-2 dataset, consisting of 2M training tokens and a vocabulary size of 33K [33]. We compare to Toeplitz-like and low-rank baselines, both previously investigated for compressing recurrent nets [31]. As shown in Table 2, LDR-SD improves upon the baselines for each budget tested. Though our class does not outperform the unstructured model, we did find that it achieves a significantly lower perplexity than the fixed Toeplitz-like class (by 19.94-42.92 perplexity points), suggesting that learning the displacement operator can help adapt to different domains.

Table 2:

Test perplexity when replacing input-hidden matrices of an LSTM with structured classes on WikiText-2. An unconstrained layer, with 65536 parameters, has perplexity 117.74. Parameter budgets correspond to ranks 1,2,4,8,16,24 for LDR-SD. Lower is better.

Num. Parameters LDR-SD Toeplitz-like Low-rank
2048 166.97 186.91 205.72
3072 154.51 177.60 179.46
5120 141.91 178.07 172.38
9216 143.60 186.52 144.41
17408 132.43 162.58 135.65
25600 129.46 155.73 133.37

5.2. Replacing convolutional layers

Convolutional layers of CNNs are a prominent example of equivariant feature maps.4 It has been noted that convolutions are a subcase of Toeplitz-like matrices with a particular sparsity pattern5 [8, 42]. As channels are simply block matrices6, the block closure property implies that multi-channel convolutional filters are simply a Toeplitz-like matrix of higher rank (see Appendix C, Corollary 1). In light of the interpretation of LDR of an approximately equivariant linear map (as discussed in Section 4), we investigate whether replacing convolutional layers with more general representations can recover similar performance, without needing the hand-crafted sparsity pattern.

Briefly, we test the simplest multi-channel CNN model on the CIFAR-10 dataset, consisting of one layer of convolutional channels (3 in channels, 3 out channels), followed by a fully-connected layer, followed by the softmax layer. The final accuracies are listed in Table 3. The most striking result is for the simple architecture consisting of two layers of a single structured matrix. This comes within 1.05 accuracy points of the highly specialized architecture consisting of convolutional channels + pooling + FC layer, while using fewer layers, hidden units, and parameters. The full details are in Appendix F.

Table 3:

Replacing a five-layer CNN consisting of convolutional channels, max pooling, and FC layers with two generic LDR matrices results in only slight test accuracy decrease while containing fewer layers, hidden units, and parameters. Rank (r) is in parentheses.

First hidden layer(s) Last hidden layer Hidden units Parameters Test Accuracy
3 Convolutional Channels (CC) FC 3072, 512 1573089 54.59
3CC + Max Pool FC 3072, 768, 512 393441 55.14
4CC + Max Pool FC 4096, 1024, 512 524588 60.05

Toeplitz-like (r = 16) channels Toeplitz-like (r = 16) 3072, 512 393216 57.29
LDR-SD (r = 16) channels LDR-SD (r = 16) 3072, 512 417792 59.36
Toeplitz-like (r = 48) matrix Toeplitz-like (r = 16) 3072, 512 393216 55.29
LDR-SD (r = 48) matrix LDR-SD (r = 16) 3072, 512 405504 59.00

5.3. Generalization and sample complexity

Theorem 2 states that the theoretical sample complexity of neural networks with structured weight matrices scales almost linearly in the total number of parameters, matching the results for networks with fully-connected layers [4, 22]. As LDR matrices have far fewer parameters, the VC dimension bound for LDR networks are correspondingly lower than that of general unstructured networks. Though the VC dimension bounds are sufficient but not necessary for learnability, one might still expect to be able to learn over compressed networks with fewer samples than over unstructured networks. We empirically investigate this result using the same experimental setting as Table 1 and Figure 3, and show in Table 12 (Appendix E) that the structured classes consistently have lower generalization error7 than the unstructured baseline.

Reducing sample complexity

We investigate whether LDR models with learned displacement operators require fewer samples to achieve the same test error, compared to unstructured weights, in both the single hidden layer and CNN architectures. Tables 10 and 11 in Appendix E show our results. In the single hidden layer architecture, when using only 25% of the training data the LDR-TD class exceeds the performance of an unstructured model trained on the full MNIST-noise dataset. On the CNN model, only 50% of the training data is sufficient for the LDR-TD to exceed the performance of an unstructured layer trained on the full dataset.

5.4. Visualizing learned weights

Finally, we examine the actual structures that our models learn. Figure 4(a,b) shows the heat map of the weight matrix WR784×784 for the Toeplitz-like and LDR-SD classes, trained on MNIST-bg-rot with a single hidden layer model. As is convention, the input is flattened to a vector in R784. The Toeplitz-like class is unable to determine that the input is actually a 28 × 28 image instead of a vector. In contrast, LDR-SD class is able to pick up regularity in the input, as the weight matrix displays grid-like periodicity of size 28.

Figure 4:

Figure 4:

The learned weight matrices (a,b) of models trained on MNIST-bg-rot. Unlike the Toeplitz-like matrix, the LDR-SD matrix displays grid-like periodicity corresponding to the 2D input. Figure (c) shows the values of the subdiagonal of B, reshaped as an image. The size and location of the circle roughly corresponds to the location of objects of interest in the 2D inputs. A similar centering phenomenon was found on the NORB dataset, shown in Figure 6 in Appendix E.

Figure 4(c) reveals why the weight matrix displays this pattern. The equivariance interpretation (Section 4) predicts that B should encode a meaningful transformation of the inputs. The entries of the learned subdiagonal are in fact recovering a latent invariant of the 2D domain: when visualized as an image, the pixel intensities correspond to how the inputs are centered in the dataset (Figure 4(d)). Figure 6 in Appendix E shows a similar figure for the NORB dataset, which has smaller objects, and we found that the subdiagonal learns a correspondingly smaller circle.

6. Conclusion

We substantially generalize the class of low displacement rank matrices explored in machine learning by considering classes of LDR matrices with displacement operators that can be learned from data. We show these matrices can improve performance on downstream tasks compared to compression baselines and, on some tasks, even general unstructured weight layers. We hope this work inspires additional ways of using structure to achieve both more compact and higher quality representations, especially for deep learning models which are commonly acknowledged to be overparameterized.

Acknowledgments

We thank Taco Cohen, Jared Dunnmon, Braden Hancock, Tatsunori Hashimoto, Fred Sala, Virginia Smith, James Thomas, Mary Wootters, Paroma Varma, and Jian Zhang for helpful discussions and feedback.

This material is based on research sponsored by Defense Advanced Research Projects Agency (DARPA) under agreement number FA8750-17-2-0095. We gratefully acknowledge the support of the DARPA SIMPLEX program under No. N66001-15-C-4043, DARPA FA8750-12-2-0335 and FA8750-13-2-0039, DOE 108845, National Institute of Health (NIH) U54EB020405, the National Science Foundation (NSF) under awards No. CCF-1563078 and No. CCF-1763481, the Office of Naval Research (ONR) under awards No. N00014-17-1-2266, the Moore Foundation, the Okawa Research Grant, American Family Insurance, Accenture, Toshiba, and Intel. This research was supported in part by affiliate members and other supporters of the Stanford DAWN project: Intel, Microsoft, Teradata, and VMware. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, AFRL, NSF, NIH, ONR, or the U.S. Government.

A. Symbols and abbreviations

Table 4:

Symbols and abbreviations used in this paper.

Symbol Used For

LDR low displacement rank
LDR-SD matrices with low displacement rank with respect to subdiagonal operators
LDR-TD matrices with low displacement rank with respect to tridiagonal operators
(A, B) displacement operators
A,B[M] Sylvester displacement, AMMB
r (displacement) rank
(G, H) parameters which define the rank r residual matrix GHT, where G,HRn×r
Zf unit-f-circulant matrix, defined as Zf = [e2, e3, …, en, fe1]
K(A,v) Krylov matrix, with ith column Aiv
DA,Br matrices of displacement rank ≤ r with respect to (A, B)
Φ feature map
CC convolutional channels
FC fully-connected

B. Related work

Our study of the potential for structured matrices for compressing deep learning pipelines was motivated by exciting work along these lines from Sindhwani et al. [42], the first to suggest the use of low displacement rank (LDR) matrices in deep learning. They specifically explored applications of the Toeplitz-like class, and empirically show that this class is competitive against many other baselines for compressing neural networks on image and speech domains. Toeplitz-like matrices were similarly found to be effective at compressing RNN and LSTM architectures on a voice search task [31]. Another special case of LDR matrices are the circulant (or block-circulant) matrices, which have also been used for compressing CNNs [8]; more recently, these have also been further developed and shown to achieve state-of-the-art results on FPGA and ASIC platforms [17]. Earlier works on compressing deep learning pipelines investigated the use of low-rank matrices [15, 39]—perhaps the most canonical type of dense structured matrix—which are also encompassed by our framework, as shown in Proposition 2. Outside of deep learning, Choromanski and Sindhwani [10] examined a structured matrix class that includes Toeplitz-like, circulant, and Hankel matrices (which are all LDR matrices) in the context of kernel approximation.

On the theoretical side, Zhao et al. [47] study properties of neural networks with LDR weight matrices, proving results including a universal approximation property and error bounds. However, they retain the standard paradigm of fixing the displacement operators and varying the low-rank portion. Another natural theoretical question that arises with these models is whether the resulting hypothesis class is still efficiently learnable, especially when learning the structured class (as opposed to these previous fixed classes). Recently, Oymak [35] proved a Rademacher complexity bound for one layer neural networks with low-rank weight matrices. To the best of our knowledge, Theorem 2 provides the first sample complexity bounds for neural networks with a broad class of structured weight matrices including low-rank, our LDR classes, and other general structured matrices [14].

In Section 3 we suggest that the LDR representation enforces a natural notion of approximate equivariance and satisfies closure properties that one would expect of equivariant representations. The study of equivariant feature maps is of broad interest for constructing more effective representations when known symmetries exist in underlying data. The fact that convolutional networks induce equivariant representations, and the importance of this effect on sample complexity and generalization, has been well-analyzed [2, 12, 19, 43]. Building upon the observation that convolutional filters are simply linear maps constructed to be translation equivariant8, exciting recent progress has been made on crafting representations invariant to more complex symmetries such as the spherical rotation group [13] and egomotions [1]. Generally, however, underlying assumptions are made about the domain and invariances present in order to construct feature maps for each application. A few works have explored the possibility of learning invariances automatically from data, and design deep architectures that are in principle capable of modeling and learning more general symmetries [18, 36].

C. Properties of displacement rank

Displacement rank has traditionally been used to describe the Toeplitz-like, Hankel-like, Vandermonde-like, and Cauchy-like matrices, which are ubiquitous in disciplines such as engineering, coding theory, and computer algebra. Their associated displacement representations are shown in Table 5.

Table 5:

Traditional classes of structured matrices analyzed with displacement rank. In the Vandermonde and Cauchy cases, the displacement operators are parameterized by vRn and s,tRn respectively.

Structured Matrix M A B Displacement Rank r
Toeplitz Z1 Z−1 ≤ 2
Hankel Z1 Z0T ≤ 2
Vandermonde diag(v) Z0 ≤ 1
Cauchy diag(s) diag(t) ≤ 1

Proof of Proposition 1. The following identities are easily verified:

Transpose

BT,ATMT=(A,BM)T

Inverse

B,AM1=M1(A,BM)M1

Sum

A,B(M+N)=A,BM+A,BN

Product

A,CMN=(A,BM)N+M(B,CN)

Block The remainder

diag(A1,,Ak)MMdiag(B1,,B)

is the block matrix

(Ai,BjMij)1ik,1j.

This is the sum of kℓ matrices of rank r and thus has rank rkℓ.

Corollary 1.

A k × ℓ block matrix M, where each block is a Toeplitz-like matrix of displacement rank r, is Toeplitz-like with displacement rank rkℓ + 2k + 2.

Proof. Apply Proposition (d) where each Ak, Bk has the form Zf. Let A = diag(A1, …, Ak) and B = diag(B1, …, B). Note that A and Z1 (of the same size as A) differ only in 2k entries, and similarly B and Z−1 differ in 2 entries. Since an s-sparse matrix also has rank at most s,

Z1MMZ1=AMMB+(Z1A)MM(Z1B)

has rank at most rkℓ + 2k + 2.

Proof of Proposition 3. First consider the rank one case, R = ghT. It is easy to check that A1,ZTK(A,g) will only be non-empty in the first column, hence K(A,g)DA1,ZT1. Similarly, K(BT,h)DBT,Z1 and Proposition 1(a) implies K(BT,h)TDZT,B1. Then Theorem 1(c) implies that K(A,g)K(B,h)TDA,B2, The rank r case follows directly from Theorem 1(b).

C.1. Expressiveness

Expanding on the claim in Section 3, we formally show that these structured matrices are contained in the tridiagonal (plus corners) LDR class. This includes several types previously used in similar works.

Figure 5:

Figure 5:

Our proposed LDR-TD structured matrix class contains a number of other classes including Toeplitz-like [42] (and other classic displacement types, such as Hankel-like, Vandermonde-like, and Cauchy-like), low-rank [15], circulant [8], standard convolutional filters, and orthogonal polynomial transforms, including the Discrete Fourier and Cosine Transforms. Captions for each class show storage cost and operation count for matrix-vector multiplication.

Classic displacement rank

The Toeplitz-like, Hankel-like, Vandermonde-like, and Cauchy-like matrices are defined as having LDR with respect to A,B{Zf,ZfT,D} where D is the set of diagonal matrices [37]. (For example, [42] defines the Toeplitz-like matrices as (A, B) = (Z1, Z−1).) All of these operator choices are only non-zero along the three main diagonals or opposite corners, and hence these classic displacement types belong to the LDR-TD class.

Low-rank

A rank r matrix R trivially has displacement rank r with respect to (A, B) = (I, 0). It also has displacement rank r with respect to (A, B) = (Z1, 0), since Z1 is full rank (it is a permutation matrix) and so rank(Z1R) = rank(R) = r. Thus low-rank matrices are contained in both the LDR-TD and LDR-SD classes.

Orthogonal polynomial transforms

The polynomial transform matrix M with respect to polynomials (p0(X), …, pm−1(X)) and nodes (λ0, …, λn−1) is defined by Mij = pi(λj). When the pi(X) are a family of orthogonal polynomials, it is called an orthogonal polynomial transform.

Proposition 4.

Orthogonal polynomial transforms have displacement rank 1 with respect to tridiagonal operators.

Proof. Every orthogonal polynomial family satisfies a three-term recurrence

pi+1(X)=(aiX+bi)pi(X)+cipi1(X) (4)

where ai > 0 [9]. Let M be an orthogonal polynomial transform with respect to polynomials (pi(X))0≤i<m and nodes (λj)0≤j<n. Define the tridiagonal and diagonal matrix

A=[b0a01a0000c1a1b1a11a1000c1a1b1a100000bm2am21am2000cm1am1bm1am1]
B=diag(λ0,,λn1).

For any i ∈ {0, …, m − 2} and any j, consider entry ij of AMMB. This is

1ai[cipi1(λj)bipi(λj)+pi+1(λj)λjpi(λj)]

which is 0 by plugging λj into (4).

Thus A,BM can only non-zero in the last row, so MDA,B1.

Fourier-like transforms

Orthogonal polynomial transforms include many special cases. We single out the Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) for their ubiquity.

The N × N DFT and DCT (type II) are defined as matrix multiplication by the matrices

F=(e2πijN)ij
C=(cos[πNi(j+12)])ij

respectively.

The former is a special type of Vandermonde matrix, which were already shown to be note that Vandermonde matrices (λji)ij are themselves orthogonal polynomial transforms with pi(X) = Xi.

The latter can be written as

(Ti(cos[πN(j+12)]))ij,

where Ti are the Chebyshev polynomials (of the first kind) defined such that

Tn(X)=cos(narccosx).

Thus this is an orthogonal polynomial transform with respect to the Chebyshev polynomials.

Other constructions

From these basic building blocks, interesting constructions belonging to LDR-TD can be found via the closure properties. For example, several types of structured layers inspired by convolutions, including Toeplitz [42], circulant [8] and block-circulant [17] matrices, are special instances of Toeplitz-like matrices. We also point out a more sophisticated layer [34] in the tridiagonal LDR class, which requires more deliberate use of Proposition 1 to show.

Proposition 5.

The ACDC−1 layer, where A, D are diagonal matrices and C is the Discrete Cosine Transform [34], has displacement rank 2 with respect to tridiagonal operators.

Proof. Let T, Λ be the tridiagonal and diagonal matrix such that CDT,Λ1. Define S = ATA−1, which is also tridiagonal. Note that ADS,T0 by construction. Also note that DDΛ,Λ0 since Λ is diagonal. An application of the inverse closure rule yields CDΛ,T1. Finally, the product closure property implies that

ACDC1DS,T2.

C.2. Algorithm derivation and details

De Sa et al. recently showed that a very general class of LDR matrices have asymptotically fast matrix-vector multiplication algorithms [14]. However, parts of the argument are left to existential results. Building off De Sa et al. [14], we derive a simplified and self-contained algorithm for multiplication by LDR matrices with subdiagonal operators.

Since these matrices can be represented by the Krylov product formula (2), it suffices to show multiplication algorithms separately for matrix-vector multiplication by K(A,v)T and K(A,v).

Krylov transpose multiplication

Let ARn×n be a subdiagonal matrix, i.e. Ai+1,i are the only possible non-zero entries. Let u,vRn, we wish to compute the product K(A,v)Tu. For simplicity assume n is a power of 2.

Following [14], the vector

uTK(A,v)=[uvuAVuAn1v]

is the coefficient vector of the polynomial in X

uv+uAvX++uAn1vXn1=Σi=0uAiXiv=u(IAX)1v,

where we use the observation that An = 0.

By partitioning A into n/2 × n/2 blocks, it has the form [A00ae1en2TA1], where A0, A1 are subdiagonal matrices of half the size, a is a scalar, and ei are basis vectors. Let also u0,u1Rn2, v0,v1Rn2 denote the first and second halves of u, v.

By block matrix inversion for triangular matrices [A0CB]1=[A10B1CA1B1], this can be written as

uT(IAX)1v=[u0Tu1T][(IA0X)10(IA1X)1(ae1en2TX)(IA0X)1(IA1X)1][v0v1]=u0T(IA0X)1v0+u1T(IA1X)1v1+aX(u1T(IA1X)1e1)(en2T(IA0X)1v0)

Therefore uT (IAX)−1v can be computed from

u0T(IA0X)1v0u1T(IA1X)1v1u1T(IA1X)1e1en2T(IA0X)1v0

with an additional polynomial multiplication and 3 polynomial addition/subtractions.

A modification of this reduction shows that the 2 × 2 matrix of polynomials [u en]T (IAX)−1 [v e1] can be computed from

[u0en]T(IA0X)1[v0e1][u1en]T(IA1X)1[v1e1]

with an additional constant number of polynomial multiplications and additions.

The complete recursive algorithm is provided in Algorithm 1, where subroutine R computes the above matrix of polynomials. For convenience, Algorithm 1 uses Python indexing notation.

Algorithm 1.

Krylov Transpose (Recursive)

1: function Krylov_Transpose(ARn×n, u,vRn)
2: s ← subdiagonal(A)
3: return R(s, u, v)
4: end function
5: function R(sRn1, u, v)
6: S0 ← R(s[0 : n/2 − 1], u[0 : n/2], v[0 : n/2])
7: S1 ← R(s[n/2 : n − 1], u[n/2 : n], v[n/2 : n])
8: Ls[n21]X[S1[0,1]S0[1,0]S1[0,1]S0[1,1]S1[1,1]S0[1,0]S1[1,1]S0[1,1]]
9: return [L[0,0]+S0[0,0]+S1[0,0]L[0,1]+S0[0,1]L[1,0]+S1[1,0]L[1,1]]
10: end function

A polynomial multiplication of degree m in Step 8 can be computed as a convolution of size 2m. This reduces to two Fast Fourier Transform (FFT) calls, an elementwise multiplication in the frequency domain, and an inverse FFT. The total number of calls can be further reduced to 4 FFTs and 4 inverse FFTs.

Algorithm 1 defines a recursion tree, and in practice we compute this breadth first bottom-up to avoid recursive overhead. This also allows the FFT operations to be batched and computed in parallel. Thus the d-th layer of the algorithm (starting from the leaves) performs n2d FFT computations of size 2d+1.

This completes the proof of Theorem 1.

Krylov multiplication

[14] do not provide explicit algorithms for the more complicated problem of multiplication by K(A,v), instead justifying the existence of such an algorithm with the transposition principle. Traditional proofs of the transposition principle use circuit based arguments involving reversing arrows in the arithmetic circuit defining the algorithm’s computation graph [6].

Here we show an alternative simple way to implement the transpose algorithm using any automatic differentiation (AD) implementation, which all modern deep learning frameworks include. AD states that for any computation, its derivative can be computed with only a constant factor more operations [20].

Proposition 6 (Transposition Principle).

If the matrix MRn×n admits matrix-vector multiplication by any vector in N operations, then MT admits matrix-vector multiplication in O(N + n) operations.

Proof. Note that for any x and y, the scalar yTMx = y ⋅ (Mx) can be computed in N + n operations.

The statement follows from applying reverse-mode AD to compute MTy=x(yTMx).

Additionally, the algorithm can be optimized by choosing x = 0 to construct the forward graph.

To perform the optimization mentioned in Proposition 6, and avoid needing second-order derivatives when computing backprop for gradient descent, we provide an explicit implementation of non-transpose Krylov multiplication K(A,v). This was found by using Proposition 6 to hand-differentiate Algorithm 1.

Finally, we comment on multiplication by the LDR-TD class. Desa et al.[14] showed that these matrices also have asymptotically efficient multiplication algorithms, of the order O(rn log3 n) operations. However, these algorithms are even more complicated and involve operations such as inverting matrices of polynomials in a modulus. Practical algorithms for this class similar to the one we provide for LDR-SD matrices require more work to derive.

C.3. Displacement rank and equivariance

Here we discuss in more detail the connection between LDR and equivariance. One line of work [12, 25] has used the group representation theory formalization of equivariant maps, in which the model is equivariant to a set of transformations which form a group G. Each transformation gG acts on an input x via a corresponding linear map Tg. For example, elements of the rotation group in two and three dimensions, SO(2) and SO(3), can be represented by 2D and 3D rotation matrices respectively. Formally, a feature map Φ is equivariant if it satisfies

Φ(Tgx)=Tg(Φ(x)) (5)

for representations T, T′ of G [12, 25]. This means that perturbing the input x by a transformation gG before computing the map Φ is equivalent to first finding the features Φ and then applying the transformation. Group equivariant convolutional neural networks (G-CNNs) are a particular realization where Φ has a specific form GRd, and T, T′ are chosen in advance [12]. We use the notation Φ to distinguish our setting, where the input x is finite dimensional and Φ is linear.

Proposition 7.

If Φ has displacement rank 0 with respect to invertible A, B, then Φ is equivariant as defined by (5).

Proof. Note that if = ΦB for invertible matrices A, B (i.e. if a matrix Φ has displacement rank 0 with respect to A and B), then AiΦ = ΦBi also holds for iZ. Also note that the set of powers of any invertible matrix forms a cyclic group, where the group operation is multiplication. The statement follows directly from this fact, where the group G is Z, and the representations T and T′ of G correspond to the cyclic groups generated by A and B, respectively consisting of Ai and Bi for all iZ.

More generally, a feature map Φ satisfying (5) for a set of generators S = {gi} is equivariant with respect to the free group generated by S. Proposition 7 follows from the specific case of a single generator, i.e. S = {1}.

D. Bound on VC dimension and sample complexity

In this section we upper bound the VC dimension of a neural network where all the weight matrices are LDR matrices and the activation functions are piecewise polynomials. In particular, the VC dimension is almost linear in the number of parameters, which is much smaller than the VC dimension of a network with unstructured layers. The bound on the VC dimension allows us to bound the sample complexity to learn an LDR network that performs well among LDR networks. This formalizes the intuition that compressed parameterization reduces the complexity of the class.

Neural network model

Consider a neural network architecture with W parameters, arranged in L layers. Each layer l, has output dimension nl, where n0 is the dimension of the input data and the output dimension is nL = 1. For l = 1, …, L, let ilRnl be the input to the l-th layer. The input to the (l + 1)-th layer is exactly the output of the l-th layer. The activation functions φl are piecewise polynomials with at most p + 1 pieces and degree at most k ≥ 1. The input to the first layer is the data i1=xRn1, and the output of the last layer is a real number il+1R. The intermediate layer computation has the form:

il+1=ϕl(Mlil+bl)(applied elementwise),whereMlRnl1×nl,blRnl.

We assume the activation function of the final layer is the identity.

Each weight matrix Ml is defined through some set of parameters; for example, traditional unconstrained matrices are parametrized by their entries, and our formulation (2) is parametrized by the entries of some operator matrices Al, Bl and low-rank matrix GlHlT. We collectively refer to them all the parameters of the neural network (including the biases bl) as θRW, where W is the number of parameters.

Bounding the polynomial degree

The crux of the proof of the VC dimension bound is that the entries of MRn×m are polynomials in terms of the entries of its parameters (A, B, G, and H). of total degree at most c1mc2 for universal constants c1, c2. This allows us to bound the total degree of all of the layers and apply Warren’s lemma to bound the VC dimension.

We will first show this for the specific class of matrices that we use, where the matrix M is defined through equation (2).

Lemma 1.

Suppose that MRm×m is defined as

M=Σi=1rK(A,gi)K(BT,hi).

Then the entries of M are polynomials of the entries of A, B, G, H with total degree at most 2m.

Proof. Since K(A,gi)=[giAgiAm1gi], and each entry of Ak is a polynomial of the entries of A with total degree at most k, the entries of K(A,gi) are polynomials of the entries of A and gi with total degree at most m. Similarly the entries of K(BT,hi) are polynomials of the entries of B and hi with total degree at most m. Hence the entries of K(A,gi)K(BT,hi) are polynomials of the entries of A, B, G, H with total degree at most 2m. We then conclude that the entries of M are polynomials of the entries of A, B, G, H with total degree at most 2m. ◻

Lemma 2.

For a fixed data point x, at the l-th layer of a neural network with LDR weight matrices, each entry of Mlil + bl is a piecewise polynomial of the network parameters θ, with total degree at most dl, where

d0=0,dl=kdl1+c1nl1c2forl=1,,L,

where c1 and c2 are universal constants defined in Lemma 1. Thus entries of the output ϕl(Mlil + bl) are piecewise polynomials of θ with total degree at most kdl. Moreover,

dlc1kl1Σj=0l1njc2. (6)

Proof. We induct on l. For l = 1, since i1 = x is fixed, the entries of M1 are polynomials of θ of degree at most c1n0c2 by Lemma 1, and so the entries of M1i1 + b1 are polynomials of θ with total degree at most d1=c1n0c2. As φ is a piecewise polynomials of degree at most k, each entry the output φ1(M1i1 + b1) is a piecewise polynomial of θ with total degree at most 2n0k. The bound (6) holds trivially.

Suppose that the lemma is true for some l − 1 ≥ 1. Since the entries of il are piecewise polynomials of θ with total degree at most kdl−1 and entries of Ml are polynomials of θ with total degree at most c1nl1c2 by Lemma 1, the entries of Mlil + bl are piecewise polynomials of θ with total degree at most dl=kdl1+c1nl1c2. Thus φl(Mlil + bl) have entries that are piecewise polynomials of θ with total degree at most kdl.

We can bound

dl=kdl1+c1nl1c2kc1kl2Σj=0l2njc2+c1nl1c2c1kl1Σj=0l1njc2,

where we have used the fact that k ≥ 1, so c1nl1c2c1kl1nl1c2. This concludes the proof. ◻

Bounding the VC dimension

Now we are ready to bound the VC dimension of the neural network.

Theorem 3.

For input xX and parameter θRW, let f (x, θ) denote the output of the network. Let F be the class of functions {xf(x,θ):θRW}. Denote sign F{xsignf(x,θ):θRW}. Let Wl be the number of parameters up to layer l (i.e., the total number of parameters in layer 1, 2, …, l). Define the effective depth as

L1WΣl=1LWl,

and the total number of computation units (including the input dimension) as

UΣl=0Lnl.

Then

VCdim(signF)=O(LWlog(pU)+LLWlogk).

In particular, if k = 1 (corresponding to piecewise linear networks) then

VCdim(signF)=O(LWlog(pU))=O(LWlogW).

We adapt the proof of the upper bound from Bartlett et al. [4], Harvey et al. [22]. The main technical tool is Warren’s lemma [45], which bounds the growth function of a set of polynomials. We state a slightly improved form here from Anthony and Bartlett [3, Theorem 8.3].

Lemma 3.

Let p1, …, pm be polynomials of degree at most d in nm variables. Define

K{(sign(p1(x)),,sign(pm(x)):xRn},

i.e., K is the number of possible sign vectors given by the polynomials. Then K ≤ 2(2emd/n)n.

Proof of Theorem 3. Fixed some large integer m and some inputs x1, …, xm. We want to bound the number of sign patterns that the neural network can output for the set of input x1, …, xm:

K{(signf(x1,θ),,signf(xm,θ)):θRW}

We want to partition the parameter space RW so that for a fixed xj, the output f(xj, θ) is a polynomial on each region in the partition. Then we can apply Warren’s lemma to bound the number of sign patterns. Indeed, for any partition S={P1,,PN} of the parameter space RW, we have

$$K \le \sum_{j=1}^{N} \left|\left\{\left(\operatorname{sign} f(x_1, \theta), \dots, \operatorname{sign} f(x_m, \theta)\right) : \theta \in P_j\right\}\right|. \tag{7}$$

We construct the partitions iteratively layer by layer, through a sequence $\mathcal{S}_0, \mathcal{S}_1, \dots, \mathcal{S}_{L-1}$ of successive refinements satisfying the following two properties:

  1. $|\mathcal{S}_0| = 1$ and for each $1 \le l \le L - 1$,
    $$|\mathcal{S}_l| \le |\mathcal{S}_{l-1}| \cdot 2\left(\frac{2 e m p\, n_l d_l}{W_l}\right)^{W_l},$$
    where $n_l$ is the dimension of the output of the $l$-th layer, $d_l$ is the bound on the total degree of $M_l i_l + b_l$ as piecewise polynomials of $\theta$ defined in Lemma 2, and $W_l$ is the number of parameters up to layer $l$ (i.e., the total number of parameters in layers $1, 2, \dots, l$).
  2. For each $l = 0, \dots, L - 1$, for each element $S$ of $\mathcal{S}_l$, and for each fixed data point $x_j$ (with $j = 1, \dots, m$), the entries of the input to the next layer, $i_{l+1} = \phi_l(M_l i_l + b_l)$ for $l \ge 1$ (and $i_1 = x_j$ for $l = 0$), when restricted to $S$, are polynomials of $\theta$ with total degree at most $k d_l$.

We can define $\mathcal{S}_0 = \{\mathbb{R}^W\}$ (the trivial partition), which satisfies property 2, since at layer 1 the entries of $i_1 = x_j$ (for fixed $x_j$) are polynomials of $\theta$ of degree $d_0 = 0$.

Suppose that we have constructed $\mathcal{S}_0, \dots, \mathcal{S}_{l-1}$, and we want to define $\mathcal{S}_l$. For any $h \in [n_l]$, $j \in [m]$, and $S \in \mathcal{S}_{l-1}$, let $p_{h, x_j, S}(\theta) = (M_l i_l + b_l)_h \big|_S$ be the $h$-th entry of $M_l i_l + b_l$ restricted to the region $S$. By the induction hypothesis, for each $S \in \mathcal{S}_{l-1}$, the entries of $i_l$ when restricted to $S$ are polynomials of $\theta$ of total degree at most $k d_{l-1}$. Thus by Lemma 1, the entries of $M_l i_l + b_l$ when restricted to $S$ are polynomials of $\theta$ with total degree at most $k d_{l-1} + c_1 n_{l-1}^{c_2} = d_l$, and depend on at most $W_l$ variables.

Let $\{t_1, \dots, t_p\}$ be the set of breakpoints of the activation function, which has at most $p$ pieces. For any fixed $S \in \mathcal{S}_{l-1}$, by Lemma 3, the collection of polynomials

$$\left\{p_{h, x_j, S}(\theta) - t_i : h \in [n_l],\; j \in [m],\; i \in [p]\right\}$$

attains at most
$$\Pi \triangleq 2\left(\frac{2 e\,(n_l m p)\, d_l}{W_l}\right)^{W_l}$$

distinct sign patterns when $\theta \in \mathbb{R}^W$. Thus, we can partition $\mathbb{R}^W$ into this many regions such that all these polynomials have the same signs within each region. We intersect all these regions with $S$ to obtain a partition of $S$ into at most $\Pi$ subregions. Applying this for all $S \in \mathcal{S}_{l-1}$ gives our desired partition $\mathcal{S}_l$. Thus, the required property 1 is satisfied.

Fix some $S' \in \mathcal{S}_l$. Notice that when $\theta$ is restricted to $S'$, all the polynomials
$$\left\{p_{h, x_j, S}(\theta) - t_i : h \in [n_l],\; j \in [m],\; i \in [p]\right\}$$
have the same sign, hence each entry of $M_l i_l + b_l$ lies between two fixed breakpoints of the activation function, and so each entry of the output $\phi_l(M_l i_l + b_l)$ is a fixed polynomial in at most $W_l$ variables of degree no more than $k d_l$. This implies that the entries of the input $i_{l+1}$ are polynomial functions of at most $W_l$ variables of degree no more than $k d_l$.

This recursive construction yields a partition $\mathcal{S}_{L-1}$ of $\mathbb{R}^W$ such that for $S \in \mathcal{S}_{L-1}$, the network output in response to any $x_j$, restricted to $S$, is a fixed polynomial of $\theta$ of degree no more than $k d_{L-1} + c_1 n_{L-1}^{c_2} = d_L$ (recall that we assume the activation function of the final layer is the identity). Hence by Lemma 3 again,

$$\left|\left\{\left(\operatorname{sign} f(x_1, \theta), \dots, \operatorname{sign} f(x_m, \theta)\right) : \theta \in S\right\}\right| \le 2\left(\frac{2 e m k d_L}{W_L}\right)^{W_L}.$$

By property 1, we can bound the size of $\mathcal{S}_{L-1}$:

$$|\mathcal{S}_{L-1}| \le \prod_{l=1}^{L-1} 2\left(\frac{2 e m\, n_l p\, d_l}{W_l}\right)^{W_l}.$$

Combining the two bounds along with equation (7) yields

$$K \le \prod_{l=1}^{L} 2\left(\frac{2 e m p\, n_l d_l}{W_l}\right)^{W_l}.$$

We can take logarithms and apply Jensen's inequality, with $\bar{W} \triangleq \sum_{l=1}^{L} W_l = \bar{L} W$:

$$\begin{aligned}
\log_2 K &\le L + \sum_{l=1}^{L} W_l \log_2 \frac{2 e m p\, n_l d_l}{W_l}
= L + \bar{W} \sum_{l=1}^{L} \frac{W_l}{\bar{W}} \log_2 \frac{2 e m p\, n_l d_l}{W_l} \\
&\le L + \bar{W} \log_2\left(\sum_{l=1}^{L} \frac{W_l}{\bar{W}} \cdot \frac{2 e m p\, n_l d_l}{W_l}\right) \qquad \text{(Jensen's inequality)} \\
&= L + \bar{W} \log_2 \frac{2 e m p \sum_{l=1}^{L} n_l d_l}{\bar{W}}.
\end{aligned}$$

We can bound $\sum_l n_l d_l$ using the bound on $d_l$ from Lemma 2:

$$\sum_{l=1}^{L} n_l d_l \le \sum_{l=1}^{L} n_l \cdot c_1 k^{l-1} \sum_{j=0}^{l-1} n_j^{c_2} \le L\, U\, c_1 k^{L-1} U^{c_2} \le c_1 U^{c_2 + 2} k^L,$$

where we used the facts that $L \le U$ and $k \ge 1$. Thus

$$\log_2 K \le L + \bar{W} \log_2 \frac{2 c_1 e m p\, U^{2 + c_2} k^L}{\bar{W}}.$$

To bound the VC dimension, recall that by definition, if $\operatorname{VCdim}(\operatorname{sign} \mathcal{F}) = m$ then there exist $m$ data points $x_1, \dots, x_m$ such that the output of the model can realize all $2^m$ sign patterns. The bound on $\log_2 K$ then implies

$$\operatorname{VCdim}(\operatorname{sign} \mathcal{F}) \le L + \bar{W} \log_2 \frac{2 c_1 e p\, U^{2 + c_2} k^L \cdot \operatorname{VCdim}(\operatorname{sign} \mathcal{F})}{\bar{W}}.$$

We then use Lemma 4 below, noting that $2 c_1 e p\, U^{2 + c_2} k^L \ge 16$, to conclude that
$$\operatorname{VCdim}(\operatorname{sign} \mathcal{F}) \le L + \bar{W} \log_2\!\left(2 c_1 e p\, U^{2 + c_2} k^L \log_2\!\left(2 c_1 e p\, U^{2 + c_2} k^L\right)\right) = O\!\left(\bar{L} W \log(pU) + L \bar{L} W \log k\right),$$

completing the proof.

A bound on the VC dimension immediately yields a bound on the sample complexity of learning with this class of neural networks with LDR weight matrices [44].

Corollary 2.

The class of neural networks with LDR weight matrices and piecewise linear activations is $(\epsilon, \delta)$-PAC-learnable with a sample of size

$$O\!\left(\frac{\bar{L} W \log W + \log \frac{1}{\delta}}{\epsilon}\right).$$

Since the number of parameters W is around the square root of the number of parameters of a network with unstructured layers (assuming fixed rank of the LDR matrices), the sample complexity of LDR networks is much smaller than that of general unstructured networks.
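As a rough numerical illustration (our own arithmetic, not a claim from the original text): replacing a single $n \times n$ unstructured layer, which contributes $n^2$ parameters to $W$, with an LDR layer of displacement rank $r$ storing two $n \times r$ low-rank factors plus $O(n)$ operator parameters contributes only $\Theta(rn)$ parameters. For fixed $r$ this is $\Theta(n) = \Theta(\sqrt{n^2})$, so that layer's contribution to the $\bar{L} W \log W$ term in the bound above shrinks by roughly a factor of $n/r$.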

Lemma 4 (Lemma 16 of [22]).

Suppose that $2^m \le 2^t (mr/w)^w$ for some $r \ge 16$ and $m \ge w \ge t \ge 0$. Then $m \le t + w \log_2(2 r \log_2 r)$.

Extending to rational functions

We now show that Theorem 3 holds for matrices whose entries are rational functions (rather than polynomials) of their parameters, incurring only a constant factor in the bound. To define the function class $\operatorname{sign} \mathcal{F}$, we account for the possibility of poles by defining $\operatorname{sign}(a/0) = 0$.

We only need to check that Lemma 2 and Lemma 3 still hold when polynomials are replaced by rational functions throughout, where the degree of a rational function is defined as usual by $\deg(a/b) = \max\{\deg a, \deg b\}$. To show that Lemma 2 still holds, it suffices that the compositional degree bound $\deg(f \circ g) \le \deg(f) \deg(g)$ holds for rational functions $f, g$, just as in the polynomial case. To show Lemma 3 in the case where the $p_i = a_i / b_i$ are rational functions, we note that $\operatorname{sign}(p_i(x)) = \operatorname{sign}(a_i(x) b_i(x))$, and furthermore $\deg(a_i b_i) \le 2 \deg(p_i)$. Appealing to the polynomial version of Lemma 3 shows that it holds in the rational function setting with the slightly weaker upper bound $K \le 2(4emd/n)^n$. This becomes a constant factor in the result of Theorem 3.

Next, we extend Lemma 1 by showing that generic LDR matrices have entries which are rational functions of their parameters. This immediately lets us conclude that neural networks built from any LDR matrices satisfy the VC dimension bounds of Theorem 3.

Lemma 5.

If $M \in \mathbb{R}^{m \times m}$ satisfies $AM - MB = GH^\top$, then the entries of $M$ are rational functions of the entries of $A$, $B$, $G$, $H$ with total degree at most $c_1 m^{c_2}$ for some universal constants $c_1, c_2 > 0$.

Proof. The vectorization of the Sylvester equation $AM - MB = R$ is $(I \otimes A - B^\top \otimes I)\operatorname{vec}(M) = \operatorname{vec}(R)$, where $\operatorname{vec}$ denotes the vectorization operation that stacks a matrix's columns and $\otimes$ is the Kronecker product. Note that the entries of $N^{-1}$ for an arbitrary invertible matrix $N \in \mathbb{R}^{n \times n}$ are rational functions of degree at most $n$ in the entries of $N$ (ratios of cofactors and the determinant), and $R = GH^\top$ has degree 2 in the entries of $G$, $H$. Therefore the entries of
$$\operatorname{vec}(M) = (I \otimes A - B^\top \otimes I)^{-1} \operatorname{vec}(R)$$
are rational functions of degree at most $m^2 + 2$ in the entries of $A$, $B$, $G$, $H$. ◻
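The construction in this proof is easy to check numerically. Below is a minimal NumPy sketch (our own illustration, not code from the paper) that recovers M from a displacement representation (A, B, G, H) by solving the vectorized Sylvester equation; it assumes the operator pair is nondegenerate, i.e. that I ⊗ A − B^⊤ ⊗ I is invertible (A and B have disjoint spectra).

import numpy as np

m, r = 6, 2                                   # matrix size and displacement rank
rng = np.random.default_rng(0)
A = np.diag(rng.standard_normal(m))           # e.g. diagonal operators, which have
B = np.diag(rng.standard_normal(m))           # disjoint spectra with probability one
G = rng.standard_normal((m, r))
H = rng.standard_normal((m, r))

R = G @ H.T                                   # low-rank residual, rank <= r
N = np.kron(np.eye(m), A) - np.kron(B.T, np.eye(m))      # I (x) A - B^T (x) I
M = np.linalg.solve(N, R.flatten(order="F")).reshape((m, m), order="F")

assert np.allclose(A @ M - M @ B, R)          # M satisfies the displacement equation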

Note that many other classes of matrices satisfy this lemma. For example, a large class of matrices satisfying a property called low recurrence width was recently introduced as a way of generalizing many known structured matrices [14]. The low recurrence width matrices are explicitly defined through a polynomial recurrence and satisfy the bounded degree condition. Additionally, Lemma 5 holds when the parameters A, B themselves are structured matrices with entries having polynomial degree in terms of some parameters. This includes the case when they are quasiseparable matrices, the most general class of LDR displacement operators previously analyzed [14].

E. Additional results

E.1. Additional baselines and comparisons at multiple budgets

In Tables 6 and 7 we compare to baselines at parameter budgets corresponding to both the LDR-TD and LDR-SD classes in the SHL and CNN models. In Tables 8 and 9, we also compare to two additional baselines, network pruning [21] and a baseline used in [7], in which the number of hidden units is reduced to meet the parameter budget. We refer to this baseline as RHU (“reduced hidden units”). We show consistent improvements of LDR-SD over both methods at several budgets. We note that unlike the structured matrix methods which provide compression benefits during both training and inference, pruning requires first training the original model, followed by retraining with a fixed sparsity pattern.

Table 6:

Test accuracy when replacing the hidden layer with structured classes in the single hidden layer architecture, at parameter budgets corresponding to LDR-TD and LDR-SD rank one. Rank is in parentheses. The first group of structured methods (in orange) all have compression factors (relative to a general unstructured layer) of 98 on MNIST-bg-rot and MNIST-noise, and 128 on CIFAR-10 and NORB. The second group of structured methods (in blue) all have compression factors of 196 on MNIST-bg-rot and MNIST-noise, and 256 on CIFAR-10 and NORB.

Method MNIST-bg-rot MNIST-noise CIFAR-10 NORB
Unstructured 44.08 65.15 46.03 59.83

LDR-TD (r = 1) 45.81 78.45 45.33 62.75
Toeplitz-like [42] (r = 4) 42.67 75.75 41.78 59.38
Hankel-like (r = 4) 42.23 73.65 41.40 60.09
Vandermonde-like (r = 4) 37.14 59.80 33.93 48.98
Low-rank [15] (r = 4) 35.67 52.25 32.28 43.66

LDR-SD (r = 1) 44.74 78.80 43.29 63.78
Toeplitz-like [42] (r = 2) 42.07 74.25 40.68 57.27
Hankel-like (r = 2) 41.01 71.20 40.46 57.95
Vandermonde-like (r = 2) 33.56 50.85 28.99 43.21
Low-rank [15] (r = 2) 32.64 38.85 24.93 37.03

E.2. Sample complexity and generalization

As shown in Tables 10 and 11, we investigated how the performance of the structured and general unstructured fully-connected layers varied with the amount of training data. On the MNIST variants, we trained both the single hidden layer and CNN models with random subsamples of 25%, 50%, and 75% of the training set, with 15% of the training set used for validation in all settings. In addition, in Table 12, we compare the generalization error of structured classes with an unstructured model, and find that the structured classes have consistently lower generalization error.

E.3. Additional visualizations

In Figure 6, we visualize the learned subdiagonal on NORB along with images from the dataset.

Table 7:

Test accuracy when replacing the fully-connected layer with structured classes in the CNN architecture, at parameter budgets corresponding to LDR-TD and LDR-SD rank one. Rank is in parentheses. The first group of structured methods (in orange) all have compression factors (relative to a general unstructured layer) of 98 on MNIST-bg-rot and MNIST-noise, and 128 on CIFAR-10 and NORB. The second group of structured methods (in blue) all have compression factors of 196 on MNIST-bg-rot and MNIST-noise, and 256 on CIFAR-10 and NORB.

Method MNIST-bg-rot MNIST-noise CIFAR-10 NORB
Fully-connected 67.94 90.30 68.09 75.16

LDR-TD (r = 1) 68.79 92.55 66.63 74.23
Toeplitz-like [42] (r = 4) 63.23 91.60 67.10 72.25
Hankel-like (r = 4) 64.21 90.80 68.10 71.23
Vandermonde-like (r = 4) 61.76 90.40 63.63 72.11
Low-rank [15] (r = 4) 60.35 87.30 60.90 71.47

LDR-SD (r = 1) 67.40 92.20 65.48 73.63
Toeplitz-like [42] (r = 2) 63.63 91.45 67.15 71.64
Hankel-like (r = 2) 64.08 90.65 67.49 71.21
Vandermonde-like (r = 2) 51.38 86.50 58.00 68.08
Low-rank [15] (r = 2) 41.91 71.15 48.48 65.34

Figure 6: We visualize the learned subdiagonal of the operator B and images from the NORB dataset. We observe a centering phenomenon similar to that described in Figure 4.

On the MNIST-bg-rot dataset [27], we note that Chen et al. [7] also tested several methods on this dataset, including Random Edge Removal [11], Low Rank Decomposition [15], Dark Knowledge [23], HashedNets [7], and HashedNets with Dark Knowledge, and reported test errors of 73.17, 80.63, 79.03, 77.40, 59.20, and 58.25, where each method had 12406 parameters in the architecture. We found that our LDR-SD class, with 10986 parameters in the architecture, achieved a test error of 55.26, as shown in Table 6, outperforming all methods evaluated by Chen et al. [7]. Sindhwani et al. [42] later also tested on this dataset, and reported test errors of 68.4, 62.11, and 55.21 for Fastfood (10202 parameters), Circulant (8634 parameters), and Toeplitz-like, r = 2 (10986 parameters). LDR-SD exceeds their reported results for Fastfood and Circulant [8], but not that of Toeplitz-like. We did find that our proposed classes consistently exceeded the performance of our own implementation of Toeplitz-like on this dataset (Table 1, Figure 3, and Tables 6 and 7).

E.4. Rectangles dataset

We provide an interesting example of a case where LDR-TD and LDR-SD do not exceed the performance of the fixed operator classes in the single hidden layer architecture. In this simple dataset from Larochelle et al. [27], the task is to classify a binary image of a rectangle according to whether its height exceeds its width. We show examples of the dataset in Figure 7. On this dataset, in contrast to the more challenging datasets we tested on (MNIST-bg-rot [27], MNIST-noise [27], CIFAR-10 [26], and NORB [29]), every structured class outperforms an unconstrained model (622506 parameters), including the circulant class [8], which compresses the hidden layer by 784X; expanding the class beyond Toeplitz-like does not improve performance (Table 13).

Table 8:

On the MNIST variants, in the single hidden layer architecture, we compare LDR-SD, pruning [21], and a baseline which reduces the number of hidden units (denoted RHU), at multiple budgets. At each budget, we adjust the number of pruned weights or hidden units to match as closely as possible the parameter budget of LDR-SD. Parameter counts of fully-connected layers for LDR-SD and pruning at ranks 1,2,4,8,12, and 16 are 10986, 12554, 15690, 21962, 28234, and 34506 respectively, and 11126, 12714, 15890, 22242, 28594, 34946 for RHU (for which parameter count cannot be controlled exactly). As shown above, we find that the classification accuracy of LDR-SD consistently exceeds that of both methods.

(a) MNIST-bg-rot

Rank of LDR-SD LDR-SD Pruning [21] RHU [7]
1 44.74 40.41 37.18
2 44.46 41.18 37.60
4 47.72 42.45 37.98
8 48.76 43.52 39.77
12 48.90 43.19 40.56
16 49.51 43.58 40.70

(b) MNIST-noise

Rank of LDR-SD LDR-SD Pruning [21] RHU [7]
1 78.80 67.75 62.85
2 77.95 69.35 62.55
4 78.32 68.25 63.40
8 78.63 67.25 64.45
12 78.33 67.30 63.85
16 78.08 66.95 66.10

We hypothesize that this is because the Toeplitz-like class may enforce the right structure, in the sense that it is sufficiently expressive to fit a nearly perfect model on this dataset, but not expansive enough to lead to overfitting. For example, while the Toeplitz-like operators model approximate shift equivariance (discussed in Section 4 and Proposition 7 in Section C.3), the additional scaling that subdiagonal operators provide is unnecessary on these binary inputs.

Figure 7: Examples of images from the rectangles dataset [27].

E.5. Acceleration at inference time

We empirically study the acceleration obtained at inference time with our implementation of the algorithms for multiplication by LDR-SD described in Appendix C.2. We generated random parameters for each class and ran each multiplication algorithm 1000 times to compare the speedup of each class over an unstructured multiply. Each test was repeated 10 times, and the minimum total runtime over the 10 repetitions was used for each class. As shown in Figure 8 and Table 14, at n ≥ 4096, our simple Python implementation of the LDR-SD multiplication (r = 1) is 3.34-46.06X faster than the highly optimized unstructured matrix-vector multiply (a BLAS level 2 operation). We also compare with two other structured classes, low-rank and Toeplitz-like, at r = 1, 2, 4, 8, 16. A batch size of one was used in all tests.

Table 9:

On the MNIST variants, in the CNN architecture, we compare LDR-SD, pruning [21], and a baseline which reduces the number of hidden units (denoted RHU), at multiple budgets. At each budget, we adjust the number of pruned weights or hidden units to match as closely as possible the parameter budget of LDR-SD. Parameter counts of fully-connected layers for LDR-SD and pruning at ranks 1,2,4,8,12, and 16 are 11770, 13338, 16474, 22746, 29018, and 35290 respectively, and 11935, 13525, 16705, 23065, 29425, 35785 for RHU (for which parameter count cannot be controlled exactly). As shown above, we find that the classification accuracy of LDR-SD consistently exceeds that of both methods.

(a) MNIST-bg-rot

Rank of LDR-SD LDR-SD Pruning [21] RHU [7]
1 67.40 64.25 64.03
2 67.53 64.05 64.67
4 67.96 65.50 66.37
8 67.21 64.12 64.70
12 68.54 65.65 65.99
16 67.00 65.59 66.47

(b) MNIST-noise

Rank of LDR-SD LDR-SD Pruning [21] RHU [7]
1 92.20 90.80 90.95
2 92.75 91.65 91.00
4 91.30 90.60 91.25
8 91.95 91.05 90.65
12 92.10 90.00 90.85
16 93.20 90.55 90.40

The time complexity of multiplication by low-rank and Toeplitz-like matrices is $O(nr)$ and $O(nr \log n)$ respectively, compared to $O(nr \log^2 n)$ for LDR-SD.
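For concreteness, the timing protocol described above amounts to a few lines of Python. The sketch below (our own illustration, not the original benchmarking code) times only the unstructured baseline and a low-rank multiply; a structured multiply such as a hypothetical ldr_sd_multiply callable would be timed in exactly the same way.

import timeit
import numpy as np

n, r = 4096, 1
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))               # unstructured weight matrix
U = rng.standard_normal((n, r))               # low-rank factors, W ~ U @ V
V = rng.standard_normal((r, n))
x = rng.standard_normal(n)                    # batch size of one

def bench(fn, number=1000, repeat=10):
    # Run the multiply `number` times per trial and keep the minimum total
    # runtime over `repeat` trials, matching the protocol described above.
    return min(timeit.repeat(fn, number=number, repeat=repeat))

t_unstructured = bench(lambda: W @ x)         # BLAS level-2 baseline
t_low_rank = bench(lambda: U @ (V @ x))       # O(nr) structured multiply
print("low-rank speedup: %.2fx" % (t_unstructured / t_low_rank))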

Figure 8: Acceleration of n × n structured classes over unstructured matrix-vector multiply at inference time. At n ≥ 4096, LDR-SD (r = 1) achieves a speedup of 3.34-46.06X over unstructured. Data for higher ranks are shown in Table 14. The comparison to the low-rank and Toeplitz-like classes illustrates a tradeoff involved in broadening the class of structured matrices we learn over: though LDR-SD consistently outperforms these classes on downstream quality, its computational cost of multiplication is $O(nr \log^2 n)$, compared to $O(nr)$ and $O(nr \log n)$ for low-rank and Toeplitz-like respectively. Experimental details are in Appendix E.5.

Table 10:

On the MNIST variants, in the single hidden layer architecture, we show how the number of training samples affects the performance of the unstructured model and the structured classes. Columns correspond to models trained on 25%, 50%, 75% and 100% of the training data (randomly subsampled). LDR-TD and LDR-SD consistently outperform the structured baselines at the tested subsampling ratios. On MNIST-bg-rot, LDR-TD only needs 75% of the training data to outperform the unstructured model trained on 100% of the training data. On MNIST-noise, both LDR-TD and LDR-SD only need 25% of the training data to outperform the unstructured layer. All are rank one.



(a) MNIST-bg-rot

Method 25% 50% 75% 100%
Unstructured 34.46 38.80 43.35 44.08
LDR-TD 34.01 39.59 44.35 45.81
LDR-SD 35.64 39.78 42.72 44.74
Toeplitz-like [42] 33.71 36.44 39.32 41.12
Low-rank [15] 21.44 23.46 23.48 25.06

(b) MNIST-noise

Method 25% 50% 75% 100%
Unstructured 59.30 61.85 65.35 65.15
LDR-TD 65.45 74.60 77.45 78.45
LDR-SD 67.90 71.15 76.95 78.80
Toeplitz-like [42] 56.15 67.75 72.30 73.95
Low-rank [15] 24.25 26.20 26.85 26.40

Table 11:

On the MNIST variants, in the CNN architecture, we show how the number of training samples affects the performance of the unstructured model and the structured classes. Columns correspond to models trained on 25%, 50%, 75% and 100% of the training data (randomly subsampled). LDR-TD and LDR-SD consistently outperform the structured baselines at the tested subsampling ratios. On MNIST-noise, both LDR-TD and LDR-SD only need 50% of the training data to outperform the unstructured layer. All are rank one.



(a) MNIST-bg-rot

Method 25% 50% 75% 100%
Unstructured 54.12 62.53 67.52 67.94
LDR-TD 53.66 62.15 67.25 68.79
LDR-SD 50.72 61.92 65.93 67.40
Toeplitz-like [42] 49.10 57.20 61.53 63.00
Low-rank [15] 26.98 27.97 28.97 29.63

(b) MNIST-noise

Method 25% 50% 75% 100%
Unstructured 81.85 88.25 89.75 90.30
LDR-TD 86.45 91.35 93.00 92.55
LDR-SD 86.95 90.90 91.55 92.20
Toeplitz-like [42] 81.65 88.15 90.90 90.95
Low-rank [15] 33.15 38.40 42.55 44.55

Table 12:

Generalization error for unstructured, LDR-TD, LDR-SD, Toeplitz-like, low-rank classes on the single hidden layer architecture. Consistent with Theorem 2, the structured classes have consistently lower generalization error than the unstructured model. All are rank one.

Method MNIST-bg-rot MNIST-noise CIFAR-10 NORB
Unstructured 55.78 21.63 34.32 40.03

LDR-TD 13.52 11.36 7.10 9.51
LDR-SD 12.87 12.65 6.29 8.68
Toeplitz-like [42] 7.98 15.80 5.59 7.87
Low-rank [15] 8.40 0.31 0.09 2.59

F. Experimental details

F.1. Image classification

In Table 15, we provide details on the datasets we use for evaluation. For all of our experiments, the batch size was 50. NORB was downsampled to 32 × 32, and the left stereo image was used. Training was performed with stochastic gradient descent with momentum, with the number of epochs set to 50 on all datasets. 15% of the training data was used for the validation set in all experiments. We fixed momentum at 0.9 for all methods in all experiments, and performed a grid search over the learning rate. Unless otherwise stated, for each method we tested the learning rates {0.0002, 0.0005, 0.001, 0.002}, with three trials (with random initializations) per learning rate.

Table 13:

Test accuracy when replacing the hidden layer with structured classes on the rectangles dataset [27]. Where applicable, rank (r) is in parentheses; the number of parameters in the architecture is also shown.

Method Parameters Test Accuracy
Unconstrained 622506 91.94
LDR-TD (r = 1) 14122 98.53
LDR-SD (r = 1) 10986 98.39
Toeplitz-like (r = 4) [42] 14122 99.29
Hankel-like (r = 4) 14122 97.77
Vandermonde-like (r = 4) 14122 94.11
Low-rank (r = 4) [15] 14122 92.80
Fastfood [46] 10202 92.20
Circulant [8] 8634 95.58

For each trial, we test on the validation set at each epoch, and report the test accuracy of the model with the highest validation accuracy over all learning rates, trials, and epochs.

In Figure 3, for each method and each of the four learning rates, we perform five trials with random initializations and report the average and standard deviation of the test accuracy of the learning rate with the highest average validation accuracy.

Single hidden layer architecture

In these experiments, we used an architecture consisting of a fully-connected hidden layer, followed by a fully-connected softmax layer. In order to be consistent with the architecture used in Sindhwani et al. [42], we do not use a bias term in the hidden layer.

CNN architecture

In these experiments, the results of which are shown in Table 7 in Appendix E, we tested a LeNet-based architecture. The architecture has two convolution/pool layers with 6 and 16 channels respectively, followed by a fully-connected layer, followed by a fully-connected logit/softmax layer. We replaced the second-to-last fully-connected layer, which has dimensions 784 × 784 for the MNIST-bg-rot and MNIST-noise datasets, and 1024 × 1024 for the CIFAR-10 and NORB experiments.
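For illustration only (not code from the original experiments), the following PyTorch-style sketch shows one way to assemble this architecture so that the replaced fully-connected layer can be swapped for a structured class. The 5 × 5 kernels, padding, max pooling, and ReLU activations are our assumptions (standard LeNet choices); only the channel counts and the dimensions of the replaced layer come from the text above.

import torch.nn as nn

def make_lenet_variant(in_channels, fc_dim, num_classes, hidden_layer):
    # `hidden_layer` is the 784 x 784 (MNIST variants) or 1024 x 1024
    # (CIFAR-10, NORB) layer that gets replaced by a structured class.
    return nn.Sequential(
        nn.Conv2d(in_channels, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(6, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        hidden_layer, nn.ReLU(),
        nn.Linear(fc_dim, num_classes),       # final logit layer; softmax folded into the loss
    )

# Unstructured baseline for the 28 x 28 MNIST variants (7 * 7 * 16 = 784 features);
# a structured layer of the same dimensions is dropped in at the same position.
model = make_lenet_variant(1, 784, 10, nn.Linear(784, 784))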

Replacing convolutional layers

This experiment corresponds to Table 3.

Here, we investigated whether the convolutional layers of CNNs can be learned automatically. For our experiments, we test on the simplest possible multi-channel CNN model on the CIFAR-10 dataset. The model consists of one layer of convolutional channels (3 RGB in channels, 3 out channels, stride 5), followed by a fully-connected layer and a final FC+softmax layer (total of 4 layers). We replace the convolutions with various structured matrices of the same dimensions, keeping the same 3 × 3 channel structure (e.g. it would consist of 3 · 3 = 9 square structured matrices) and number of hidden units.9

Table 14:

Acceleration of n × n structured classes over unstructured matrix-vector multiply at inference time. Experimental details are in Appendix E.5.

(a) Low-rank

n r = 1 r = 2 r = 4 r = 8 r = 16
2^9 5.15 × 10^1 2.43 × 10^1 2.46 × 10^1 2.08 × 10^1 1.81 × 10^1
2^10 1.39 × 10^2 5.41 × 10^1 5.66 × 10^1 4.62 × 10^1 3.43 × 10^1
2^11 4.14 × 10^2 1.60 × 10^2 1.71 × 10^2 1.05 × 10^2 6.90 × 10^1
2^12 2.38 × 10^3 8.71 × 10^2 7.46 × 10^2 4.73 × 10^2 3.59 × 10^2
2^13 5.96 × 10^3 1.75 × 10^3 1.65 × 10^3 1.13 × 10^3 8.86 × 10^2
2^14 8.35 × 10^3 3.44 × 10^3 3.40 × 10^3 2.29 × 10^3 1.74 × 10^3
2^15 1.79 × 10^4 7.50 × 10^3 7.53 × 10^3 4.91 × 10^3 3.70 × 10^3

(b) Toeplitz-like

n r = 1 r = 2 r = 4 r = 8 r = 16
2^9 3.06 × 10^−1 2.60 × 10^−1 2.32 × 10^−1 1.86 × 10^−1 1.61 × 10^−1
2^10 7.34 × 10^−1 6.21 × 10^−1 5.18 × 10^−1 4.00 × 10^−1 3.28 × 10^−1
2^11 1.90 × 10^0 1.71 × 10^0 1.38 × 10^0 1.08 × 10^0 8.46 × 10^−1
2^12 1.23 × 10^1 1.01 × 10^1 7.92 × 10^0 5.97 × 10^0 4.62 × 10^0
2^13 3.34 × 10^1 2.73 × 10^1 2.26 × 10^1 1.52 × 10^1 1.23 × 10^1
2^14 6.96 × 10^1 5.68 × 10^1 4.19 × 10^1 3.00 × 10^1 2.26 × 10^1
2^15 1.49 × 10^2 1.19 × 10^2 9.07 × 10^1 5.46 × 10^1 3.82 × 10^1

(c) LDR-SD

n r = 1 r = 2 r = 4 r = 8 r = 16
2^9 6.68 × 10^−2 4.63 × 10^−2 4.05 × 10^−2 3.10 × 10^−2 2.56 × 10^−2
2^10 1.49 × 10^−1 1.20 × 10^−1 9.45 × 10^−2 6.73 × 10^−2 5.24 × 10^−2
2^11 4.99 × 10^−1 4.32 × 10^−1 3.02 × 10^−1 1.94 × 10^−1 1.37 × 10^−1
2^12 3.34 × 10^0 2.57 × 10^0 1.61 × 10^0 1.06 × 10^0 7.52 × 10^−1
2^13 9.71 × 10^0 6.61 × 10^0 4.40 × 10^0 2.46 × 10^0 1.68 × 10^0
2^14 2.12 × 10^1 1.41 × 10^1 8.38 × 10^0 4.35 × 10^0 3.00 × 10^0
2^15 4.61 × 10^1 2.82 × 10^1 1.60 × 10^1 8.58 × 10^0 5.70 × 10^0

Table 15:

Overview of the image classification datasets used in this work. For all datasets, 15% of the training set was used for the validation set.

Dataset Training Examples Test Examples Number of Classes
MNIST-bg-rot [27] 12000 50000 10
MNIST-noise [27] 12000 2000 10
CIFAR-10 [26] 50000 10000 10
NORB [29] 291600 58320 6
Rectangles [27] 1200 50000 2

The LDR classes benefit from being composed with LDR matrices of the same type (due to the composition property, Proposition 1(c)), so we additionally replace the later FC layer with the same structured matrix type.

By Proposition 1(d), channels of Toeplitz-like matrices form a larger Toeplitz-like matrix of the same size. Using this insight, we consider replacing the channel structure of the convolutional layer with either channels of structured matrices or a single wide structured matrix. (Also, note that this is able to leverage the asymptotic fast nature of our structured classes.)

Because it seems that convolutional layers are strongly dependent on pooling – our structured matrices outperform them in isolation – we compare against a version of the CNN with an additional pooling layer after the convolutional channels. Note that this comparison is the same basic four layer model with a structured matrix vs. a five layer convolutional model with pooling. Since the architectures are quite different and difficult to directly compare, we also experimented with adding more hidden units to the pooling model.
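To make the two options concrete, here is a schematic PyTorch-style sketch (ours, not the original code) of replacing the 3-in/3-out channel structure with either a 3 × 3 grid of square structured matrices or a single wide structured matrix acting on the concatenated channels. make_structured is a hypothetical factory for a structured layer; nn.Linear stands in for it here, and n is a placeholder per-channel dimension.

import torch.nn as nn

class ChannelGrid(nn.Module):
    # Option (i): a c_out x c_in grid of square structured matrices, mirroring
    # the channel structure of the convolutional layer it replaces.
    def __init__(self, n, c_in, c_out, make_structured):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.ModuleList(make_structured(n) for _ in range(c_in)) for _ in range(c_out)
        )

    def forward(self, xs):                    # xs: list of c_in vectors of length n
        return [sum(blk(x) for blk, x in zip(row, xs)) for row in self.blocks]

n = 1024                                      # placeholder per-channel dimension
grid = ChannelGrid(n, 3, 3, lambda d: nn.Linear(d, d, bias=False))   # option (i)
wide = nn.Linear(3 * n, 3 * n, bias=False)                           # option (ii): one wide matrix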

F.2. Language modeling

For a language modeling application10, we explored replacing weight matrices in a recurrent neural network with structured matrices. We evaluate on a single layer LSTM architecture, defined by the update equations:

$$\begin{aligned}
i &= \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\
f &= \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\
g &= \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\
o &= \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\
c' &= f \odot c + i \odot g \\
h' &= o \odot \tanh(c')
\end{aligned}$$

In our experiments, we replace the matrices $W_{ii}$, $W_{if}$, $W_{ig}$, $W_{io}$ with structured matrices. We use a hidden layer of size 128 and a word embedding size of 128. We evaluate on the Wikitext-2 dataset, which consists of Wikipedia articles (2,088,628 training, 217,646 validation, and 245,569 test tokens). The total vocabulary is of size 33,278. We use the default hyperparameters and train using stochastic gradient descent with an initial learning rate of 20. The learning rate is annealed by a factor of 4 after each epoch if performance does not improve on the validation set. Results are shown in Table 2.
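For illustration only (not the original implementation), the following PyTorch-style sketch shows the shape of this modification: the four input-to-hidden matrices are supplied as arbitrary modules (e.g. a structured LDR layer, here stubbed with nn.Linear), while the hidden-to-hidden matrices remain dense.

import torch
import torch.nn as nn

class StructuredLSTMCell(nn.Module):
    # LSTM cell in which W_ii, W_if, W_ig, W_io are replaced by user-supplied
    # modules; W_hi, W_hf, W_hg, W_ho remain dense linear maps.
    def __init__(self, input_size, hidden_size, make_structured):
        super().__init__()
        self.w_i = nn.ModuleList(make_structured(input_size, hidden_size) for _ in range(4))
        self.w_h = nn.ModuleList(nn.Linear(hidden_size, hidden_size) for _ in range(4))

    def forward(self, x, state):
        h, c = state
        pre_i, pre_f, pre_g, pre_o = (wi(x) + wh(h) for wi, wh in zip(self.w_i, self.w_h))
        i, f, o = torch.sigmoid(pre_i), torch.sigmoid(pre_f), torch.sigmoid(pre_o)
        g = torch.tanh(pre_g)
        c_new = f * c + i * g
        h_new = o * torch.tanh(c_new)
        return h_new, c_new

# Unstructured baseline with hidden and embedding size 128; swapping in a
# structured class only changes the factory passed as `make_structured`.
cell = StructuredLSTMCell(128, 128, lambda d_in, d_out: nn.Linear(d_in, d_out))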

Footnotes

2

Throughout this paper, we use square matrices for simplicity, but LDR is well-defined for rectangular matrices.

3

In addition to the results reported in Table 1, Figure 3 and Table 7 in Appendix E, we also found that at rank 16 the LDR-SD class on the CNN architecture achieved test accuracies of 68.48% and 75.45% on CIFAR-10 and NORB respectively.

4

Convolutions are designed to be shift equivariant, i.e. shifting the input is equivalent to shifting the output.

5

E.g. a 3 × 3 convolutional filter on an n × n matrix has a Toeplitz weight matrix supported on diagonals −1, 0, 1, n − 1, n, n + 1, 2n − 1, ….

6

A layer consisting of k in-channels and k out-channels, each pair of which is connected by a weight matrix of class C, is the same as a k × k block matrix with blocks of class C.

7

We use the standard measure of the difference between training and test error.

8

Shifting the input to a convolutional feature map is the same as shifting the output.

9

The convolutions are padded to ensure their input and output dimensions are equal.

References

  • [1].Agrawal Pulkit, Carreira Joao, and Malik Jitendra. Learning to see by moving. In Proceedings of the IEEE International Conference on Computer Vision, pages 37–45. IEEE, 2015. [Google Scholar]
  • [2].Anselmi Fabio, Leibo Joel Z., Rosasco Lorenzo, Mutch Jim, Tacchetti Andrea, and Poggio Tomaso. Unsupervised learning of invariant representations. Theor. Comput. Sci, 633(C):112–121, June 2016. ISSN 0304-3975. doi: 10.1016/j.tcs.2015.06.048. URL 10.1016/j.tcs.2015.06.048. [DOI] [Google Scholar]
  • [3].Anthony Martin and Bartlett Peter L. Neural network learning: theoretical foundations. Cambridge University Press, 2009. [Google Scholar]
  • [4].Bartlett Peter L, Maiorov Vitaly, and Meir Ron. Almost linear VC dimension bounds for piecewise polynomial networks. In Advances in Neural Information Processing Systems, pages 190–196, 1999. [DOI] [PubMed] [Google Scholar]
  • [5].Bronstein Michael M, Bruna Joan, LeCun Yann, Szlam Arthur, and Vandergheynst Pierre. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017. [Google Scholar]
  • [6].Bürgisser Peter, Clausen Michael, and Shokrollahi Mohammad A. Algebraic complexity theory, volume 315 Springer Science & Business Media, 2013. [Google Scholar]
  • [7].Chen Wenlin, Wilson James, Tyree Stephen, Weinberger Kilian, and Chen Yixin. Compressing neural networks with the hashing trick. In Bach Francis and Blei David, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2285–2294, Lille, France, 07–09 Jul 2015 PMLR. URL http://proceedings.mlr.press/v37/chenc15.html. [Google Scholar]
  • [8].Cheng Yu, Yu Felix X, Feris Rogerio S, Kumar Sanjiv, Choudhary Alok, and Chang Shi-Fu. An exploration of parameter redundancy in deep networks with circulant projections. In Proceedings of the IEEE International Conference on Computer Vision, pages 2857–2865, 2015. [Google Scholar]
  • [9].Chihara TS. An introduction to orthogonal polynomials. Dover Books on Mathematics Dover Publications, 2011. ISBN 9780486479293. URL https://books.google.com/books?id=IkCJSQAACAAJ. [Google Scholar]
  • [10].Choromanski Krzysztof and Sindhwani Vikas. Recycling randomness with structure for sublinear time kernel expansions. In Balcan Maria Florina and Weinberger Kilian Q., editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2502–2510, New York, New York, USA, 20–22 Jun 2016 PMLR; URL http://proceedings.mlr.press/v48/choromanski16.html. [Google Scholar]
  • [11].Ciresan Dan C, Meier Ueli, Masci Jonathan, Gambardella Luca Maria, and Schmidhuber Jürgen. Flexible, high performance convolutional neural networks for image classification. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, page 1237 Barcelona, Spain, 2011. [Google Scholar]
  • [12].Cohen Taco and Welling Max. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990–2999, 2016. [Google Scholar]
  • [13].Cohen Taco S., Geiger Mario, Köhler Jonas, and Welling Max. Spherical CNNs. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hkbd5xZRb. [Google Scholar]
  • [14].De Sa Christopher, Gu Albert, Puttagunta Rohan, Ré Christopher, and Rudra Atri. A two-pronged progress in structured dense matrix vector multiplication. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1060–1079. SIAM, 2018. [PMC free article] [PubMed] [Google Scholar]
  • [15].Denil Misha, Shakibi Babak, Dinh Laurent, Freitas Nando De, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013. [Google Scholar]
  • [16].Dieleman Sander, Fauw Jeffrey De, and Kavukcuoglu Koray. Exploiting cyclic symmetry in convolutional neural networks. In Balcan Maria Florina and Weinberger Kilian Q., editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1889–1898, New York, New York, USA, 20–22 Jun 2016 PMLR; URL http://proceedings.mlr.press/v48/dieleman16.html. [Google Scholar]
  • [17].Ding Caiwen, Liao Siyu, Wang Yanzhi, Li Zhe, Liu Ning, Zhuo Youwei, Wang Chao, Qian Xuehai, Bai Yu, Yuan Geng, et al. CirCNN: accelerating and compressing deep neural networks using block-circulant weight matrices. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, pages 395–408. ACM, 2017. [Google Scholar]
  • [18].Gens Robert and Domingos Pedro M. Deep symmetry networks. In Advances in Neural Information Processing Systems, pages 2537–2545, 2014. [Google Scholar]
  • [19].Giles C Lee and Maxwell Tom. Learning, invariance, and generalization in high-order neural networks. Appl. Opt, 26(23):4972–4978, December 1987. doi: 10.1364/AO.26.004972. URL http://ao.osa.org/abstract.cfm?URI=ao-26-23-4972. [DOI] [PubMed] [Google Scholar]
  • [20].Griewank Andreas and Walther Andrea. Evaluating derivatives: principles and techniques of algorithmic differentiation. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, second edition, 2008. ISBN 0898716594, 9780898716597. [Google Scholar]
  • [21].Han Song, Pool Jeff, Tran John, and Dally William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015. [Google Scholar]
  • [22].Harvey Nick, Liaw Christopher, and Mehrabian Abbas. Nearly-tight VC-dimension bounds for piecewise linear neural networks. In Kale Satyen and Shamir Ohad, editors, Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pages 1064–1068, Amsterdam, Netherlands, 07–10 Jul 2017 PMLR; URL http://proceedings.mlr.press/v65/harvey17a.html. [Google Scholar]
  • [23].Hinton Geoffrey, Vinyals Oriol, and Dean Jeff. Distilling the knowledge in a neural network. NIPS Deep Learning Workshop, 2015. [Google Scholar]
  • [24].Kailath Thomas, Kung Sun-Yuan, and Morf Martin. Displacement ranks of matrices and linear equations. Journal of Mathematical Analysis and Applications, 68(2):395–407, 1979. [Google Scholar]
  • [25].Kondor Risi and Trivedi Shubhendu. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10–15, 2018, pages 2752–2760, 2018. URL http://proceedings.mlr.press/v80/kondor18a.html. [Google Scholar]
  • [26].Krizhevsky A and Hinton G. Learning multiple layers of features from tiny images. Master’s Thesis, Department of Computer Science, University of Toronto, 2009. [Google Scholar]
  • [27].Larochelle Hugo, Erhan Dumitru, Courville Aaron, Bergstra James, and Bengio Yoshua. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning, ICML ‘07, pages 473–480, New York, NY, USA, 2007. ACM; ISBN 10.1145/1273496.1273556 URL http://doi.acm.org/10.1145/1273496.1273556. [DOI] [Google Scholar]
  • [28].Le Quoc, Sarlos Tamas, and Smola Alexander. Fastfood - computing Hilbert space expansions in loglinear time. In Dasgupta Sanjoy and McAllester David, editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 244–252, Atlanta, Georgia, USA, 17–19 Jun 2013 PMLR; URL http://proceedings.mlr.press/v28/le13.html. [Google Scholar]
  • [29].LeCun Yann, Huang Fu Jie, and Bottou Leon. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the IEEE International Conference on Computer Vision, volume 2, pages II–104. IEEE, 2004. [Google Scholar]
  • [30].Lenc Karel and Vedaldi Andrea. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 991–999, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Lu Zhiyun, Sindhwani Vikas, and Sainath Tara N. Learning compact recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 5960–5964. IEEE, 2016. [Google Scholar]
  • [32].Marcos Diego, Volpi Michele, Komodakis Nikos, and Tuia Devis. Rotation equivariant vector field networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5058–5067, 2017. [Google Scholar]
  • [33].Merity Stephen, Xiong Caiming, Bradbury James, and Socher Richard. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Byj72udxe. [Google Scholar]
  • [34].Moczulski Marcin, Denil Misha, Appleyard Jeremy, and Freitas Nando de. ACDC: a structured efficient linear layer. In International Conference on Learning Representations, 2016. [Google Scholar]
  • [35].Oymak Samet. Learning compact neural networks with regularization. In Dy Jennifer and Krause Andreas, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3966–3975, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018 PMLR. URL http://proceedings.mlr.press/v80/oymak18a.html. [Google Scholar]
  • [36].Pal Dipan K and Savvides Marios. Non-parametric transformation networks. arXiv preprint arXiv:1801.04520, 2018. [Google Scholar]
  • [37].Pan Victor Y. Structured matrices and polynomials: unified superfast algorithms. Springer Science & Business Media, 2012. [Google Scholar]
  • [38].Pan Victor Y and Wang Xinmao. Inversion of displacement operators. SIAM Journal on Matrix Analysis and Applications, 24(3):660–677, 2003. [Google Scholar]
  • [39].Sainath Tara N, Kingsbury Brian, Sindhwani Vikas, Arisoy Ebru, and Ramabhadran Bhuvana. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6655–6659. IEEE, 2013. [Google Scholar]
  • [40].Schmidt Uwe and Roth Stefan. Learning rotation-aware features: From invariant priors to equivariant descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2050–2057. IEEE, 2012. [Google Scholar]
  • [41].Simoncini Valeria. Computational methods for linear matrix equations. SIAM Review, 58(3):377–441, 2016. [Google Scholar]
  • [42].Sindhwani Vikas, Sainath Tara, and Kumar Sanjiv. Structured transforms for small-footprint deep learning. In Advances in Neural Information Processing Systems, pages 3088–3096, 2015. [Google Scholar]
  • [43].Sokolic Jure, Giryes Raja, Sapiro Guillermo, and Rodrigues Miguel. Generalization error of invariant classifiers. In Singh Aarti and Zhu Jerry, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1094–1103, Fort Lauderdale, FL, USA, 20–22 Apr 2017 PMLR; URL http://proceedings.mlr.press/v54/sokolic17a.html. [Google Scholar]
  • [44].Vapnik Vladimir. Statistical learning theory. Wiley, New York, 1998. [Google Scholar]
  • [45].Warren Hugh E. Lower bounds for approximation by nonlinear manifolds. Transactions of the American Mathematical Society, 133(1):167–178, 1968. [Google Scholar]
  • [46].Yang Zichao, Moczulski Marcin, Denil Misha, de Freitas Nando, Smola Alex, Song Le, and Wang Ziyu. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pages 1476–1483, 2015. [Google Scholar]
  • [47].Zhao Liang, Liao Siyu, Wang Yanzhi, Li Zhe, Tang Jian, and Yuan Bo. Theoretical properties for neural networks with weight matrices of low displacement rank. In Precup Doina and Teh Yee Whye, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 4082–4090, International Convention Centre, Sydney, Australia, 06–11 Aug 2017 PMLR; URL http://proceedings.mlr.press/v70/zhao17b.html. [Google Scholar]
