Abstract
We consider efficiency in the implementation of deep neural networks. Hardware accelerators are gaining interest as machine learning becomes one of the drivers of high-performance computing. In these accelerators, the directed graph describing a neural network can be implemented as a directed graph describing a Boolean circuit. We make this observation precise, leading naturally to an understanding of practical neural networks as discrete functions, and show that the so-called binarized neural networks are functionally complete. In general, our results suggest that it is valuable to consider Boolean circuits as neural networks, leading to the question of which circuit topologies are promising. We argue that continuity is central to generalization in learning, explore the interaction between data coding, network topology, and node functionality for continuity and pose some open questions for future research. As a first step to bridging the gap between continuous and Boolean views of neural network accelerators, we present some recent results from our work on LUTNet, a novel Field-Programmable Gate Array inference approach. Finally, we conclude with additional possible fruitful avenues for research bridging the continuous and discrete views of neural networks.
This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
Keywords: neural network, computing, accelerator, field-programmable gate array
1. Introduction
This paper considers the development of deep neural networks in the supervised learning setting [1]. Inspired by the recent rise of interest in specialized hardware accelerators for deep neural networks [2], we shall take a fresh look at the question of suitable network topologies and basic node functionalities for such accelerators.
We shall begin by defining the supervised learning problem. Let 𝒳 denote the set of possible inputs to a machine learning inference function and 𝒴 denote the set of possible outputs. Imagine that we have an oracle function r : 𝒳 → 𝒴, mapping every possible input to the corresponding ideal output y = r(x). Generally, we will be interested in inference via a family of parametrically defined functions f(p; x), with parameters drawn from some set 𝒫. We will often write fp(x) when we wish to consider the case where the parameter value p has been fixed. These functions will not, in general, produce the ideal output for all possible inputs, and therefore we need to consider some notion of inaccuracy, or ‘loss’, ℓ, which measures the difference between the ideal output and the actually computed output as ℓ(fp(x), r(x)). For simplicity, we assume in this article that ℓ is a metric [3] defined on 𝒴. We are generally interested in average-case behaviour of these parametric functions ‘in the wild’, on any data that may frequently appear as input in real usage. Lifting the metric on 𝒴 to the following metric defined on functions 𝒳 → 𝒴:
m(f, g) = E_x[ℓ(f(x), g(x))],   | 1.1 |
where the expectation is over the input space, we can then pose the question of supervised training as the following optimization problem of selecting parameters to minimize the distance to an oracle function:
p* = argmin_{p ∈ 𝒫} m(fp, r).   | 1.2 |
There are some practical problems, however. Firstly, it is unlikely that we have access to or knowledge of the distribution over the input space 𝒳, or to an oracle function r, except through a finite set of samples, known as the training set. Secondly, as Scheinberg notes [4], the loss function ℓ desired in practice (e.g. an indicator function) may give rise to a computationally intractable optimization problem. As a result, it is common to aim instead to solve the training problem,
p* = argmin_{p ∈ 𝒫} (1/N) Σ_{i=1}^{N} ℓ′(fp(xi), yi),   | 1.3 |
where (xi, yi) are the training data—inputs for which the ideal output is known—and ℓ′ is some suitable, often convex, loss function.
The actual accuracy of the resulting function fp* can then be evaluated on some other set of data (x′i, y′i), the test data, as a proxy for m(fp*, r), to obtain the test error:
(1/N′) Σ_{i=1}^{N′} ℓ(fp*(x′i), y′i).   | 1.4 |
This setting therefore imposes particular restrictions on the family of parametrized functions f, because we wish p*, which was selected based only on the training data, to also work well for the test data, as well as ensuring several other properties to be discussed. This fundamental problem, the design of families of parametrized functions for this purpose, is the key subject of study of this paper. In particular, we address here the case where the functions fp map from one finite set to another, which is always the practical setting in a finite-precision computer. By considering the discrete problem explicitly, several new insights are developed, which may be of value to those researching highly efficient machine inference.
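The training and testing split of equations (1.2)–(1.4) can be sketched concretely. The following is a minimal illustration under assumptions not taken from the paper: a one-parameter family fp(x) = p·x, a finite parameter set 𝒫, squared error as the surrogate loss ℓ′, and absolute error as the evaluation loss ℓ; all names are hypothetical.

```python
# A minimal sketch of training (1.3) and test-error evaluation (1.4).
# Assumed for illustration: f_p(x) = p * x, a finite parameter grid P,
# squared error as the surrogate loss l', absolute error as l.

def f(p, x):
    """Parametric inference function f_p(x)."""
    return p * x

def train(data, params, loss):
    """Solve (1.3) by exhaustive search over the finite parameter set."""
    return min(params,
               key=lambda p: sum(loss(f(p, x), y) for x, y in data) / len(data))

# Oracle r(x) = 2x, observed only through finitely many samples.
train_set = [(x, 2 * x) for x in [0.0, 1.0, 2.5, -1.0]]
test_set = [(x, 2 * x) for x in [3.0, -0.5]]

params = [i / 10 for i in range(-50, 51)]            # finite parameter set P
p_star = train(train_set, params, lambda a, b: (a - b) ** 2)

# Test error (1.4): average evaluation loss over held-out data.
test_error = sum(abs(f(p_star, x) - y) for x, y in test_set) / len(test_set)
print(p_star, test_error)
```

Because the oracle lies inside the hypothesis family here, the grid search recovers p* = 2.0 and the test error vanishes; with a misspecified family or noisy samples, the gap between training loss and test error is exactly the generalization question discussed above.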
The structure of this paper is as follows:
Section 2 introduces a model of computation defined by typed graphs. We use this model to develop a deeper understanding of the computation of inference functions in deep neural networks, discussing suitable choices for such functions. The model also lets us reason about their approximation by discrete functions, and hence the potential for hardware implementations of such computations. In §3, we present an abstract view of the typical digital design process for hardware accelerators of numerical functions. We then show that a known family of extremely quantized neural networks is functionally complete. This result runs counter to standard thinking in hardware-accelerated neural networks, and we shall consider the reason for this apparent contradiction. Section 4 revisits the question of appropriate inference functions from §2, but now in the discrete setting. We argue for a trinity of topology, node functionality and metrics as interacting to determine the efficient inference computation and pose some open questions regarding the extent to which these factors can be decoupled. In §5, we consider an approach for efficient field-programmable gate array (FPGA) inference, known as LUTNet, recently published by my research group, as an example of initial work bridging the continuous and discrete settings. Finally, §6 draws conclusions and points to several fruitful avenues for further research. This paper makes use of a variety of notations, summarized at the end of the paper.
2. Networks and inference functions
(a). A graphical approach
A graphical approach is universally used to describe—formally or informally—the computations performed by deep neural networks. In this section, we shall develop a slightly unorthodox but very general formalism, which will be of use throughout the paper. Our aim here is to distinguish the syntactic description of neural networks as graphs from the semantic interpretation as functions. This distinction will be important because the transformations applied to develop a realization of a neural network as a programme or a piece of digital hardware are primarily based on the syntactic representation.
Definition 2.1. —
An edge e is simply a unique label, together with a set such as ℝ or 𝔹 = {⊥, ⊤}, which can be interpreted as the type of data carried by the edge in a network.
Example 2.2. —
In figure 1, x, y, w1, w2, c and d are all edges, which we will take to be of real type.
Figure 1.
A simple network consisting of two vertices. One vertex has two parameters and two activation inputs, and has function (w1, w2; x, y) ↦ w1x + w2y. The other vertex has no parameters and one activation input, and has function c ↦ max(c, 0).
Definition 2.3. —
A vertex v is a tuple v = (param, in, out, func) of ordered lists of edges param, in and out, together with a function func from the Cartesian product of the sets defined by param and in to that of those defined by out.
Example 2.4. —
In figure 1, there is a vertex v = ((w1, w2), (x, y), (c), (w1, w2; x, y) ↦ w1x + w2y).
The purpose of distinguishing param from in is to identify those values (parameters) that are intended to be determined once, offline, versus those values (activations) that are intended to be changed each time the graph is used in inference—we use a semicolon to separate parameters from activations, for readability purposes. In this example, w1 and w2 are parameters, commonly called weights, while x and y are not. There is one other vertex shown in the figure; this vertex has an empty parameter list.
Definition 2.5. —
A network N is a pair of a set of vertices and a distinguished set of edges priout such that:
- —
No edge appearing in the param list of any vertex also appears in the out list of any vertex.
- —
No edge appears in the out list of more than one vertex.
- —
All edges in priout appear in the out list of exactly one vertex.
We will refer to parameters of the network to mean the list of all edges appearing in the param list of any vertex; inputs to the network to mean the list of all edges appearing in the in list of some vertex but not in the out list of any vertex; and outputs of the network to be the set priout. We will often refer to the ‘leaf functions’ of a network, meaning the collection of functions func of all the vertices in the network.
Example 2.6. —
The network shown in figure 1 consists of the two vertices previously described, together with a set of primary outputs. One possibility for such a set is priout = {d}, but there are other choices, depending on which edges are required to be observable at the network output. The parameters of this network are w1 and w2, and the inputs to the network are x and y.
Definition 2.7. —
We say that a network N implements a function fN defined through the natural function composition of the individual vertex functions, i.e. fN is a function from the Cartesian product of the parameters and inputs of the network to the Cartesian product of the outputs of the network, defined inductively, with vertex functions as the leaf functions.
Example 2.8. —
For the network N shown in figure 1 with priout = {d}, fN(w1, w2; x, y) = max(w1x + w2y, 0).
For simplicity, we will consider computations corresponding to acyclic networks—including the very significant class of Convolutional Neural Networks [5]—however, the formalism can easily be extended to cyclic networks (e.g. LSTMs [6]) by lifting computation over the types illustrated above to computations over streams of those types [7]. This generalization does not affect the following material. Equally, it is trivial to make networks hierarchical by generalizing functions computed to also allow sub-networks, but this will not be required in the sequel.
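The formalism of definitions 2.1–2.7 can be made executable. Below is a small sketch, not from the paper, in which vertices are (param, in, out, func) tuples over named edges and an acyclic network is evaluated by repeatedly firing any vertex whose incoming edges all carry values; the two-vertex network is that of figure 1, assuming its vertex functions are an inner product followed by max(·, 0).

```python
# A sketch of the network formalism: vertices are (param, in, out, func)
# tuples over named edges; the implemented function f_N arises from the
# natural composition of vertex functions along the graph.

def evaluate(vertices, priout, params, inputs):
    """Evaluate an acyclic network: repeatedly fire any vertex whose
    param/in edges all carry values, until the primary outputs are known."""
    env = {**params, **inputs}
    pending = list(vertices)
    while pending:
        for v in pending:
            par, ins, outs, func = v
            if all(e in env for e in par + ins):
                results = func(*[env[e] for e in par + ins])
                env.update(zip(outs, results))
                pending.remove(v)
                break
        else:
            raise ValueError("network is cyclic or has unbound edges")
    return tuple(env[e] for e in priout)

# The two-vertex network of figure 1 (vertex functions assumed as above).
v1 = (["w1", "w2"], ["x", "y"], ["c"], lambda w1, w2, x, y: (w1 * x + w2 * y,))
v2 = ([], ["c"], ["d"], lambda c: (max(c, 0.0),))

# f_N(w1, w2; x, y) with priout = {d}.
out = evaluate([v1, v2], ["d"], {"w1": 2.0, "w2": -1.0}, {"x": 3.0, "y": 1.0})
print(out)  # (5.0,)
```

Choosing priout = {c, d} instead exposes the intermediate edge c at the network output, illustrating how the choice of primary outputs changes the implemented function without changing the graph.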
(b). Functions for inference
What kind of functions form good candidates for machine learning? And what basic functionality should be implemented by nodes in a network N for this purpose? In practical terms, for deep learning today, the most common leaf functions are the inner product, the rectified linear unit (ReLU) x ↦ max(x, 0), the sigmoid and the softmax [1]. However, it is worth considering the various factors that determine this choice now and in the future. Informally, functions should:
-
❶
Generalize well: once the parameter p is selected based on the training data, fp(x) should also tend to perform well over unseen test data.
-
❷
Be cheap to compute: the cost (speed, energy) of evaluating the function at inference time should be low.
-
❸
Be sufficiently general/expressive: the functions should be capable of approximating a wide variety of oracle functions r.
-
❹
Be easy to learn: optimization algorithms used to address the training problem described in §1 should be both cheap to execute and also rarely give rise to values of the parameter that are grossly sub-optimal with respect to the training set.
Strang [8] argues that continuous piecewise linear (CPL) functions have tended to perform well, explaining the importance of the inner product and ReLU functions in today’s networks, as CPL functions are precisely those implemented by networks with these vertices. Strang argues that continuity is the key to generalization, which intuitively makes sense: if an untrained input is very close to a trained one, it seems reasonable to expect the corresponding outputs of the network to be very close in turn.
To make this intuition precise requires us to equip the input and output sets with metrics, d and e, respectively, allowing us to define what it means for inputs and outputs to be ‘close’. We can then consider the inference function fp as a function from an input metric space (𝒳, d) to an output metric space (𝒴, e). We have a choice of options to define continuity; we shall use Lipschitz continuity [3], for reasons that will become apparent in the next section.
Definition 2.9. —
Suppose f : 𝒳 → 𝒴, where 𝒳 is equipped with a metric d and 𝒴 is equipped with a metric e. Let k ≥ 0. The function f is k-Lipschitz if for all a, b ∈ 𝒳, e(f(a), f(b)) ≤ k d(a, b).
Definition 2.10. —
A function f is Lipschitz if it is k-Lipschitz for some k.
Example 2.11. —
For computation over ℝ^n with metrics determined by a suitable norm in that space, the ReLU function is Lipschitz and the inner products are Lipschitz, and thus, by composition, networks constructed from these two functions are Lipschitz [3] and, therefore, good candidates for generalizing beyond training data.
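A Lipschitz constant can be probed numerically by sampling pairs of points and taking the largest ratio e(f(a), f(b))/d(a, b) observed. The sketch below uses illustrative choices not from the paper: the scalar map g(x) = 2x, which is 2-Lipschitz, composed with the 1-Lipschitz ReLU, so the composition is 2-Lipschitz.

```python
# A small numerical probe of definition 2.9: lower-bound the Lipschitz
# constant of relu(2x) by sampling pairs of points. Since g(x) = 2x is
# 2-Lipschitz and relu is 1-Lipschitz, the composition is 2-Lipschitz,
# and the sampled ratio should approach but never exceed 2.
import random

def relu(x):
    return max(x, 0.0)

def lipschitz_estimate(f, samples=10000, lo=-10.0, hi=10.0, seed=0):
    """Lower-bound the Lipschitz constant by sampling pairs of points."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        a, b = rng.uniform(lo, hi), rng.uniform(lo, hi)
        if a != b:
            best = max(best, abs(f(a) - f(b)) / abs(a - b))
    return best

k = lipschitz_estimate(lambda x: relu(2.0 * x))
print(k)  # close to, and never exceeding, 2.0
```

Sampling only ever yields a lower bound on the true constant; certifying an upper bound requires the compositional argument used in the text.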
Whether the inner product and ReLU functions are cheap to compute (property ❷) depends upon our model of computation; in the abstract Blum–Shub–Smale model for real computation, this is certainly the case [9]. It is now well known that a wide variety of neural networks, including those implementing CPL functions, are universal approximators, and hence sufficiently general [10,11] (property ❸). This leaves the question of whether such functions are ‘easy to learn’ (property ❹). This is still an active area of research; however, theoretical insights such as [12], combined with practical experience, suggest that this is indeed the case.
So, while CPL functions over the reals appear to be very promising, practical computers do not compute over the reals. In practice, finite-precision datatypes are (almost) always used to approximate computation over the reals, and the picture of appropriate inference functions has the potential to change considerably in this setting. We examine this question in the next section.
3. Discrete inference
We shall refer to a network where the types of all activations are ℝ as a real network, where the types of all activations are some finite set F as a finite-precision network, and where the types of all activations are 𝔹 as a Boolean network. Boolean networks correspond exactly to combinational digital circuits, and so hold a special place from an implementation perspective.
Figure 2 illustrates the standard digital design process for the development of a Boolean network approximating a given real network G1. The first step is that of quantization. Here, real data types ℝ associated with edges in G1 are replaced by finite-precision data types F. Typical examples are single-precision IEEE floating-point arithmetic [14] as well as various fixed-point arithmetics. Consequently, the functions func performed by each node in the network must also be quantized; hence it is common to require G1’s node functions to be drawn from a basic set of operators for which this function quantization process can be performed automatically or is defined by some standard, as, e.g. {×, −, +, /} are for IEEE floating-point arithmetic. The quantization process induces a change in function: in general, fG2 ≠ fG1, and so quantization has been the subject of a considerable amount of work in the DNN literature, with modern machine inference architectures often offering choices of precision that trade performance for accuracy of computation [2], e.g. [15]. The main distinguishing features of this setting compared to classical finite-precision quantization results [16] are due to the metric m introduced in §1: both its inherently stochastic nature and the fact that distance to an oracle r, rather than distance to the underlying real function, is the primary concern, i.e. the ideal quantization is one that, in selecting the quantized network, minimizes m(fG2, r) rather than m(fG2, fG1). In practice, however, it is typical to initially select each element of the quantized parameter independently, effectively relying on the repeated application of the triangle inequality applied syntactically to the graph to ensure m(fG2, fG1) remains small, further relying on the triangle inequality property of m to ensure the distance to the oracle does not grow considerably. Sometimes, this initial choice is refined through a process known as re-training [2].
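The quantization step can be illustrated with a minimal sketch. The fixed-point format below (4 bits, 2 fractional bits, round-to-nearest with saturation) is an illustrative choice, not taken from the paper; it shows concretely how fG2 can differ from fG1 even for a single multiplication.

```python
# A minimal sketch of the quantization step of figure 2: real edge
# values are replaced by a b-bit signed fixed-point type with `frac`
# fractional bits. The format parameters are illustrative only.

def quantize(x, bits=4, frac=2):
    """Round x to the nearest representable fixed-point value, saturating."""
    scale = 1 << frac
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

# Quantizing parameters and activations changes the computed function:
w, x = 0.3, 1.9
exact = w * x                                  # real network G1
approx = quantize(quantize(w) * quantize(x))   # finite-precision network G2
print(exact, approx)
```

Note how the error compounds: the weight, the activation and the product are each rounded, which is precisely the per-edge, triangle-inequality-style error accumulation described above.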
Figure 2.

An abstract view of a typical digital design process. Inclusion maps are indicated by ↪ and isomorphisms by ≅. Starting from a specification graph G1, the designer constructs a network G2 operating on finite-precision datatypes, typically fixed or floating point, as described in the text. A ‘synthesis tool’ then automatically creates a Boolean network G3, known as a ‘netlist’. The netlist implements the function fG2 in the sense that ϕo ∘ fG2 = fG3|im(ϕi) ∘ ϕi. Here, we distinguish im(ϕi) and im(ϕo) from 𝔹^{km} and 𝔹^{kn} because the inclusions are often not surjective, giving rise to the well-studied problem of ‘Boolean don’t-cares’ [13]. The lower two sections of this diagram, therefore, commute, while the top section ‘approximately commutes’.
The second step of the process is to convert the finite-precision network to a Boolean network for implementation. This process is fully automated in modern digital design tools. First, each vertex in the finite-precision graph is replaced by a Boolean network defined for that particular node’s function, for a predefined encoding of the elements of F into elements of 𝔹^k, e.g. the IEEE floating-point storage standard [14], which encodes each single-precision floating-point number as a k = 32-bit vector of Boolean values; this part of the process is known in digital design as ‘core generation’. Second, logic synthesis tools [13] are applied to rewrite the graph to reduce its implementation cost as a circuit. The result of this process is a Boolean network G3 which can be directly implemented as a digital logic circuit. The computation implemented by G3 corresponds exactly to that implemented by G2 in the sense that ϕo ∘ fG2 = fG3|im(ϕi) ∘ ϕi, where f|im(ϕi) denotes the restriction of the function f to the image of ϕi, i.e. the middle section of the diagram commutes.
It can, therefore, be seen that in a standard digital design process, the only part of the process where an approximation is induced (the upper section of figure 2) is not associated with topological changes to the network, while the only part of the process where topological changes are induced (the middle section of figure 2) is not associated with approximation. This observation will be of importance in the sequel.
The abstract process described in figure 2 is illustrated for a concrete example in figure 3. The small inset figure corresponds to the topology of G1, the original specification graph, where each vertex is associated with a dot-product function parametrized by weights. Fixing w1 and w2 to specific values, quantizing the computation to a 4-bit fixed-point arithmetic and synthesizing the result produces the large main figure, corresponding to G3, where each vertex is associated with a 1- or 2-input Boolean function. Clearly, there are some key differences between these networks apart from their datatypes: G3 has an irregular structure compared to G1 and has clusters of tightly interconnected ‘neighbourhoods’, roughly corresponding to the Boolean networks introduced for each fixed-point arithmetic function in G2. However, by maintaining the entire design process within the same graph formalism, we can exploit the similarities: both are directed graphs operating on typed data, with nodes which can be considered as parametric functions—for the real network the parametric functions are dot products with parameters given by weights, for the Boolean network they are Boolean functions with the parameter indicating which function from 𝔹1 or 𝔹2, the sets of 1- and 2-input Boolean functions, has been selected by the logic synthesis tool.
Figure 3.

A Boolean network (main figure) corresponding to a real network (embedded figure). In the real network, nodes with in-edges all correspond to a dot-product function for some—possibly distinct—parameter w. In the Boolean network, nodes correspond to simple logic functions from 𝔹1 or 𝔹2 produced by a synthesis tool [17], implementing a 4-bit fixed-point quantization of the real network. Tightly interconnected regions can be seen, corresponding to the Boolean implementation of individual arithmetic operations. Rendering of both graphs is via Gephi [18], with colouring by ‘community’. (Online version in colour.)
(a). Binarized neural networks
Driven by the desire to reduce energy consumption and improve performance as much as possible, an extreme form of fixed-point arithmetic has been used in the so-called binarized neural networks (BNNs) [19]. In these neural networks, both the weights and the activation signals are constrained to be drawn from {−1, +1}, resulting in extremely efficient implementations [20]. A classical function f : ℝ^n → ℝ implemented by a component of a deep neural network is aggressively quantized to f̂ : {−1, +1}^n → {−1, +1}, given by f̂(x) = +1 for wT x ≥ c, and f̂(x) = −1 otherwise. The key to the implementation efficiency of such functions comes from the near-elimination of hardware-expensive multiplication operations: multiplication in a vector scalar product is reduced to a Boolean exclusive-NOR (XNOR) function. Meanwhile, the addition in the scalar product is reduced to a calculation of Hamming weight (population count), which admits efficient implementations [21].
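The XNOR/popcount trick can be sketched directly. With weights and activations in {−1, +1} encoded as bits (+1 ↦ 1, −1 ↦ 0), the dot product wᵀx equals n − 2·popcount(w_bits XOR x_bits): XOR marks disagreeing positions, and agreements minus disagreements gives the sum. The bit-packing convention below is an illustrative choice.

```python
# A sketch of the XNOR/popcount reduction of the binarized dot product.
# Encoding: +1 -> bit 1, -1 -> bit 0. Then w.x = n - 2 * popcount(xor),
# since XNOR counts agreements and XOR counts disagreements.

def encode(v):
    """Pack a {-1, +1} vector into an integer bit mask (+1 -> 1)."""
    bits = 0
    for i, s in enumerate(v):
        if s == 1:
            bits |= 1 << i
    return bits

def bnn_dot(w, x):
    n = len(w)
    disagreements = bin(encode(w) ^ encode(x)).count("1")  # popcount
    return n - 2 * disagreements

w = [1, -1, 1, 1]
x = [1, 1, -1, 1]
# The bit-level result matches the arithmetic dot product exactly:
assert bnn_dot(w, x) == sum(wi * xi for wi, xi in zip(w, x))
print(bnn_dot(w, x))  # 0
```

In hardware, the XOR reduces to one gate per position and the population count to an adder tree, which is the source of the efficiency claimed above.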
Although BNNs have received a lot of attention, the general view in the implementation community is that neural networks constructed in this way cannot generally match the classification quality achieved on complex datasets by more precise data representations. This observation has led to manufacturers including configurable finite-precision datapaths, typically down to 4-bit [15] or 8-bit [22]. It is instructive to pursue an alternative view, which we shall now develop.
Definition 3.1. —
A set {f1, …, fn} of Boolean functions is functionally complete if every Boolean function f can be obtained as a finite composition of these functions [23].
Through an appropriate pair of encodings ϕi : 𝒳 → 𝔹^m and ϕo : 𝒴 → 𝔹^n, it therefore follows that any function f between finite sets can be implemented by a Boolean network using a functionally complete set of Boolean functions at its vertices, similar to figure 2, i.e. ϕo ∘ f = fG ∘ ϕi for some Boolean network G.
Theorem 3.2. —
The set of node functions in a Boolean implementation of BNNs is functionally complete.
Proof. —
We shall use the bijection β : {−1, +1} → 𝔹 defined by β(−1) = ⊥, β(+1) = ⊤, and write f̂w,c for the binarized node function with weight vector w and threshold c. Clote & Kranakis [23] provide necessary and sufficient conditions for a set of Boolean functions to be functionally complete; one well-known such set is {∧, ¬}, together with the constants ⊥ and ⊤. The equivalences below can easily be shown through enumeration:

¬β(x) = β(f̂(−1),0(x)),  β(x) ∧ β(y) = β(f̂(1,1),2(x, y)),  ⊥ = β(f̂(0),1(x)),  ⊤ = β(f̂(0),0(x)).   3.1 □
Note that it is, therefore, always possible to construct a real-valued DNN which, when quantized to produce a BNN, implements any Boolean function, including those Boolean functions that would have been derived via traditional design techniques (figure 2) using any finite-precision datatype F, i.e. BNNs easily satisfy our property ❸. The theorem, therefore, challenges the received wisdom that BNNs are not always able to produce the required accuracy on a classification task. So, why this apparent discrepancy in practice? The issue is not with the computational generality of BNNs, but rather with the traditional design technique, which is unable to adapt the topology of the network to the requirements of the underlying datatype.
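The functional completeness argument can be verified by enumeration. The sketch below checks that single binarized neurons, f(x) = +1 if wᵀx ≥ c else −1 (the node function given in the text), realize NOT and AND under the encoding −1 ↦ false, +1 ↦ true; the particular (w, c) choices are illustrative, and since {AND, NOT} is functionally complete, compositions of such neurons can implement any Boolean function.

```python
# Verification by enumeration that single binarized neurons,
# f(x) = +1 if w.x >= c else -1, realize NOT and AND under the
# encoding -1 -> False, +1 -> True. The (w, c) choices are
# illustrative; {AND, NOT} is a functionally complete set.

def neuron(w, c, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= c else -1

def b(s):
    """The bijection beta: {-1, +1} -> {False, True}."""
    return s == 1

for x in (-1, 1):
    assert b(neuron([-1], 0, [x])) == (not b(x))                # NOT
    for y in (-1, 1):
        assert b(neuron([1, 1], 2, [x, y])) == (b(x) and b(y))  # AND
print("NOT and AND realized by single binarized neurons")
```

The enumeration covers all input combinations, so the equivalences hold exactly, mirroring the proof technique of theorem 3.2.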
Corollary 3.3. —
Accuracy-optimal network topology depends on the finite-precision datatype.
This corollary leads to a conjecture on future design methods for efficient neural networks, which generalizes some empirical observations, e.g. that reducing precision can be compensated for by increasing network depth [24] or width [25]. Today, digital circuits are universally implemented using CMOS technology [26], whether in a microprocessor or a custom circuit design. CMOS circuits form extremely efficient implementations of nonlinear operations with a single output bit. This contrasts sharply with the standard nodes of real-valued DNNs, the inner product and the ReLU, which are piecewise linear but arbitrarily precise. The usual approach to this dichotomy is to use wide enough finite-precision datatypes to make the hardware emulate the real-valued model: but at what cost?
Conjecture 3.4. —
In the future, efficient neural network topologies will be driven by both the topology of the data and by the nature of the discrete representation of the activations. The current separation between approximation (without topological changes) and topological changes (without approximation) will not survive the drive for efficient computation.
4. Boolean networks for Lipschitz functions
Since we have demonstrated the link between topology and data representation in deep neural networks, a natural question arises: which topologies may form good choices for learning Boolean functions? Perhaps one may even remove the level of abstraction in figure 2, which would then become equivalent to learning the arithmetic itself.
In §2, we discussed the properties of inference functions in a continuous setting; we shall now extend this discussion to Boolean networks. The aim of this section is to focus on property ❶: how can we develop Boolean networks exhibiting good generalization?
We explained, following Strang, the centrality of continuity to generalization in §2. The advantage of working with Lipschitz continuity is that we can directly transfer this idea to the Boolean setting. Here, every function is Lipschitz, since we may take the Lipschitz constant k = max_{a ≠ b} e(f(a), f(b))/d(a, b), a maximum over finitely many pairs, so it is not meaningful to talk about continuity in absolute terms, but rather about the value of the Lipschitz constant. We shall, therefore, study the question: which Boolean networks give rise to k-Lipschitz functions? The intuition here is that the lower the Lipschitz constant, the better the function meets the desirable property that small input perturbations cause at most small output perturbations.
Before investigating a concrete example of a simple Boolean circuit in this context, let us consider typical ways to define a metric on the Boolean vectors forming the inputs and outputs of a circuit. It will be helpful to define φ : 𝔹 → {0, 1} as φ(⊥) = 0, φ(⊤) = 1. Although not strictly necessary, it is typical to consider the metrics induced by the norms of encoded data, e.g. d(a, b) = ||ϕi(a) − ϕi(b)|| for the domain, where ϕi maps Boolean vectors to real vectors. Here, we may interpret ϕi as denoting a real vector represented by the Boolean inputs. A trivial example would be ϕi(a1, a0) = 2φ(a1) + φ(a0), a representation of a two-bit scalar integer in standard binary arithmetic. A more complex scalar encoding corresponds to IEEE single- or double-precision floating point, as explicitly given in the standard [14].
It is instructive to consider the most basic typical arithmetic circuit, known as a ripple-carry adder, shown in figure 4 [27]. Each leaf node implements a Boolean function known as a full adder: (a, b, c) ↦ (a ⊕ b ⊕ c, (a ∧ b) ∨ (c ∧ (a ⊕ b))), where ⊕ denotes Boolean XOR. We can consider this circuit as implementing a function f : 𝔹^n × 𝔹^n × 𝔹 → 𝔹^{n+1}. If we define wn : 𝔹^n → ℕ as the function mapping vectors of Boolean values to the number they represent in a standard binary integer encoding:
wn(an−1, …, a0) = Σ_{i=0}^{n−1} 2^i φ(ai),   | 4.1 |
then it can be seen why the Boolean network is referred to as an adder: + ∘ (wn, wn, φ) = wn+1 ∘ f, where + denotes the standard integer addition and (wn, wn, φ) acts componentwise on the adder’s three inputs. In the formalism of figure 2, ϕi = (wn, wn, φ) and ϕo = wn+1.
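The word-level relation wn+1(f(a, b, c)) = wn(a) + wn(b) + φ(c) can be checked exhaustively for small n. The sketch below assumes the standard full-adder equations; the least-significant-first list convention is an implementation choice.

```python
# A bit-level sketch of the ripple-carry adder network of figure 4a:
# each leaf is a full adder, carries ripple from least to most
# significant, and the word-level relation w_{n+1}(f(a, b, c)) =
# w_n(a) + w_n(b) + phi(c) is checked exhaustively for small n.

def full_adder(a, b, c):
    s = a ^ b ^ c
    carry = (a and b) or (c and (a ^ b))
    return s, carry

def ripple_add(a_bits, b_bits, carry_in):
    """a_bits, b_bits are least-significant-first lists of bools."""
    out, c = [], carry_in
    for a, b in zip(a_bits, b_bits):
        s, c = full_adder(a, b, c)
        out.append(s)
    return out + [c]                     # n sum bits plus carry-out

def value(bits):                         # w_n, least-significant-first
    return sum(int(b) << i for i, b in enumerate(bits))

n = 3
for x in range(2 ** n):
    for y in range(2 ** n):
        for cin in (False, True):
            a = [bool((x >> i) & 1) for i in range(n)]
            b = [bool((y >> i) & 1) for i in range(n)]
            assert value(ripple_add(a, b, cin)) == x + y + int(cin)
print("ripple-carry adder verified for n =", n)
```

The exhaustive check is exactly the statement that the middle section of figure 2 commutes for this circuit.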
Figure 4.
Lipschitz properties and network topology. (a) An n-bit adder network. (b) Reversing ‘carry’ direction.
Let us equip the input and output spaces with suitable metrics, e.g. those induced by the 1-norm of the difference in their word-level representation:
d((a, b, c), (a′, b′, c′)) = |wn(a) − wn(a′)| + |wn(b) − wn(b′)| + |φ(c) − φ(c′)| and e(s, s′) = |wn+1(s) − wn+1(s′)|.   | 4.2 |
Lemma 4.1. —
The function implemented by a ripple-carry adder is 1-Lipschitz for any n.
Proof. —
e(f(a, b, c), f(a′, b′, c′)) = |wn+1(f(a, b, c)) − wn+1(f(a′, b′, c′))| = |(wn(a) + wn(b) + φ(c)) − (wn(a′) + wn(b′) + φ(c′))| ≤ |wn(a) − wn(a′)| + |wn(b) − wn(b′)| + |φ(c) − φ(c′)| = d((a, b, c), (a′, b′, c′)).   4.3 ▪
How does this 1-Lipschitz property arise? Note that from the topology of the network alone, we cannot conclude anything useful about the minimal Lipschitz constant of the function implemented; replacing the function of the leaf nodes with an alternative function can result in a minimal Lipschitz constant of 2^n rather than 1. Changing the metrics—equivalent in the norm-induced case to encoding the input or output with a different number system—could equally impact the Lipschitz properties. Finally, a different network topology based on the same full-adder leaf nodes could clearly lead to a different minimal Lipschitz constant. Thus, the minimal Lipschitz constant exhibited by a function implemented by a network will generally depend on three things: the topology of the network, the leaf node functionality and the encoding/metrics associated with the inputs and outputs of the network. Even if we assume the latter to be fixed, the interaction between the former two features is not ideal if we wish to learn the functionality of nodes in the network: local decisions on Boolean functionality can potentially have a global impact on the generalization behaviour of a network.
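The dependence of the minimal Lipschitz constant on leaf functionality can be computed by brute force for small n. The sketch below evaluates a ripple-carry topology under the word-level metrics of §4, once with the standard full-adder leaf and once with an altered leaf; the altered leaf (swapping the sum and carry outputs) is an illustrative choice, not the specific alternative alluded to in the text.

```python
# Brute-force computation of the minimal Lipschitz constant of a small
# ripple-carry network under word-level metrics, for two leaf choices.
# The "swapped" leaf (sum/carry outputs exchanged) is illustrative only.

def value(bits):
    return sum(int(b) << i for i, b in enumerate(bits))

def min_lipschitz(leaf, n=3):
    def network(x, y, cin):
        out, c = [], cin
        for i in range(n):
            s, c = leaf(bool((x >> i) & 1), bool((y >> i) & 1), c)
            out.append(s)
        return value(out + [c])          # word-level output w_{n+1}
    points = [(x, y, c) for x in range(2 ** n)
              for y in range(2 ** n) for c in (False, True)]
    vals = {p: network(*p) for p in points}
    best = 0.0
    for p in points:
        for q in points:
            dist = abs(p[0] - q[0]) + abs(p[1] - q[1]) + abs(int(p[2]) - int(q[2]))
            if dist:
                best = max(best, abs(vals[p] - vals[q]) / dist)
    return best

full_adder = lambda a, b, c: (a ^ b ^ c, (a and b) or (c and (a ^ b)))
swapped = lambda a, b, c: ((a and b) or (c and (a ^ b)), a ^ b ^ c)
print(min_lipschitz(full_adder), min_lipschitz(swapped))
```

With the standard leaf the constant is exactly 1, matching lemma 4.1; the altered leaf on the same topology and metrics blows the constant up, illustrating how local functional choices have global Lipschitz consequences.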
Learning from the n-bit adder example, one natural approach to generating functions with low Lipschitz constant appears to be to reverse the direction of the ‘carry’ edges ci. If these edges are reversed, then no path exists between ai, bi, ci and sj or cj for any j > i, meaning that changes in low-significance input bits cannot impact high-significance output bits. This topology is appealing because it corresponds directly to most-significant-digit-first arithmetic, a universal approach to computation pioneered by Ercegovac [28] in the 1970s for computer arithmetic: through a suitable change in the encoding wk, this topology can be used to implement all the basic arithmetic operators [29]. However, such a topology does not guarantee a particular Lipschitz constant for the metrics defined in (4.2), because small changes in the input metric can still correspond to large changes in the most significant digit: one sees this, for example, with the transition 011111 → 100000, a change of one in value but with a most-significant-bit flip. To avoid this issue, one must either change the encoding of the network inputs and outputs or place restrictions on the Boolean functionality of the nodes. The former approach—selecting an optimal encoding of the input space as Booleans—is an open problem. A trivial but inefficient solution would be to use a unary encoding. More efficient solutions could potentially draw deeply from the area of combinatorial Gray codes [30], i.e. methods for generating combinatorial objects (such as the inputs of a discrete-valued neural network, drawn from 𝔹^n) so that successive objects differ by a small degree. As noted by Savage [30], Gray codes are not preserved under bijection, and it is exactly this property that could suggest implementation-appropriate coding.
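The simplest combinatorial Gray code, the binary-reflected code, illustrates the kind of encoding alluded to above: consecutive integers map to codewords differing in exactly one bit, so a unit step in the numeric metric is a unit step in Hamming distance. The sketch below makes no claim that this code is optimal for any particular network.

```python
# A sketch of the binary-reflected Gray code: consecutive integers map
# to codewords differing in exactly one bit, so a unit step in value
# is a unit step in Hamming distance. Illustrative only; not claimed
# to be the right encoding for any particular inference network.

def gray(i):
    return i ^ (i >> 1)

def hamming(a, b):
    return bin(a ^ b).count("1")

n = 5
for i in range(2 ** n - 1):
    assert hamming(gray(i), gray(i + 1)) == 1
print([format(gray(i), "03b") for i in range(8)])
```

Contrast this with standard binary, where the transition 011111 → 100000 flips six bits: under the Gray code, the corresponding step flips exactly one.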
Open Problem 4.2. —
For future deep neural networks, what input and output codings are commensurate with the properties of good inference functions identified in §2, and how do they depend on the input probability space and oracle function?
The author performed the following simple experiment to investigate the latter approach, i.e. restricting Boolean functionality to ensure a certain Lipschitz constant for fixed topology and metrics. Consider the simple topology shown in figure 4b with associated metrics d((c, a1, a0), (c′, a1′, a0′)) = |w3(c, a1, a0) − w3(c′, a1′, a0′)| and e((s1, s0, q), (s1′, s0′, q′)) = |w3(s1, s0, q) − w3(s1′, s0′, q′)|. There are (2^4 × 2^4)^2 choices for the Boolean functionality of (f1, f0). If we assume that neither constant functions nor functions that ignore one of their inputs are of interest, then there are 100 choices for each of f1 and f0. A complete enumeration identifies 376 pairs of Boolean functions f1, f0 for which the network implements a 2-Lipschitz function. One may go further and ask whether we can identify a set of choices for the function of f1 and a set of choices for the function of f0 such that we may arbitrarily choose functions from these two sets while maintaining the 2-Lipschitz property, effectively decoupling the choice of leaf functionality from topology. We shall refer to such sets as a ‘functional decoupling’ for the given topology, value of k and metrics.
Definition 4.3. —
Given a network N with enumerated vertices ni, implementing a k-Lipschitz function, a functional decoupling is a tuple of sets Si such that the function of each vertex ni may be replaced by any element of Si independently, while maintaining the k-Lipschitz property.
Consider a bipartite graph with node set N1 ∪ N0, where N1 is in one-to-one correspondence with the set of choices for function f1 and N0 similarly for function f0, and in which edges {n1, n0} correspond to the pairs of functions resulting in a network implementing a 2-Lipschitz function. A biclique [31] of this graph corresponds to a functional decoupling. Using the algorithm of Gillis & Glineur [32] reveals such a biclique of size (6, 10) for this topology,1 i.e. any combination of these choices of node function results in a 2-Lipschitz network function.
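For a graph of this size, the biclique search can also be sketched by brute force (the continuous optimization approach of Gillis & Glineur [32] is not reproduced here; the function name and toy graph below are illustrative):

```python
# Brute-force biclique search in a bipartite graph: find `a` left nodes and
# `b` right nodes such that every chosen left node is adjacent to every
# chosen right node. Feasible for the 100 x 100 graph discussed above.
from itertools import combinations

def find_biclique(edges, left, right, a, b):
    """Return an (a, b) biclique of the bipartite graph, or None.

    edges: set of (l, r) pairs; left/right: the two node sets.
    """
    for ls in combinations(left, a):
        # Right nodes adjacent to all chosen left nodes.
        common = [r for r in right if all((l, r) in edges for l in ls)]
        if len(common) >= b:
            return ls, tuple(common[:b])
    return None

# Toy example: a complete bipartite K_{2,3} plus one stray edge.
edges = {(l, r) for l in "AB" for r in "xyz"} | {("C", "x")}
assert find_biclique(edges, "ABC", "xyz", 2, 3) == (("A", "B"), ("x", "y", "z"))
```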
Open Problem 4.4. —
Given metrics on input and output, a Lipschitz constant k, and a network topology, is there a useful characterization of exactly which functions can be implemented by a network with this topology using only leaf functions drawn from functional decouplings?
The significance of this problem is that it would help us to characterize the extent to which it is useful to consider promising network topologies separately from leaf functions.
5. The discrete-continuous divide: preliminary work
One of today’s most promising platforms for the practical realization of very high-performance deep neural networks is the FPGA [33]. These architectures provide an interesting case study for exploring some of the ideas presented in this paper, because there is a natural choice for the set of leaf functions implemented in a network: the set of K-input Boolean functions [23], where K is a device-specific parameter. This is a natural choice because the underlying architecture is actually built of small physical Boolean lookup tables, each programmable to implement any one of these functions, together with programmable interconnect able to connect the lookup tables in an effectively arbitrary topology (K = 6 is common).
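A lookup table of this kind is easy to model in software (an illustrative sketch; the class name and interface are not vendor APIs):

```python
# A minimal model of a K-input lookup table: 2**K stored bits, one per
# input combination, so a single LUT can realize any of the 2**(2**K)
# K-input Boolean functions.

class LUT:
    def __init__(self, k: int, truth_table: int):
        """truth_table is a 2**k-bit integer; bit i gives the output when
        the inputs, read as a binary number, equal i."""
        assert 0 <= truth_table < (1 << (1 << k))
        self.k, self.table = k, truth_table

    def __call__(self, *inputs: int) -> int:
        assert len(inputs) == self.k
        index = 0
        for bit in inputs:  # first input is the most significant bit
            index = (index << 1) | bit
        return (self.table >> index) & 1

# Example: with k = 2, the truth table 0b1001 is XNOR.
xnor = LUT(2, 0b1001)
assert [xnor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [1, 0, 0, 1]
```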
Wang et al. [34] have recently begun to explore the potential for using the additional flexibility provided by these lookup tables. In this initial work—which we call LUTNet—we begin by taking a reasonably traditional approach, following [35]: some standard DNN benchmarks from the literature are quantized to use single-bit weights from {−1, +1} and retrained to improve classification accuracy. In the resulting network, many of the vertices compute the XNOR function, usually as part of the standard inner product common in DNNs. We observe that such computation is inefficient, because the basic lookup tables are not being used to their full potential: in the extreme, we have hardware capable of implementing any K-input Boolean function used solely to implement 2-input XNOR gates. We therefore modify the network in the following way. First, we replace the vertex functions {−1, +1} × {−1, +1} → {−1, +1} given by XNOR by the strictly more general class of functions (isomorphic to) {−1, +1}^K → {−1, +1}, where a parameter selects the particular function. To use the additional support of these functions (K two-valued activations rather than just one), we heuristically allocate the additional inputs to connect to other nodes in the network whose weights had low values before quantization. After initially setting the new functions to reproduce the original, i.e. selecting the parameters to be precisely those regenerating the XNOR function, we then retrain the network using standard stochastic gradient descent (SGD) methods. Finally, we simplify the network topology through a standard ‘pruning’ technique [36]. The intuition behind this process is that the nonlinear generality of the new function class may compensate for the pruning, resulting in higher accuracy for a given number of Boolean network nodes.
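The substitution at the heart of this modification can be sketched as follows (illustrative code, not the LUTNet implementation itself): a binarized product term over {−1, +1} is replaced by an arbitrary truth-table-parameterized function of K activations, initialized to reproduce the original term exactly.

```python
# Sketch of the LUTNet substitution: a binarized product term over
# {-1, +1} is replaced by an arbitrary Boolean function of K activations,
# parameterized by its truth table, and initialized so that it reproduces
# the original term exactly.

def xnor_term(w: int, a: int) -> int:
    """Standard BNN product over {-1, +1}: equal to w * a (XNOR in the
    Boolean encoding)."""
    return w * a

def lut_term(truth_table, activations):
    """Generalized node: any function {-1, +1}**K -> {-1, +1}, with one
    truth-table entry per combination of the K activations."""
    index = sum(1 << i for i, a in enumerate(activations) if a == 1)
    return truth_table[index]

# Initialization for K = 2 with fixed weight w = -1: the table reproduces
# xnor_term on the first activation and ignores the extra input;
# retraining is then free to move the table away from this starting point.
w = -1
table = [xnor_term(w, a0) for a1 in (-1, 1) for a0 in (-1, 1)]
assert all(
    lut_term(table, (a0, a1)) == xnor_term(w, a0)
    for a0 in (-1, 1) for a1 in (-1, 1)
)
```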
This is indeed what we observe in figure 5, which plots the classification error rate on the test set against network area (in LUTs) for classification of the CIFAR-10 [37] dataset, containing 60 000 32 × 32 colour images in 10 classes. The CNV neural network model [20] is the baseline topology, with the modifications described above made to its largest layer: in the case of CNV, a sizeable convolutional layer with 256 outputs, operating with 3 × 3 kernels [34]. For this network, we see a reduction in area consumption of approximately 50% compared to the baseline implementation operating at the same classification accuracy.
Figure 5. Area-accuracy trade-off for pruned ReBNet [35] (times symbol), 2-LUTNet (open circle), 4-LUTNet (plus symbol) and 6-LUTNet (filled circle) with the CNV network and CIFAR-10 dataset. Each point represents a distinct pruning threshold. The dashed line shows the baseline accuracy of unpruned ReBNet. (Online version in colour.)
Using SGD in this discrete setting requires a lifting to a continuous interpolation, as described in [34]. LUTNet thus represents one direction in which to cross the discrete-continuous divide; some possible approaches to crossing in the opposite direction are explored in §6.
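One generic way to perform such a lifting is multilinear interpolation of the truth table (the precise construction in [34] differs in detail): real-valued inputs in [0, 1] interpolate between the stored parameters, which coincide with the Boolean truth table at the corners, so gradients with respect to the parameters are well defined.

```python
# Generic multilinear lifting of a Boolean lookup table to a continuous,
# differentiable function, so SGD applies to the table entries theta.
from itertools import product

def soft_lut(theta, x):
    """Multilinear interpolation of table theta (length 2**K) at point x
    in [0, 1]**K; agrees with theta at the Boolean corners."""
    k = len(x)
    out = 0.0
    for bits in product((0, 1), repeat=k):
        index = sum(b << i for i, b in enumerate(bits))
        weight = 1.0
        for xi, b in zip(x, bits):
            weight *= xi if b else (1.0 - xi)
        out += weight * theta[index]
    return out

# At Boolean corners the interpolation reproduces the truth table exactly:
xor_table = [0.0, 1.0, 1.0, 0.0]  # theta[index], index = x0 + 2*x1
assert soft_lut(xor_table, (1.0, 0.0)) == 1.0
# Between corners it is smooth, so gradients with respect to theta exist:
assert soft_lut(xor_table, (0.5, 0.5)) == 0.5
```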
6. Future directions
It is the central thesis of this paper that there is much to learn by viewing neural networks and digital circuits as two embodiments of typed operations on graphs.
The topic of determining a good neural network topology is still in its infancy [1]. We have shown that there are additional dimensions to this problem: finite-precision data representation and the metrics determining ‘closeness’ of input and output also have a direct impact on efficient network topologies. Coupling these two concerns would seem to be a significant avenue for fruitful research in deep learning.
While the literature on learning appropriate parameters for predefined neural network topologies has developed rapidly in recent years [1], systematic algorithmic approaches to learning neural network topologies from data have only recently begun to appear [38] and the underlying theory is limited. This mirrors the situation in the automated synthesis of digital circuits before the 1990s: the automated synthesis of logic circuits consisting of two layers (one of AND gates with optional input inversion, followed by one of OR gates) had been understood theoretically [39] and practically [40] before the 1990s, but only during that decade did the technology to optimize multilevel (‘deep’) Boolean networks emerge [13]. There may be considerable scope for crossover between the electronic design automation community and the deep learning community based on this work. Recently, there has been a resurgence of interest in the problem of exact (i.e. optimal) logic synthesis [41], which—albeit in a different setting—also needs to explore topology and node functionality simultaneously, and is stymied by the resulting computational complexity. This suggests that a possible avenue for future development is to lift the progress being made in this area to richer data types.
The problem of placing bounds on the number of graph nodes, drawn from a certain basis set, required to meet a given quality of classification, e.g. via metric (1.1), could be a very interesting topic for further theoretical study. There is a rich literature on circuit complexity bounds [23], and it may be possible to combine these ideas with probabilistic notions from Minimum Description Length theory [42] to bound minimal circuit sizes.2
The k-Lipschitz property used in this paper is a global property, yet it may be more natural to consider local properties. Extending the approach to networks that implement some form of locally k-Lipschitz function with high probability, when the input is viewed as a random variable, may be a fruitful way forward. Beyond reasoning about the generalization behaviour of neural networks, prior work has shown that Lipschitz continuity can play a role in the regularization of neural network models [43] and that minimal Lipschitz constants are hard to compute a posteriori [44]. These results suggest that a holistic approach to topology and node functionality is appropriate, as argued in this paper.
The path seems open to investigate a variety of coding techniques for network inputs and outputs that give rise to desirable properties regarding generalization as well as the efficiency of implementation. In a different context, Dietterich & Bakiri [45] consider distributed output coding for classification, and it may be the case that coding theory and combinatorial enumeration approaches [30] have the potential to shed significant light on the key elements of an efficient inference function discussed in this article.
In addition to exploring suitable classes of Boolean function, for example by attacking Open Problem 2 described in §4, there may be value in generalizing nodes to exhibit nondeterministic behaviour. In particular, stochastic rounding has recently emerged as a promising avenue both in the training of deep neural networks [46] and in the simulation of biologically plausible neural models [47].
Finally, we have focused entirely on deep neural networks in this article. There are, of course, many other classical machine learning techniques [48]. We should note that once an inference algorithm for one of these classical methods has been decided upon, it can typically be expressed as a network (in the sense of §2) corresponding to the data-flow graph [49] of the algorithm. Just as LUTNet, described in §5, uses BNNs as a starting point for re-training, it is equally possible to use such a network as a starting point for re-training or topological exploration.
Notation
ℝ denotes the reals and 𝔹 the set of Boolean truth values, where ⊥ denotes false and ⊤ denotes true. ReLU denotes the rectified linear unit function x ↦ max(0, x), and σ denotes the sigmoid function x ↦ 1/(1 + exp(−x)). We denote function composition by ∘. Y^X denotes the set of all functions from X to Y. The set of integers is denoted by ℤ, and the set of integers bounded in absolute value by n by ℤn. The following Boolean connectives are used: ¬ denotes negation, ∧ denotes conjunction, ∨ denotes disjunction and ⊕ denotes exclusive or (XOR).
Acknowledgements
The author acknowledges Mr Erwei Wang for his help in producing figure 3 and Dr Christos Bouganis for the comments on the initial draft and for first interesting me in modern deep learning. One of the anonymous reviewers of the original manuscript made some insightful suggestions for future work, and I would like to thank this reviewer for his/her generous suggestions.
Footnotes
The latter interesting suggestion originated from an anonymous reviewer of the original manuscript.
Data accessibility
Code to support §5 is available at https://github.com/constantinides/rethinking
Competing interests
My research chair is part-funded by Imagination Technologies Ltd, and I have cited a blog post from the company in this work.
Funding
This work was financially supported by the Engineering and Physical Sciences Research Council (EP/P010040/1), Imagination Technologies and the Royal Academy of Engineering.
References
- 1.Goodfellow I, Bengio Y, Courville A. 2016. Deep learning. Cambridge, MA: MIT Press. [Google Scholar]
- 2.Wang E, Davis J, Zhao R, Ng H-C, Niu X, Luk W, Cheung P, Constantinides G. 2019. Deep neural network approximation for custom hardware: where we’ve been, where we’re going. ACM Comput. Surv. 52 . [Google Scholar]
- 3.Searcóid MO. 2007. Metric spaces. Berlin, Germany: Springer. [Google Scholar]
- 4.Scheinberg K. 2016. Evolution of randomness in optimization methods for supervised machine learning. SIAG/OPT views and news (ed. S Wild), vol. 24, pp. 1–7. http://wiki.siam.org/siag-op/index.php/View_and_News.
- 5.LeCun Y. 1989. Generalization and network design strategies. University of Toronto, Technical Report. CRG-TR-89-4.
- 6.Hochreiter S, Schmidhuber J. 1997. Long short-term memory. Neural Comput. 9, 1735–1780. ( 10.1162/neco.1997.9.8.1735) [DOI] [PubMed] [Google Scholar]
- 7.Kahn G. 1974. The semantics of a simple language for parallel programming. In Proc. IFIP Congress on Information Processing, Stockholm, Sweden, 5–10 August 1974. Amsterdam, Netherlands: North-Holland.
- 8.Strang G. 2018. The functions of deep learning. SIAM News, December.
- 9.Blum L, Shub M, Smale S. 1989. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bull. Am. Math. Soc. 21, 1–46. ( 10.1090/S0273-0979-1989-15750-9) [DOI] [Google Scholar]
- 10.Hornik K. 1991. Approximation capabilities of multilayer feedforward networks. Neural Netw. 4, 251–257. ( 10.1016/0893-6080(91)90009-T) [DOI] [Google Scholar]
- 11.Neal R. 1994. Priors for infinite networks. University of Toronto, Technical Report. CRG-TR-94-1.
- 12.Du SS, Zhai X, Póczos B, Singh A. 2018. Gradient descent provably optimizes over-parameterized neural networks. CoRR (http://arxiv.org/abs/1810.02054).
- 13.Brayton R, Hachtel G, Sangiovanni-Vincentelli A. 1990. Multilevel logic synthesis. Proc. IEEE 78, 264–300. ( 10.1109/5.52213) [DOI] [Google Scholar]
- 14.IEEE standard for floating-point arithmetic. IEEE Std. 754-2008. 2008.
- 15.Har-Even B. 2018 PowerVR Series2NX: Raising the bar for embedded AI. www.imgtec.com/blog/powervr-series2nx-raising-the-bar-for-embedded-ai/.
- 16.Higham N. 2002. Accuracy and stability of numerical algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics. [Google Scholar]
- 17.Wolf C. 2013 Yosys open synthesis suite. www.clifford.at/yosys/.
- 18.Gephi 2017 The open graph viz platform. http://gephi.org/.
- 19.Courbariaux M, Bengio Y. 2016. Binarized neural networks: training deep neural networks with weights and activations constrained to +1 or −1. CoRR (http://arxiv.org/abs/1602.02830).
- 20.Umuroglu Y, Fraser NJ, Gambardella G, Blott M, Leong P, Jahre M, Vissers K. 2017. FINN: A framework for fast, scalable binarized neural network inference. In Proc. ACM/SIGDA Int. Symp. on Field-Programmable Gate Arrays, Monterey, CA, 22–24 February 2017, pp. 65–74. New York, NY: ACM.
- 21.Warren H. 2012. Hacker’s delight, 2nd edn. Reading, MA: Addison Wesley. [Google Scholar]
- 22.Triggs R. 2018 A closer look at Arm’s machine learning hardware. www.androidauthority.com/arm-project-trillium-842770/.
- 23.Clote P, Kranakis E. 2002. Boolean functions and computation models. Berlin, Germany: Springer. [Google Scholar]
- 24.Venkatesh G, Nurvitadhi E, Marr D. 2017. Accelerating deep convolutional neural networks using low precision and sparsity. In Proc. IEEE ICASSP, New Orleans, LA, 5–9 March 2017. Piscataway, NJ: IEEE.
- 25.Su J. 2018. Artificial neural networks acceleration on field-programmable gate arrays considering model redundancy. Imperial College London, Technical Report.
- 26.Weste N, Harris D. 2002. CMOS VLSI design: a circuits and system perspective. London, UK: Pearson. [Google Scholar]
- 27.Koren I. 2001. Computer arithmetic algorithms. Natick, MA: A. K. Peters. [Google Scholar]
- 28.Trivedi KS, Ercegovac MD. 1977. On-line algorithms for division and multiplication. IEEE Trans. Comput. 26, 681–687. ( 10.1109/TC.1977.1674901) [DOI] [Google Scholar]
- 29.Ercegovac M, Lang T. 2003. Digital arithmetic. Los Altos, CA: Morgan Kaufmann. [Google Scholar]
- 30.Savage C. 1997. A survey of combinatorial Gray codes. SIAM Rev. 39, 605–629. ( 10.1137/S0036144595295272) [DOI] [Google Scholar]
- 31.Bondy J. 1976. Graph theory with applications. Amsterdam, the Netherlands: Elsevier. [Google Scholar]
- 32.Gillis N, Glineur F. 2014. A continuous characterization of the maximum-edge biclique problem. J. Global Optim. 58, 439–464. ( 10.1007/s10898-013-0053-2) [DOI] [Google Scholar]
- 33.Hauck S, DeHon A. 2007. Reconfigurable computing: the theory and practice of FPGA-based computation. Los Altos, CA: Morgan Kaufmann. [Google Scholar]
- 34.Wang E, Davis J, Cheung P, Constantinides G. 2019. LUTNet: Rethinking inference in FPGA soft logic. In Proc. IEEE Int. Symp. on Field-Programmable Custom Computing Machines, San Diego, CA, 28 April–1 May 2019. Piscataway, NJ: IEEE.
- 35.Ghasemzadeh M, Samragh M, Koushanfar F. 2018. ReBNet: residual Binarized neural network. In Proc. IEEE Int. Symp. on Field-Programmable Custom Computing Machines, Boulder, CO, 29 April–1 May 2018. Piscataway, NJ: IEEE.
- 36.Han S, Pool J, Tran J, Dally WJ. 2015. Learning both weights and connections for efficient neural networks. In Proc. Conf. on Neural Information Processing Systems, Montréal, Canada, December 2015.
- 37.Krizhevsky A. 2009. Learning multiple layers of features from tiny images. University of Toronto, Technical Report. www.cs.toronto.edu/kriz/learning-features-2009-TR.pdf.
- 38.Zoph B, Le QV. 2016. Neural architecture search with reinforcement learning. CoRR (http://arxiv.org/abs/1611.01578).
- 39.Quine W. 1952. The problem of simplifying truth functions. Am. Math. Mon. 59, 521–531. ( 10.1080/00029890.1952.11988183) [DOI] [Google Scholar]
- 40.Ruddell R, Sangiovanni-Vincentelli A. 1987. Multiple-valued minimization for PLA optimization. IEEE Trans. Computer-Aided Des. 6, 727–750. ( 10.1109/TCAD.1987.1270318) [DOI] [Google Scholar]
- 41.Haaswijk W, Mishchenko A, Soeken M, Micheli GD. 2018. SAT based exact synthesis using DAG topology families. In Proc. Design Automation Conf., San Francisco, CA, 24–29 June 2018. New York, NY: ACM.
- 42.Rissanen J. 1978. Modeling by shortest data description. Automatica 14, 465–471. ( 10.1016/0005-1098(78)90005-5) [DOI] [Google Scholar]
- 43.Gouk H, Frank E, Pfahringer B, Cree M. 2018. Regularisation of neural networks by enforcing Lipschitz continuity. CoRR (http://arxiv.org/abs/1804.04368).
- 44.Scaman K, Virmaux A. 2018. Lipschitz regularity of deep neural networks: Analysis and efficient estimation. In Proc. Neural Information Processing Systems, Montréal, Canada, 3–8 December 2018. pp. 3839–3848. https://papers.nips.cc/paper/7640-lipschitz-regularity-of-deep-neural-networks-analysis-and-efficient-estimation.pdf.
- 45.Dietterich T, Bakiri G. 1995. Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 2, 263–286. ( 10.1613/jair.105) [DOI] [Google Scholar]
- 46.Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P. 2015. Deep learning with limited numerical precision. In Proc. 32nd Int. Conf. on Machine Learning, Lille, France, 6–11 July 2015. pp. 1737–1746. http://proceedings.mlr.press/v37/gupta15.pdf.
- 47.Hopkins M, Mikaitis M, Lester D, Furber S. 2019. Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ODEs. CoRR (http://arxiv.org/abs/1904.11263). [DOI] [PMC free article] [PubMed]
- 48.Murphy K. 2012. Machine learning: a probabilistic perspective. Cambridge, MA: MIT Press. [Google Scholar]
- 49.Nielson F, Nielson H, Hankin C. 2010. Principles of program analysis. Berlin, Germany: Springer. [Google Scholar]