Skip to main content
Springer logoLink to Springer
. 2019 Nov 18;62(3):417–444. doi: 10.1007/s10851-019-00919-7

A Convex Variational Model for Learning Convolutional Image Atoms from Incomplete Data

A Chambolle 1, M Holler 2,, T Pock 3
PMCID: PMC7138786  PMID: 32300265

Abstract

A variational model for learning convolutional image atoms from corrupted and/or incomplete data is introduced and analyzed both in function space and numerically. Building on lifting and relaxation strategies, the proposed approach is convex and allows for simultaneous image reconstruction and atom learning in a general, inverse problems context. Further, motivated by an improved numerical performance, also a semi-convex variant is included in the analysis and the experiments of the paper. For both settings, fundamental analytical properties allowing in particular to ensure well-posedness and stability results for inverse problems are proven in a continuous setting. Exploiting convexity, globally optimal solutions are further computed numerically for applications with incomplete, noisy and blurry data and numerical results are shown.

Keywords: Variational methods, Learning approaches, Inverse problems, Functional lifting, Convex relaxation, Convolutional Lasso, Machine learning, Texture reconstruction

Introduction

An important task in image processing is to achieve an appropriate regularization or smoothing of images or image-related data. In particular, this is indispensable for most application-driven problems in the field, such as denoising, inpainting, reconstruction, segmentation, registration or classification. Also beyond imaging, for general problem settings in the field of inverse problems, an appropriate regularization of unknowns plays a central role as it allows for a stable inversion procedure.

Variational methods and partial-differential-equation-based methods can now be regarded as classical regularization approaches of mathematical image processing (see, for instance, [5, 42, 54, 62]). An advantage of such methods is the existence of a well-established mathematical theory and, in particular for variational methods, a direct applicability to general inverse problems with provable stability and recovery guarantees [31, 32]. While in particular piecewise smooth images are typically well described by such methods, their performance for oscillatory- or texture-like structures, however, is often limited to predescribed patterns (see, for instance, [27, 33]).

Data-adaptive methods such as patch- or dictionary-based methods (see, for instance, [2, 13, 22, 23, 37]) on the other hand are able to exploit redundant structures in images independent of an a priori description and are, at least for some specific tasks, often superior to variational- and PDE-based methods. In particular, methods based on (deep) convolutional neural networks are inherently data adaptive (though data adaptation takes place in a preprocessing/learning step) and have advanced the state of the art significantly in many typical imaging applications in the past years [38].

Still, for data-adaptive approaches, neither a direct applicability to general inverse problems nor a corresponding mathematical understanding of stability results or recovery guarantees are available to the extend they are with variational methods. One reason for this lies in the fact that, for both classical patch- or dictionary-based methods and neural-network-based approaches, data adaptiveness (either online or in a training step) is inherently connected to the minimization of a non-convex energy. Consequently, standard numerical approaches such as alternating minimization or (stochastic) gradient descent can, at best, only be guaranteed to deliver stationary points of the energy, hence suffering from the risk of delivering suboptimal solutions.

The aim of this work is to provide a step toward bridging the gap between data-adaptive methods and variational approaches. As a motivation, consider a convolutional Lasso problem [39, 65] of the form

min(ci)i,(pi)iλDi=1kcipi,u0+i=1kci1s.t.piC. 1

Here, the goal is to learn image atoms (pi)i (constrained to a set C) and sparse coefficient images (ci)i which, via a convolution, synthesize an image corresponding to the given data u0 (with data fidelity being measured by D). This task is strongly related to convolutional neural networks in many ways, see [46, 61, 65] and the paragraph Connections to deep neural networks below for details. A classical interpretation of this energy minimization is that it allows for a sparse (approximate) representation of (possible noisy) image data, but we will see that this synopsis can be extended to include a forward model for inverse problems and a second image component of different structures. In any case, the difficulty here is non-convexity of the energy in (1), which complicates both analysis and its numerical solution. In order to overcome this, we build on a tensorial-lifting approach and subsequent convex relaxation. Doing so and starting from (1), we introduce a convex variational method for learning image atoms from noisy and/or incomplete data in an inverse problems context. We further extend this model by a semi-convex variant that improves the performance in some applications. For both settings, we are able to prove well-posedness results in function space and, for the convex version, to compute globally optimal solutions numerically. In particular, classical stability and convergence results for inverse problems such as the ones of [31, 32] are applicable to our model, providing a stable recovery of both learned atoms and images from given, incomplete data.

Our approach allows for a joint learning of image atoms and image reconstruction in a single step. Nevertheless, it can also be regarded purely as an approach for learning image atoms from potentially incomplete data in a training step, after which the learned atoms can be further incorporated in a second step, e.g., for reconstruction or classification. It should also be noted that, while we show some examples where our approach achieves a good practical performance for image reconstruction compared to the existing methods, the main purpose of this paper is to provide a mathematical understanding rather than an algorithm that achieves the best performance in practice.

Related Works Regarding the existing literature in the context of data-adaptive variational learning approaches in imaging, we note that there are many recent approaches that aim to infer either parameter or filters for variational methods from given training data, see, e.g., [14, 30, 36]. A continuation of such techniques more toward the architecture of neural networks is so-called variational networks are so-called [1, 35] where not only model parameters but also components of the solution algorithm such as stepsizes or proximal mappings are learned. We also refer to [41] for a recent work on combining variational methods and neural networks. While for some of those methods also a function space theory is available, the learning step is still non-convex and the above approaches can in general only be expected to provide locally optimal solutions.

In contrast to that, in a discrete setting, there are many recent directions of research which aim to overcome suboptimality in non-convex problems related to learning. In the context of structured matrix factorization (which can be interpreted as the underlying problem of dictionary learning/sparse coding in a discrete setting), the authors of [29] consider a general setting of which dictionary learning can be considered as a special case. Exploiting the existence of a convex energy which acts as lower bound, they provide conditions under which local optima of the convex energy are globally optimal, thereby reducing the task of globally minimizing a non-convex energy to finding local optima with certain properties. In a similar scope, a series of works in the context of dictionary learning (see [4, 55, 59] and the references therein) provide conditions (e.g., assuming incoherence) under which minimization algorithms (e.g., alternating between dictionary and coefficient updates) can be guaranteed to converge to a globally optimal dictionary with high probability. Regarding these works, it is important to note that, as discussed in Sect. 2.1 (see also [26]), the problem of dictionary learning is similar but yet rather different to the problem of learning convolutional image atoms as in (1) in the sense that the latter is shift-invariant since it employs a convolution to synthesize the image data (rather than comparing with a patch matrix). While results on structured matrix decomposition that allow for general data terms (such as [29]) can be applied also to convolutional sparse coding by including the convolution in the data term, this is not immediate for dictionary learning approaches.

Although having a different motivation, the learning of convolutional image atoms is also related to blind deconvolution, where one aims to simultaneously recover a blurring kernel and a signal from blurry, possibly noisy measurements. While there is a large literature on this topic (see [15, 20] for a review), in particular lifting approaches that aim at a convex relaxation of underlying bilinear problem in a discrete setting are related to our work. In this context, the goal is often to obtain recovery guarantees under some assumptions on the data. We refer to [40] for a recent overview, to [3] for a lifting approach that poses structural assumptions on both the signal and the blurring kernel and uses a nuclear-norm-based convex relaxation, and to [21] for a more generally applicable approach that employs the concept of atomic norms [19]. Moreover, the work [40] studies the joint blind deconvolution and demixing problem, which has the same objective as (1) of decomposing a signal into a sum of convolutions but is motivated in [40] from multiuser communication. There, the authors again pose some structural assumptions on the underlying signal but, in contrast to previous works, deal with recovery guarantees of non-convex algorithms, which are computationally more efficient than those addressing a convex relaxation.

Connections to Deep Neural Networks Regarding a deeper mathematical understanding of deep convolutional neural networks, establishing a mathematical theory for convolutional sparse coding is particularly relevant due to a strong connection of the two methodologies. Indeed, it is easy to see that for instance in case D(u,u0)=12u-u022 and (pi)i is fixed, the numerical solution of the convolutional sparse coding problem (1) via forward-backward splitting with a fixed number of iterations is equivalent to a deep residual network with constant parameters. Similarly, recent results (see [46, 47, 58]) show a strong connection of thresholding algorithms for multilayer convolutional sparse coding with feed-forward convolutional neural networks. In particular, this connection is exploited to transfer reconstruction guarantees from sparse coding to the forward pass of deep convolutional neural networks.

In this context, we also highlight [61], which very successfully employs filter learning in convolutional neural networks as regularization prior in image processing tasks. That is, [61] uses simultaneous filter learning and image synthesis for regularization, without prior training. The underlying architecture is strongly related to the energy minimization approach employed here, and again we believe that a deeper mathematical analysis of the latter will be beneficial to explain the success of the former.

Another direct relation to deep neural networks is given via deconvolutional neural networks as discussed in [65], which solve a hierarchy of convolutional sparse coding problems to obtain a feature representation of given image data. Last but not least, we also highlight that the approach discussed in this paper can be employed as feature encoder (again potentially also using incomplete/indirect data measurements), which provides a possible preprocessing step that is very relevant in the context of deep neural networks.

Outline of the Paper

In Sect. 2, we present the main ideas for our approach in a formal setting. This is done from two perspectives, once from the perspective of a convolutional Lasso approach and once from the perspective of patch-based methods. In Sect. 3, we then carry out an analysis of the proposed model in function space, where we motivate our approach via convex relaxation and derive well-posedness results. Section 4 then presents the model in a discrete setting and the numerical solution strategy, and Sect. 5 provides numerical results and a comparison to the existing methods. At last, an “Appendix” provides a brief overview on some results for tensor spaces that are used in Sect. 3. We note that, while the analysis of Sect. 3 is an important part of our work, the paper is structured in a way such that readers only interested in the conceptual idea and the realization of our approach can skip Sect. 3 and focus on Sects. 2 and 4.

A Convex Approach to Image Atoms

In this section, we present the proposed approach to image-atom-learning and texture reconstruction, where we focus on explaining the main ideas rather than precise definitions of the involved terms. For the latter, we refer to Sect. 3 for the continuous model and Sect. 4 for the discrete setting.

Our starting point is the convolutional Lasso problem [18, 65], which aims to decompose a given image u as a sparse linear combination of basic atoms (pi)i=1k with coefficient images (ci)i=1k by inverting a sum of convolutions as follows

min(ci)i,(pi)ii=1kci1s.t.u=i=1kcipi,pi21fori=1,,k.

It is important to note that, by choosing the (ci)i to be composed of delta peaks, this allows to place the atoms (pi)i at any position in the image. In [65], this model was used in the context of convolutional neural networks for generating image atoms and other image-related tasks. Subsequently, many works have dealt with the algorithmic solution of the resulting optimization problem, where the main difficulty lies in the non-convexity of the atom-learning step, and we refer to [28] for a recent review.

Our goal is to obtain a convex relaxation of this model that can be used for both, learning image atoms from potentially noisy data and image reconstruction tasks such as inpainting, deblurring or denoising. To this aim, we lift the model to the tensor product space of coefficient images and image atoms, i.e., the space of all tensors C=icipi with cipi being a rank-1 tensor such that (cipi)(x,y)=ci(x)pi(y). We refer to Fig. 1 for a visualization of this lifting in a one-dimensional setting, where both coefficients and image atoms are vectors and cipi corresponds to a rank-one matrix. Notably, in this tensor product space, the convolution cipi can be written as linear operator K^ such that K^C(x)=iK^(cipi)(x)=ipi(x-y)ci(y)dy. Exploiting redundancies in the penalization of (ci1)i and the constraint pi21, i=1,k and rewriting the above setting in the lifted tensor space, as discussed in Sect. 3, we obtain the following minimization problem as convex relaxation of the convolutional Lasso approach

minCC1,2s.t.u=K^C,

where ·1,2 takes the 1-norm and 2-norm of C in coefficient and atom direction, respectively. Now while a main feature of the original model was that the number of image atoms was fixed, this is no longer the case in the convex relaxation and would correspond to constraining the rank of the lifted variable C (defined as the minimal number of simple tensors needed to decompose C) to be below a fixed number. As convex surrogate, we add an additional penalization of the nuclear norm of C in the above objective functional (here we refer to the nuclear norm of C in the tensor product space which, in the discretization of our setting, coincides with the classical nuclear norm of a matrix reshaping of C). Allowing also for additional linear constraints on C via a linear operator M^, we arrive at the following convex norm that measures the decomposability of a given image u into a sparse combination of atoms as

Nν(u)=minCνC1,2+(1-ν)Cs.t.u=K^C,M^C=0.

Interestingly, this provides a convex model for learning image atoms, which for simple images admitting a sparse representation seems quite effective. In addition, this can in principle also be used as a prior for image reconstruction tasks in the context of inverse problems via solving for example

minuλ2Au-u022+Nν(u),

with u0 given some corrupted data, A a forward operator and λ>0 a parameter.

Fig. 1.

Fig. 1

Visualization of the atom-lifting approach for 1D images. The green (thick) lines in the atom matrix correspond to nonzero (active) atoms and are placed in the image at the corresponding positions

Both the original motivation for our model and its convex variant have many similarities with dictionary learning and patch-based methods. The next section strives to clarify similarities and difference and provides a rather interesting, different perspective on our model.

A Dictionary-Learning-/Patch-Based Methods’ Perspective

In classical dictionary-learning-based approaches, the aim is to represent a resorted matrix of image patches as a sparse combination of dictionary atoms. That is, with uRNM a vectorized version of an image and D=(D1,,Dl)TRl×nm a patch matrix containing l vectorized (typically overlapping) images patches of size nm, the goal is to obtain a decomposition D=cp, where cRl×k is a coefficient matrix and pRk×nm is a matrix of k dictionary atoms such that ci,j is the coefficient for the atom pj,· in the representation of the patch Di. In order to achieve a decomposition in this form, using only a sparse representation of dictionary atoms, a classical approach is to solve

minc,pλ2cp-D22+c1s.t.pC,

where C potentially puts additional constraints on the dictionary atoms, e.g., ensures that pj,·21 for all j.

A difficulty with such an approach is again the bilinear and hence non-convex nature of the optimization problem, leading to potentially many non-optimal stationary points and making the approach sensitive to initialization.

As a remedy, one strategy is to consider a convex variant (see, for instance, [6]). That is, rewriting the above minimization problem (and again using the ambiguity in the product cp to eliminate the L2 constraint) we arrive at the problem

minC:rank(C)kλ2C-D22+C1,2,

where C1,2=iCi,·2. A possible convexification is then given as

minCλ2C-D22+νC1,2+(1-ν)C, 2

where · is the nuclear norm of the matrix C.

A disadvantage of such an approach is that the selection of patches is a priori fixed and that the lifted matrix C has to approximate each patch. In the situation of overlapping patches, this means that individual rows of C have to represent different shifted versions of the same patch several times, which inherently contradicts the low-rank assumption.

It is now interesting to see our proposed approach in relation to these dictionary learning methods and the above-described disadvantage: Denote again by K^ the lifted version of the convolution operator, which in the discrete setting takes a lifted patch matrix as input and provides an image composed of overlapping patches as output. It is then easy to see that K^, the adjoint of K^, is in fact a patch selection operator and it holds that K^K^=I. Now using K^, the approach in (2) can be rewritten as

minCλ2C-K^u22+νC1,2+(1-ν)C, 3

where we remember that u is the original image. Considering the problem of finding an optimal patch-based representation of an image as the problem of inverting K^, we can see that the previous approach in fact first applies a right inverse of K^ and then decomposes the data. Taking this perspective, however, it seems much more natural to consider instead an adjoint formulation as

minCλ2K^C-u22+νC1,2+(1-ν)C. 4

Indeed, this means that we do not fix the patch decomposition of the image a priori but rather allow the method itself to optimally select the position and size of patches. In particular, given a particular patch at an arbitrary location, this patch can be represented by using only one line of C and the other lines (corresponding to shifted versions) can be left empty. Figure 2 shows the resulting improvement by solving both of the above optimization problems for a particular test image, where the parameters are set such that the data error of both methods, i.e., K^C-u22, is approximately the same. As can be seen, solving (3), which we call patch denoising, does not yield meaningful dictionary atoms as the dictionary elements need to represent different, shifted version of the single patch that makes up the image. In contrast to that, solving (4), which we call patch reconstruction, allows to identify the underlying patch of the image and the corresponding patch matrix is indeed row sparse. In this context, we also refer to [26] which makes similar observations and differs between patch analysis (which is related to (3)) and patch synthesis, which is similar to (4); however, it does not consider a convolutional- but rather a matrix-based synthesis operator.

Fig. 2.

Fig. 2

Patch-based representation of test images. Left: original image, middle: nine most important patches for each method (top: patch denoising, bottom: patch reconstruction), right: section of the corresponding patch matrices

The Variational Model

Now while the proposed model can, in principle, describe any kind of image, in particular its convex relaxation seems best suited for situations where the image can be described by only a few, repeating atoms, as would be, for instance, the case with texture images. In particular, since we do not include rotations in the model, there are many simple situations, such as u being the characteristic function of a disk, which would in fact require an infinite number of atoms. To overcome this, it seems beneficial to include an additional term which is rotationally invariant and takes care of piecewise smooth parts of the image. Denoting R to be any regularization functional for piecewise smooth data and taking the infimal convolution of this functional with our atom-based norm, we then arrive at the convex model

minu,vλ2Au-u022+μ1R(u-v)+μ2Nν(v),

for learning image data and image atom kernels from potentially noisy or incomplete measurements.

A particular example of this model can be given when choosing R=TV, the total variation function [51]. In this setting, a natural choice for the operator M in the definition of Nν is to take the pointwise mean of the lifted variable in atom direction, which corresponds to constraining the learned atoms to have zero mean and enforces some kind of orthogonality between the cartoon and the texture part in the spirit of [43]. In our numerical experiments, in order to obtain an even richer model for the cartoon part, we use the second-order total generalized variation function (TGVα2) [7, 9] as cartoon prior and, in the spirit of a dual TGVα2 norm, use M to constrain the 0th and 1st moments of the atoms to be zero.

We also remark that, as shown in the analysis part of Sect. 3, while an 1/2-type norm on the lifted variables indeed arises as convex relaxation of the convolutional Lasso approach, the addition of the nuclear norm is to some extent arbitrary and in fact, in the context of compressed sensing, it is known that a summation of two norms is suboptimal for a joint penalization of sparsity and rank [44]. (We refer to Remark 5 for an extended discussion.) Indeed, our numerical experiments also indicate that the performance of our method is to some extent limited by a suboptimal relaxation of a joint sparsity and rank penalization. To account for that, we also tested with semi-convex potential functions (instead of the identity) for a penalization of the singular values in the nuclear norm. Since this provided a significant improvement in some situations, we also include this more general setting in our analysis and the numerical results.

The Model in a Continuous Setting

The goal of this section is to define and analyze the model introduced in Sect. 2 in a continuous setting. To this aim, we regard images as functions in the Lebesgue space Lq(Ω) with a bounded Lipschitz domain ΩRd, dN and 1<q2. Image atoms are regarded as functions in Ls(Σ), with ΣRd a second (smaller) bounded Lipschitz domain (either a circle or a square around the origin) and s[q,] an exponent that is a priori allowed to take any value in [q,], but will be further restricted below. We also refer to “Appendix” for further notation and results, in particular in the context of tensor product spaces, that will be used in this section.

As described in Sect. 2, the main idea is to synthesize an image via the convolution of a small number of atoms with corresponding coefficient images, where we think of the latter as a sum of delta peaks that define the locations where atoms are placed. For this reason, and also due to compactness properties, the coefficient images are modeled as Radon measures in the space M(ΩΣ), the dual of C0(ΩΣ), where we denote

ΩΣ:={xRdthere existsyΣs.t.x-yΩ},

i.e., the extension of Ω by Σ. The motivation for using this extension of Ω is to allow atoms also to be placed arbitrarily close to be boundary (see Fig. 1). We will further use the notation r=r/(r-1) for an exponent r(1,) and denote duality pairings between Lr and Lr and between M(Ω) and C0(Ω) by (·,·), while other duality pairings (e.g., between tensor spaces) are denoted by ·,·. By ·r,·M, we denote standard Lr and Radon norms whenever the domain of definition is clear from the context, otherwise we write ·Lr(ΩΣ),·M(ΩΣ), etc.

The Convolutional Lasso Prior

As a first step, we deal with the convolution operator that synthesizes an image from a pair of a coefficient image and an image atom in function space. Formally, we aim to define K:M(ΩΣ)×Ls(Σ)Lq(Ω) as

K(c,p)(x):=ΩΣp(x-y)dc(y),

where we extend p by zero outside of Σ. An issue with this definition is that, in general, p is only defined Lebesgue almost everywhere and so we have to give a rigorous meaning to the integration of p with respect to an arbitrary Radon measure. To this aim, we define the convolution operator via duality (see [52]). For cM(ΩΣ), pLs(Σ) we define by Kc,p the functional on C(Ω¯) as dense subset of Lq(Ω) as

Kc,p(h):=RdRdh~(z+y)p~(z)dzdc~(y),

where g~ always denotes the zero extension of the function or measure g outside their domain of definition. Now we can estimate with Θ>0

Kc,p(h)RdRd|h~(z+y)||p~(z)|dzd|c~|(y)h~Lq(Rd)p~Lq(Rd)c~M(Rd)ΘpscMhq.

Hence, by density we can uniquely extend Kc,p to a functional in Lq(Ω)Lq(Ω) and we denote by [Kc,p] the associated function in Lq(Ω). Now in case p is integrable w.r.t. c and xΩΣp(x-y)dc(y)Lq(Ω), we get by a change of variables and Fubini’s theorem that for any hC(Ω¯)

Kc,p(h)=RdRdh~(x)p~(x-y)dxdc~(y)=Ωh(x)ΩΣp(x-y)dc(y)dx.

Hence we get that in this case, [Kc,p](x)=ΩΣp(x-y)dc(y) and defining K:M(ΩΣ)×Ls(Ω)Lq(Ω) as

K(c,p):=[Kc,p]

we get that K(cp) coincides with the convolution of c and p whenever the latter is well defined. Note that K is bilinear and, as the previous estimate shows, there exists Θ>0 such that K(c,p)qΘcMps. Hence, KB(M(ΩΣ)×Ls(Σ),Lq(Ω)), the space of bounded bilinear operators (see “Appendix”).

Using the bilinear operator K and denoting by kN a fixed number of atoms, we now define the convolutional Lasso prior for an exponent s[q,] and for uLq(Ω) as

Ncl,s(u)=inf(ci)i=1kM(ΩΣ)(pi)i=1kLs(Σ)i=1kciMs.t.pis1,Mpi=0i=1,,k,u=i=1kK(ci,pi)inΩ, 5

and set Ncl,s(u)= if the constraint set above is empty. Here, we include an operator ML(Ls(Σ),Rm) in our model that optionally allows to enforce additional constraints on the atoms. A simple example of M that we have in mind is an averaging operator, i.e., Mp:=|Σ|-1Σp(x)dx; hence, the constraint that Mp=0 corresponds to a zero-mean constraint.

A Convex Relaxation

Our goal is now to obtain a convex relaxation of the convolutional Lasso prior. To this aim, we introduce by K^ and M^:=IM the lifting of the bilinear operator K and the linear operators I and M, with IL(M(ΩΣ),M(ΩΣ)) being the identity, to the projective tensor product space Xs:=M(ΩΣ)πLs(Σ) (see “Appendix”). In this space, we consider a reformulation as

Ncl,s(u)=infCXsCπ,k,Ms.t.u=K^CinΩ, 6

where

Cπ,k,M:=infi=1kciMpisC=i=1kcipiwithMpi=0fori=1,,k.

Note that this reformulation is indeed equivalent. Next we aim to derive the convex relaxation of Ncl,s in this tensor product space. To this aim, we use the fact that for a general function g:XsR¯, its convex, lower semi-continuous relaxation can be computed as the biconjugate g:XsR¯, where g(x)=supxXsx,x-g(x) and g(x)=supxXsx,x-g(x).

First we consider a relaxation of the functional ·π,k,M. In this context, we need an additional assumption on the constraint set ker(M), which is satisfied, for instance, if s=2 or for M=0, in particular will be fulfilled by the concrete setting we use later on.

Lemma 1

Assume that there exists a continuous, linear, norm-one projection onto ker(M). Then, the convex, lower semi-continuous relaxation of ·π,k,M:XsR¯ is given as

CCπ+Iker(M^)(C),

where Iker(M^)(C)=0 if M^C=0 and Iker(M^)(C)= else, and ·π is the projective norm on Xs given as

Cπ=infi=1ciMpisC=i=1cipi.

Proof

Our goal is to compute the biconjugate of ·π,k,M. First we note that

·π+Iker(M^)·π,k,M·π,1,M,

and consequently

·π+Iker(M^)·π,k,M·π,1,M.

Hence, the assertion follows if we show that ·π,1,M·π+Iker(M^). To this aim, we first show that ·π,M·π+Iker(M^), where we set Cπ,M=Cπ,,M. Let CXs be such that M^C=0 and take (ci)i, (pi)i be such that Cπi=1ciMpis-ϵ for some ϵ>0 and C=i=1cipi. Then, with P the projection to ker(M) as in the assumption, we get that

0=M^C=i=1ciMpi=i=1ciM(pi-Ppi).

Now remember that, according to [53, Theorem 2.9], we have (M(ΩΣ)πRm)=B(M(ΩΣ)×Rm) with the norm BB:=sup{|B(x,y)|xM1,y21}. Taking arbitrary ψM(Ω), ϕ(Rm), we get that B:(c,p)ψ(c)ϕ(p)B(M(ΩΣ)×Rm) and hence

0=B^(M^C)=i=1B^(ciM(pi-Ppi))=i=1ψ(ci)ϕ(M(pi-Ppi))=ϕi=1ψ(ci)M(pi-Ppi)=ϕMi=1ψ(ci)(pi-Ppi)

and since ϕ was arbitrary, it follows that M(i=1ψ(ci)(pi-Ppi))=0. Finally, by closedness of Rg(I-P) we get that i=1ψ(ci)(pi-Ppi)=0 and, since M(ΩΣ) has the approximation property (see [24, Section VIII.3]), from [53, Proposition 4.6], it follows that i=1ci(pi-Ppi)=0, hence C=i=1ciPpi and by assumption i=1ciMpisi=1ciMPpis. Consequently, ·π+Iker(M^)·π,M-ϵ, and since ϵ was arbitrary, the claimed inequality follows.

Now we show that ·π,M·π,1,M, from which the claimed assertion follows by the previous estimate and taking the convex conjugate on both sides. To this aim, take (Cn)nXs such that

·π,M(B)=supCXsB,C-Cπ,M=limnB,Cn-Cnπ,M

and take (cin)i, (pin)i such that Mpin=0 for all ni and

Cn=i=1cinpinandi=1cinMpinsCnπ,M+1/n.

We then get

·π,M(B)=limnB,Cn-Cnπ,MlimnB,i=1cinpin-i=1cinMpins+1/n=limnlimmB,i=1mcinpin-i=1mcinMpins+1/nlimnsupmsup(ci)i=1m,(pi)i=1mMpi=0B,i=1mcipi-i=1mciMpis+1/n=supmsup(ci)i=1m,(pi)i=1mMpi=0i=1mB(ci,pi)-ciMpis=supmmsupc,pMp=0B(c,p)-cMps.

Now it can be easily seen that the last expression equals 0 in case |B(c,p)|cMps for all cp with Mp=0. In the other case, we can pick c^,p^ with Mp^=0 and Θ>1 such that B(c^,p^)>Θc^Mp^s and get for any λ>0 that

supc,pMp=0B(c,p)-cMpsB(λc^,p^)-λc^Mp^sλ(Θ-1)asλ.

Hence, the last line of the above equation is either 0 or infinity and equals

supc,pMp=0B(c,p)-cMps=supCB,C-Cπ,1,M=·π,1,M.

This result suggests that the convex, lower semi-continuous relaxation of (6) will be obtained by replacing ·π,k,M with the projective tensor norm ·π on Xs and the constraint M^C=0. Our approach to show this will in particular require us to ensure lower semi-continuity of this candidate for the relaxation, which in turn requires us to ensure a compactness property of the sublevel sets of the energy appearing in (6) and closedness of the constraints. To this aim, we consider a weak* topology on Xs and rely on a duality result for tensor product spaces (see “Appendix”), which states that, under some conditions, the projective tensor product Xs=M(ΩΣ)πLs(Σ) can be identified with the dual of the so-called injective tensor product C0(ΩΣ)iLs(Σ). The weak* topology on Xs is then induced by pointwise convergence in Xs as dual space of C0(ΩΣ)iLs(Σ). Different from what one would expect from the individual spaces, however, this can only be ensured for the case s< which excludes the space L(Σ) for the image atoms. This restriction will also be required later on in order to show well-posedness of a resulting regularization approach for inverse problems, and hence, we will henceforth always consider the case that s[q,) and use the identification (C0(ΩΣ)iLs(Σ))=^M(ΩΣ)πLs(Σ) (see “Appendix”).

As a first step toward the final relaxation result, and also as a crucial ingredient for well-posedness results below, we show weak* continuity of the operator K^ on the space Xs.

Lemma 2

Let s[q,). Then, the operator K^:XsLq(Ω) is continuous w.r.t. weak* convergence in Xs and weak convergence in Lq(Ω). Also, for any ϕC0(Ω)Lq(Ω) it follows that K^ϕC0(ΩΣ)iLs(Σ) and, via the identification C0(ΩΣ)iLs(Σ)=^C0(ΩΣ,Ls(Σ)) (see “Appendix”), can be given as Kϕ(t)=[xϕ(t+x)].

Proof

First we note that for any ψCc(Ω), the function ψ^ defined as ψ^(t)=[xψ(t+x)] (where we extend ψ by 0 to Rd) is contained in Cc(ΩΣ,Ls(Σ)). Indeed, continuity follows since by uniform continuity for any ϵ>0 there exists a δ>0 such that for any rRd with |r|δ and tΩΣ with t+rΩΣ

ψ^(t+r)-ψ^(t)s=Σ|ψ(t+r+x)-ψ(t+x)|sdx1/sϵ|Σ|1/s.

Also, taking KΩ to be the support of ψ we get, with KΣ the extension of K by Σ, for any tΩΣ\KΣ that ψ(t+x)=0 for any xΣ and hence ψ^=0 in Ls(Σ) and ψ^Cc(ΩΣ,Ls(Σ)).

Now for ϕC0(ΩΣ), taking (ϕn)nCc(ΩΣ) to be a sequence converging to ϕ, we get that

ϕ^-ϕn^C0(ΩΣ,Ls(Σ))=suptΣ|ϕ(t+x)-ϕn(t+x)|sdx1/sϕ-ϕn|Σ|1/s0.

Thus, ϕ^ can be approximated by a sequence of compactly supported functions and hence ϕ^C0(ΩΣ,Ls(Σ)). Fixing now u=cpXs, we note that for any ψC0(ΩΣ,Ls(Σ)), the function tΣψ(t)(x)p(x)dx is continuous; hence, we can define the linear functional

Fu(ψ):=ΩΣΣψ(t)(x)p(x)dxdc(t)

and get that Fu is continuous on C0(ΩΣ,Ls(Σ)). Then, since ϕ^C0(ΩΣ)iLs(Σ) it can be approximated by a sequence of simple tensors (i=1mnxinyin)n in the injective norm, which coincides with the norm in C0(ΩΣ,Ls(Σ)) and, using Lemma 22 in “Appendix”, we get

u,ϕ^=limnu,i=1mnxinyin=limni=1mn(c,xin)(p,yin)=limni=1mnΩΣΣxin(a)yin(b)p(b)dbdc(a)=limnFui=1mnxinyinn=Fu(ϕ^)=ΩΣΣϕ(a+b)p(b)dbdc(a)=(K(c,p),ϕ)=(K^u,ϕ)

Now by density of simple tensors in the projective tensor product, it follows that Kϕ=ϕ^. In order to show the continuity assertion, take (un)n weak * converging to some uXs. Then by the previous assertion we get for any ϕCc(ΩΣ) that

(Kun,ϕ)=un,Kϕu,Kϕ=(Ku,ϕ),

hence (Kun)n weakly converges to Ku on a dense subset of Ls(Ω) which, together with boundedness of (Kun)n, implies weak convergence.

We will also need weak*-to-weak* continuity of M^, which is shown in the following lemma in a slightly more general situation than needed.

Lemma 3

Take s[q,) and assume that ML(Ls(Σ),Z) with Z a reflexive space and define M^:=IπML(Xs,M(ΩΣ)πZ), where I is the identity on M(ΩΣ). Then M^ is continuous w.r.t. weak* convergence in both spaces.

Proof

Take (un)nX weak* converging to some uX and write un=limki=1kxinyin. We note that, since Z is reflexive, it satisfies in particular the Radon Nikodým property (see “Appendix”) and hence C(ΩΣ)iZ can be regarded as predual of M(ΩΣ)πZ and we test with ϕψC(ΩΣ)iZ. Then

M^un,ϕψ=limki=1k(xin,ϕ)(Myin,ψ)=limki=1k(xin,ϕ)(yin,Mψ)=un,ϕMψu,ϕMψ=M^u,ϕψ,

where the convergence follows since MψLs(Σ), the predual of Ls(Σ), and hence, ϕMψC(ΩΣ)iLq(Σ).

Now we can obtain the convex, lower semi-continuous relaxation of Ncl,s.

Lemma 4

With the assumptions of Lemma 1 and s[q,), the convex, l.s.c. relaxation of Ncl,s is given as

Ncl,s(u)=infCXsCπs.t.u=K^CinΩ,M^C=0. 7

Proof

Again we first compute the convex conjugate:

Ncl,s(v)=supu(u,v)-Ncl,s(u)=supCXs(K^C,v)-Cπ,k,M=supCXsC,K^v-Cπ,k,M=K^vπ,k,M.

Similarly, we see that N(v)=·π+Iker(M^)(K^v), where

N(u)=infCXsCπs.t.u=K^CinΩ,M^C=0.

Now in the proof of Lemma 1, we have in particularly shown that ·π+Iker(M^)=·π,k,M; hence, if we show that N is convex and lower semi-continuous, the assertion follows from N(u)=N(u)=Ncl,s(u). To this aim, take a sequence (un)n in Lq(Ω) converging weakly to some u for which, without loss of generality, we assume that

limnN(un)=lim infnN(un)<.

Now with (Cn)n such that CnπN(un)+n-1, M^Cn=0 and un=K^Cn we get that (Cnπ)n is bounded. Since Xs admits a separable predual (see “Appendix”), this implies that (Cn)n admits a subsequence (Cni)i weak* converging to some C. By weak* continuity of K^ and M^ we get that u=K^C and M^C=0, respectively, and by weak* lower semi-continuity of ·π, it follows that

N(u)Cπlim infiCniπlim infiN(uni)+ni-1limiN(uni)=lim infN(un),

which concludes the proof.

This relaxation results suggest to use N(·) as in Equation (7) as convex texture prior in the continuous setting. There is, however, an issue with that, namely that such a functional cannot be expected to penalize the number of used atoms at all. Indeed, taking some C=i=1lcipi and assume that Cπ=i=1lciMpis. Now note that we can split any summand ci0pi0 as follows: Write ci0=ci01+ci02 with disjoint support such that ci0M=ci01M+ci02M. Then, we can rewrite

ci0pi0=ci01pi0+ci02pi0

which gives a different representation of C by increasing the number of atoms without changing the cost of the projective norm. Hence, in order to maintain the original motivation of the approach to enforce a limited number of atoms, we need to add an additional penalty on C for the lifted texture prior.

Adding a Rank Penalization

Considering the discrete setting and the representation of the tensor C as a matrix, the number of used atoms corresponds to the rank of the matrix, for which it is well known that the nuclear norm constitutes a convex relaxation [25]. This construction can in principle also be transferred to general tensor products of Banach spaces via the identification (see Proposition 23 in “Appendix”)

C=i=1xiyiXπYTCL(X,Y)whereTC(x)=i=1xi(x)yi

and the norm

Cnuc=TCnuc=infi=1σiTC(x)=i=1σixi(x)yis.t.xiX1,yiY1.

It is important to realize, however, that the nuclear norm of operators depends on the underlying spaces and in fact coincides with the projective norm in the tensor product space (see Proposition 23). Hence, adding the nuclear norm in Xs does not change anything, and more generally, whenever one of the underlying spaces is equipped with an L1-type norm, we cannot expect a rank-penalizing effect (consider the example of the previous section).

On the other hand, going back to the nuclear norm of a matrix in the discrete setting, we see that it relies on orthogonality and an inner product structure and that the underlying norm is the Euclidean inner product norm. Hence, an appropriate generalization of a rank-penalizing nuclear norm needs to be built on a Hilbert space setting. Indeed, it is easy to see that any operator between Banach spaces with a finite nuclear norm is compact, and in particular for any TL(H1,H2) with finite nuclear norm and H1,H1 Hilbert spaces, there are orthonormal systems (xi)i, (yi)i and uniquely defined singular values (σi)i such that

Tx=i=1σi(x,xi)yiand in additionTnuc=i=1σi.

Motivated by this, we aim to define an L2-based nuclear norm as extended real-valued function on Xs as convex surrogate of a rank penalization. To this aim, we consider from now on the case s=2. Remember that the tensor product XY of two spaces XY is defined as the vector space spanned by linear mappings xy on the space of bilinear forms on X×Y, which are given as xy(B)=B(x,y). Now since L2(ΩΣ) can be regarded as subspace of M(ΩΣ), also L2(ΩΣ)L2(Σ) can be regarded as subspace of M(ΩΣ)L2(Σ). Further, defining for CL2(ΩΣ)L2(Σ),

Cπ,L2L2:=infi=1nxi2yi2C=i=1nxiyi,xiL2(ΩΣ),yiL2(Σ),nN,

we get that ·πΘ·π,L2L2 for a constant Θ>0, and hence, also the completion L2(ΩΣ)πL2(Σ) can be regarded as subspace of M(ΩΣ)πL2(Σ). Further, L2(ΩΣ)πL2(Σ) can be identified with the space of nuclear operators N(L2(ΩΣ),L2(Σ)) as above such that

Cπ,L2L2=i=1σi(TC)with(σi(TC))ithe singular values ofTC.

Using this, and introducing a potential function ϕ:[0,)[0,), we define for CX2,

Cnuc,ϕ:=i=1ϕ(σi(TC))ifCL2(ΩΣ)πL2(Σ),else. 8

We will mostly focus on the case ϕ(x)=x, in which ·nuc,ϕ coincides with an extension of the nuclear norm and can be interpreted as convex relaxation of the rank. However, since we observed a significant improvement in some cases in practice by choosing ϕ to be a semi-convex potential function, i.e., a function such that ϕ+τ|·|2 is convex for τ sufficiently small, we include the more general situation in the theory.

Remark 5

(Sparsity and low-rank) It is important to note that Cnuc,ϕ< restricts C to be contained in the smoother space L2(ΩΣ)πL2(Σ) and in particular does not allow for simple tensors i=1kcipi with the ci’s being composed of delta peaks. Thus, we observe some inconsistency of a rank penalization via the nuclear norm and a pointwise sparsity penalty, which is only visible in the continuous setting via regularity of functions. Nevertheless, such an inconsistency has already been observed in the finite-dimensional setting in the context of compressed sensing for low-rank AND sparse matrices, manifested via a poor performance of the sum of a nuclear norm and 1 norm for exact recovery (see [44]). As a result, there exist many studies on improved, convex priors for the recovery of low-rank and sparse matrices, see, for instance, [19, 49, 50]. While such improved priors can be expected to be highly beneficial for our setting, the question does not seem to be solved in such a way that can be readily applied in our setting.

One direct way to circumvent this inconsistency would be to include an additional smoothing operator for C as follows: Take SL(M(ΩΣ),M(ΩΣ)) such that range(S)L2(ΩΣ) to be a weak*-to-weak* continuous linear operator and define the operator S^:X2X2 as S^:=SIL2, where IL2 denotes the identity in L2(Σ). Then one could alternatively also use SCnuc as alternative for penalizing the rank of C while still allowing C to be a general measure. Indeed, in the discrete setting, by choosing S also to be injective, we even obtain the equality rank(SC)=rank(C) (where we interpret C and SC as matrices). In practice, however, we did not observe an improvement by including such a smoothing and thus do not include S^ in our model.

Remark 6

(Structured matrix completion) We would also like to highlight the structured-matrix-completion viewpoint on the difficulty of low-rank and sparse recovery. In this context, the work [29] discusses conditions for global optimality of solutions to the non-convex matrix decomposition problem

minURN×k,VRn×k(Y,UVT)+i=1kθ(Ui,Vi) 9

where measures the loss w.r.t. some given data and θ(·,·) allows to enforce structural assumptions on the factors UV. For this problem, [29] shows that rank-deficient local solutions are global solutions to a convex minorant obtained allowing k to become arbitrary large (formally, choose k=) and, consequently, also globally optimal for the original problem. Choosing (Y,UVT)=0 if Y=K^(UVT) and infinity else, where K^ is a discrete version of the lifted convolution operator, and θ(Ui,Vi)=Ui2Vi1, we see that (for simplicity ignoring the optional additional atom constraints) the convolutional Lasso prior Ncl,s of Equation (5) can be regarded as special case of (9). Viewed in this way, our above results show that the convex minorant obtained with k= is in fact the convex relaxation, i.e., the largest possible convex minorant, of the entire problem including the data- and the convolution term. But again, as discussed in Sect. 3.2, we cannot expect a rank-penalizing effect of the convex relaxation obtained in this way. An alternative, as mentioned in [29], would be to choose θ(Ui,Vi)=Ui22+Vi22+γVi1. Indeed, while in this situation it is not clear if the convex minorant with r= is the convex relaxation, the former still provides a convex energy from which one would expect a rank-penalizing effect. Thus, this would potentially be an alternative approach that could be used in our context with the advantage of avoiding the lifting but the difficulty of finding rank-deficient local minima of a non-convex energy.

Well-Posedness and a Cartoon–Texture Model

Including Cnuc,ϕ for CX2 as additional penalty in our model, we ultimately arrive at the following variational texture prior in the tensor product space X2:=M(ΩΣ)πL2(Σ), which is convex whenever ϕ is convex, in particular for ϕ(x)=|x|.

Nν(v)=infCX2νCπ+(1-ν)Cnuc,ϕs.t.M^C=0,v=K^CinΩ, 10

where ν(0,1) is a parameter balancing the sparsity and the rank penalty.

In order to employ Nν as a regularization term in an inverse problems setting, we need to obtain some lower semi-continuity and coercivity properties. As a first step, the following lemma, which is partially inspired by techniques used in [10, Lemma 3.2], shows that, under some weak conditions on ϕ, ·nuc,ϕ defines a weak* lower semi-continuous function on X2.

Lemma 7

Assume that ϕ:[0,)[0,) is lower semi-continuous, non-decreasing, that

  • ϕ(x) for x and that

  • there exist ϵ,η>0 such that ϕ(x)ηx for 0x<ϵ.

Then, the functional ·nuc,ϕ:X2R¯ defined as in (8) is lower semi-continuous w.r.t. weak* convergence in X2.

Proof

Take (Cn)nX2 weak* converging to some CX2 for which, w.l.o.g., we assume that

lim infnCnnuc,ϕ=limnCnnuc,ϕ.

We only need to consider the case that (Cnnuc,ϕ)n is bounded, otherwise the assertion follows trivially. Hence, we can write TCn(x)=i=1σin(xin,x)yin such that Cnnuc,ϕ=i=1ϕ(σin). Now we aim to bound (Cnπ,L2L2)n in terms of (Cnnuc,ϕ)n. To this aim, first note that the assumptions in ϕ imply that for any ϵ>0 there is η>0 such that ϕ(x)ηx for all x<ϵ. Also, ϕ(σin)Cnnuc,ϕ for any in and via a direct contradiction argument it follows that there exists ϵ^>0 such that σin<ϵ^ for all in. Picking η^ such that ϕ(x)η^x for all x<ϵ^, we obtain

Cnnuc,ϕ=i=1ϕ(σin)η^i=1σin=η^Cnπ,L2L2,

hence (Cn)n is also bounded as a sequence in L2(ΩΣ)πL2(Σ) and admits a (non-relabeled) subsequence weak* converging to some C^L2(ΩΣ)πL2(Σ), with L2(ΩΣ)iL2(Σ) being the predual space. By the inclusion C0(ΩΣ)iL2(Σ)L2(ΩΣ)iL2(Σ) and uniqueness of the weak* limit, we finally get C^=CL2(ΩΣ)iL2(Σ) and can write TCx=i=1σi(xi,x)yi and Cnuc,ϕ=i=1σi. By lower semi-continuity of ·nuc, this would suffice to conclude in the case ϕ(x)=x. For the more general case, we need to show a pointwise lim-inf property of the singular values. To this aim, note that by the Courant–Fischer min–max principle (see, for instance, [12, Problem 37]) for any compact operator TL(H1,H2) with H1,H2 Hilbert spaces and λk the k-th singular value of T sorted in descending order, we have

λk=supdim(V)=kminxV,x=1TxH2.

Now consider kN fixed. For any subspace V with dim(V)=k, the minimum in the equation above is achieved, and hence, we can denote xV to be a minimizer and define FV(T):=TxVH2 such that λk=supdim(V)=kFV(T). Since weak* convergence of a sequence (Tn) to T in L2(ΩΣ)πL2(Σ) implies in particular Tn(x)T(x) for all x, by lower semi-continuity of the norm ·H2 it follows that FV is lower semi-continuous with respect to weak* convergence. Hence, this is also true for the function Tλk(T) by being the pointwise supremum of a family of lower semi-continuous functional. Consequently, for the sequence (TCn)n it follows that σklim infnσkn. Finally, by monotonicity and lower semi-continuity of ϕ and Fatou’s lemma we conclude

TCnuc,ϕ=kϕ(σk)kϕ(lim infnσkn)=klim infnϕ(σkn)lim infnkϕ(σkn)lim infnTCnnuc,ϕ.

The lemma below now establishes the main properties of Nν that in particular allow to employ it as regularization term in an inverse problems setting.

Lemma 8

The infimum in the definition of (10) is attained and Nν:Lq(Ω)R¯ is convex and lower semi-continuous. Further, any sequence (vn)n such that Nν(vn) is bounded admits a subsequence converging weakly in Lq(Ω).

Proof

The proof is quite standard, but we provide it for the readers convenience. Take (vn)n to be a sequence such that (Nν(vn))n is bounded. Then, we can pick a sequence (Cn)n in X2 such that M^Cn=0, vn=K^Cn and

νCnπνCnπ+(1-ν)Cnnuc,ϕNν(vn)+n-1

This implies that (Cn)n admits a subsequence (Cni)i weak* converging to some CX2. Now by continuity of M^ and K^ we have that M^C=0 and that (vni)i=(K^Cni)i is bounded. Hence also (vni)i admits a (non-relabeled) subsequence converging weakly to some v=K^C. This already shows the last assertion. In order to show lower semi-continuity, assume that (vn)n converges to some v and, without loss of generality, that

lim infnNν(vn)=limnNν(vn).

Now this is a particular case of the argumentation above; hence, we can deduce with (Cn)n as above that

Nν(v)νCπ+(1-ν)Cnuc,ϕlim infiνCniπ+(1-ν)Cninuc,ϕlim infiNν(vni)+ni-1=lim infnNν(vn)

which implies lower semi-continuity. Finally, specializing even more to the case that (vn)n is the constant sequence (v)n, also the claimed existence follows.

In order to model a large class of natural images and to keep the number of atoms needed in the above texture prior low, we combine it with a second part that models cartoon-like images. Doing so, we arrive at the following model

minu,vLq(Ω)λD(Au,f0)+s1(μ)R(u-v)+s2(μ)Nν(v) P

where we assume R to be a functional that models cartoon images, D(·,f0):YR¯ is a given data discrepancy, AL(Lq(Ω),Y) a forward model and we define the parameter balancing function

s1(μ)=1-min(μ,0),s2(μ)=1+max(μ,0). 11

Now we get the following general existence result.

Proposition 9

Assume that R:Lq(Ω)R¯ is convex, lower semi-continuous and that there exists a finite-dimensional subspace ULq(Ω) such that for any uLq(Ω), vU, wU,

vqΘR(v),andR(u+w)=R(u)

with Θ>0 and U denoting the complement of U in Lq(Ω). Further assume that AL(Lq(Ω),Y), D(·,f0) is convex, lower semi-continuous and coercive on the finite-dimensional space A(U) in the sense that for any two sequences (un1)n, (un2)n such that (un1)nU, (un2)n is bounded and (D(A(un1+un2),f0))n is bounded, also (Aun1q)n is bounded. Then, there exists a solution to (P).

Remark 10

Note that, for instance, in case D satisfies a triangle inequality, the sequence (un2) in the coercivity assumption is not needed, i.e., can be chosen to be zero.

Proof

The proof is rather standard, and we provide only a short sketch. Take ((un,vn))n a minimizing sequence for (P). From Lemma 8, we get that (vn)n admits a (non-relabeled) weakly convergent subsequence. Now we split un=un1+un2U+U and vn=vn1+vn2U+U and by assumption get that un2-vn2q is bounded. But since (vnq)n is bounded, so is (vn2q)n and consequently also (un2q)n. Now we split again un1=un1,1+un1,2ker(A)U+(ker(A)U), where the latter denotes the complement of (ker(A)U)inU, and note that also (un1,2+un2,vn) is a minimizing sequence for (P). Hence, it remains to show that (un1,2)n is bounded in order to get a bounded minimizing sequence. To this aim, we note that (un1,2)n(ker(A)U)U and that A is injective on this finite-dimensional space. Hence, un1,2qΘ~Aun1,2q for some Θ~>0, and by the coercivity assumption on the data term we finally get that (un1,2q)n is bounded. Hence, also (un1,2+un2)n admits a weakly convergent subsequence in Lq(Ω) and by continuity of A as well as lower semi-continuity of all involved functionals existence of a solution follows.

Remark 11

(Choice of regularization) A particular choice of regularization for R in (P) that we consider in this paper is R=TGVα2, with TGVα2 the second-order total generalized variation functional [9], qd/(d-1) and

Mp:=Σp(x)dx,Σp(x1,x2)x1dx,Σp(x1,x2)x2dx.

Since in this case [7, 11]

uqΘTGVα2(u)

with Θ>0 and for all uP1(Ω), the complement of the first-order polynomials, and TGVα2 is invariant on first-order polynomials, the result of Proposition 9 applies.

Remark 12

(Norm-type data terms) We also note that the result of Proposition 9 in particular applies to D(w,f0):=1rw-f0rr for any r[1,) or D(w,f0):=w-f, where we extend the norms by infinity to Lq(Ω) whenever necessary. Indeed, lower semi-continuity of these norms is immediate for both rq and r>q, and since the coercivity is only required on a finite-dimensional space, it also holds by equivalence of norms.

Remark 13

(Inpainting) At last we also remark that the assumptions of Proposition 9 also hold for an inpainting data term defined as

D(w,f0):=0ifw=f0a.e. onωΩelse,

whenever ω has non-empty interior. Indeed, lower semi-continuity follows from the fact that Lq convergent sequences admit pointwise convergent subsequences and the coercivity follows from finite dimensionality of U and the fact that ω has non-empty interior.

Remark 14

(Regularization in a general setting) We also note that Lemma 8 provides the basis for employing either Nν directly or its infimal convolution with a suitable cartoon prior as in Proposition 9 for the regularization of general (potentially nonlinear) inverse problems and with multiple data fidelities, see, for instance, [31, 32] for general results in that direction.

The Model in a Discrete Setting

This section deals with the discretization of the proposed model and its numerical solution. For the sake of brevity, we provide only the main steps and refer to the publicly available source code [16] for all details.

We define U=RN×M to be the space of discrete grayscale images, W=R(N+n-1)×(M+n-1) to be the space of coefficient images and Z=Rn×n to be the space of image atoms for which we assume n<min{N,M} and, for simplicity, only consider a square domain for the atoms. The tensor product of a coefficient image cW and a atom pZ is given as (cp)i,j,r,s=ci,jpr,s and the lifted tensor space is given as the four-dimensional space X=R(N+n-1)×(M+n-1)×n×n.

Texture Norm The forward operator K being the lifting of the convolution cp and mapping lifted matrices to the vectorized image space is then given as

(KC)i,j=r,s=1n,nCi+n-r,j+n-s,r,s

and we refer Fig. 1 for a visualization in the one-dimensional case. Note that by extending the first two dimensions of the tensor space to N+n-1, M+n-1 we allow to place an atom at any position where it still effects the image, also partially outside the image boundary.

Also we note that, in order to reduce dimensionality and accelerate the computation, we introduce a stride parameter ηN in practice which introduces a stride on the possible atom positions. That is, the lifted tensor space and forward operator are reduced in such a way that the grid of possible atom positions in the image is essentially {(ηi,ηj)i,jN,(ηi,ηj){1,,N}×{1,,M}}. This reduces the dimension of the tensor space by a factor η-2, while for η>1 it naturally does not allow for arbitrary atom positions anymore and for η=n it corresponds to only allowing non-overlapping atoms. In order to allow for atoms being placed next to each other, it is important to choose η to be a divisor of the atom-domain size n and we used n=15 and η=3 in all experiments of the paper. In order to avoid extensive indexing and case distinctions, however, we only consider the case η=1 here and refer to the source code [16] for the general case.

A straightforward computation shows that, in the discrete lifted tensor space, the projective norm corresponding to discrete ·1 and ·2 norms for the coefficient images and atoms, respectively, is given as a mixed 1-2 norm as

Cπ=C1,2=i,j=1N,Mr,s=1n,nCi,j,r,s2.

The nuclear norm for a potential ϕ on the other hand reduces to the evaluation of ϕ on the singular values of a matrix reshaping of the lifted tensors and is given as

Cnuc,ϕ=i=1nnϕ(σi),with(σi)ithe singular values ofB=[C(NM,nn)].

where [C(NM,nn)] denotes a reshaping of the tensor C to a matrix of dimensions NM×nn. For the potential function ϕ, we consider two choices: Mostly we are interested in ϕ(x)=x which yields a convex texture model and enforces sparsity of the singular values. A second choice we consider is ϕ:[0,)[0,) given as

ϕ(x)=x-ϵδx2x[0,12ϵ](1-δ)x+δ4ϵelse, 12

where δ<1, δ1 and ϵ>0, see Fig. 3. It is easy to see that ϕ fulfills the assumptions of Lemma 7 and that ϕ is semi-convex, i.e., ϕ+ρ|·|2 is convex for ρ>δϵ. While the results of Sect. 3 hold for this setting even without the semi-convexity assumption, we cannot in general expect to obtain an algorithm that provably delivers a globally optimal solution in the semi-convex (or generally non-convex) case. The reason for using a semi-convex potential rather than a arbitrary non-convex one is twofold: First, for a suitably small stepsize τ the proximal mapping

proxτ,ϕ(u^)=argminuu-u^222τ+ϕ(u)

Fig. 3: Visualization of the potential φ.

is well defined and hence proximal-point-type algorithms are applicable at least conceptually. Second, since we employ ϕ on the singular values of the lifted matrices C, it will be important for numerical feasibility of the algorithm that the corresponding proximal mapping on C can be reduced to a proximal mapping on the singular values. While this is not obvious for a general choice of ϕ, it is true (see Lemma 15) for semi-convex ϕ with suitable parameter choices.
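For reference, the potential of (12) can be evaluated as in the following small NumPy helper (our own sketch, not part of [16]):

    import numpy as np

    def phi(x, eps, delta):
        """Semi-convex potential of (12): x - eps*delta*x**2 on [0, 1/(2*eps)] and
        (1 - delta)*x + delta/(4*eps) beyond; continuous and increasing for delta < 1."""
        x = np.asarray(x, dtype=float)
        return np.where(x <= 1.0 / (2.0 * eps),
                        x - eps * delta * x ** 2,
                        (1.0 - delta) * x + delta / (4.0 * eps))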

Cartoon Prior As cartoon prior, we employ the second-order total generalized variation functional, which we define for fixed parameters (α₀, α₁) = (2, 1) and a discrete image u ∈ U as

\mathrm{TGV}_\alpha^2(u) = \min_{v \in U^2} \alpha_1 \|\nabla u - v\|_1 + \alpha_0 \|E v\|_1.

Here, ∇ and E denote discretized gradient and symmetrized Jacobian operators, respectively, and we refer to [8] and the source code [16] for details on the discretization of TGV_α². To ensure a certain orthogonality of the cartoon and texture parts, we further incorporate atom constraints via the operator M, which evaluates the 0th and 1st moments of the atoms and in the lifted setting yields

(MC)_{i,j} := \Big( \sum_{r,s=1}^{n,n} C_{i,j,r,s},\ \sum_{r,s=1}^{n,n} r\, C_{i,j,r,s},\ \sum_{r,s=1}^{n,n} s\, C_{i,j,r,s} \Big).
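A possible NumPy sketch of this moment operator (our own naming; the exact index convention for r and s is a discretization choice, here taken as 1, …, n as in the formula above) is:

    import numpy as np

    def moments(C):
        """Zeroth and first atom moments of a lifted tensor C of shape (., ., n, n);
        returns, per position (i, j), the three sums over (r, s) stacked along the
        last axis."""
        n = C.shape[2]
        r = np.arange(1, n + 1)[:, None]   # varies along the r-axis of C
        s = np.arange(1, n + 1)[None, :]   # varies along the s-axis of C
        m0 = C.sum(axis=(2, 3))
        m1 = (C * r).sum(axis=(2, 3))
        m2 = (C * s).sum(axis=(2, 3))
        return np.stack([m0, m1, m2], axis=-1)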

The discrete version of (P) is then given as

\min_{\substack{u \in U,\, C \in X \\ MC = 0}} \ \lambda D(Au, f_0) + s_1(\mu)\,\mathrm{TGV}_\alpha^2(u - KC) + s_2(\mu)\big[\nu \|C\|_{1,2} + (1-\nu)\|C\|_{\mathrm{nuc},\phi}\big], \qquad (DP)

where the parameter balancing functions s1,s2 are given as in (11) and the model depends on three parameters λ,μ,ν, with λ defining the trade-off between data and regularization, μ defining the trade-off between the cartoon and the texture parts and ν defining the trade-off between sparsity and low rank of the tensor C.

Numerical Solution For the numerical solution of (DP), we employ the primal–dual algorithm of [17]. Since the concrete form of the algorithm depends on whether the proximal mapping of the data term u ↦ D(Au, f₀) is explicit or not, in order to allow for a unified version as in Algorithm 1, we replace the data term D(Au, f₀) by

D_1(Au, f_0) + D_2(u, f_0),

where we assume the proximal mappings of v ↦ D_i(v, f₀) to be explicit and, depending on the concrete application, set either D₁ or D₂ to be the constant zero function.

Denoting by g^*(v) := \sup_w (v, w) − g(w) the convex conjugate of a function g, with (·,·) the standard inner product, i.e., the sum of all pointwise products of the entries of v and w, we reformulate (DP) as the saddle-point problem

\min_{u,v,C}\ \max_{(p,q,d,r,m)}\ G(u,v,C) + \big(E(u,v,C), (p,q,d,r,m)\big) - F^*(p,q,d,r,m).

Here, the dual variables (p, q, d, r, m) ∈ U² × U³ × A(U) × X × U³ are in the image spaces of the corresponding operators, I_S(z) = 0 if z ∈ S and I_S(z) = ∞ else, {‖·‖_∞ ≤ δ} := {z : ‖z‖_∞ ≤ δ} with ‖z‖_∞ = \sup_{i,j} \big(\sum_{s=1}^{l} (z^s_{i,j})^2\big)^{1/2} a pointwise infinity norm on z = (z¹, …, z^l) ∈ U^l. The operator E and the functional G are given as

E(u,v,C) = \big(\nabla(u - KC) - v,\ Ev,\ Au,\ C,\ MC\big), \qquad G(x) = G(u,v,C) = \lambda D_2(u, f_0) + s_2(\mu)(1-\nu)\|C\|_{\mathrm{nuc},\phi},

and F^*(y) = F^*(p,q,d,r,m) summarizes all the conjugate functionals as above. Applying the algorithm of [17] to this reformulation yields the numerical scheme stated in Algorithm 1.
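As an illustration of the overall structure only (not of the concrete Algorithm 1, whose operators, step sizes and proximal mappings are detailed in [16]), a generic primal–dual iteration for a saddle-point problem min_x max_y G(x) + (Ex, y) − F^*(y) can be sketched as follows; all callables are placeholders to be supplied by the user.

    def pdhg(E, Et, prox_G, prox_Fstar, x0, y0, tau, sigma, iters=1000):
        """Generic primal-dual (PDHG) iteration in the spirit of [17]. E/Et apply the
        forward operator and its adjoint, prox_G and prox_Fstar are the proximal
        mappings of G and F*; tau*sigma*||E||^2 <= 1 is assumed for convergence in
        the convex case."""
        x, y, x_bar = x0, y0, x0
        for _ in range(iters):
            y = prox_Fstar(y + sigma * E(x_bar), sigma)   # dual update
            x_new = prox_G(x - tau * Et(y), tau)          # primal update
            x_bar = 2 * x_new - x                         # extrapolation
            x = x_new
        return x, y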

Note that we either set D₁(·, f₀) ≡ 0, in which case the dual variable d is constantly 0 and line 9 of the algorithm can be skipped, or we set D₂(·, f₀) ≡ 0, in which case the proximal mapping in line 13 reduces to the identity. The concrete choice of D₁, D₂ and the corresponding proximal mappings will be given in the respective experimental sections. All other proximal mappings can be computed explicitly and reasonably fast: The mappings proj_{α₁} and proj_{α₀} can be computed as pointwise projections onto L^∞-balls (see, for instance, [8]), and the mapping prox_{σ,(s₂(μ)ν‖·‖_{1,2})^*} is a similar projection, given as

\mathrm{prox}_{\sigma,(s_2(\mu)\nu\|\cdot\|_{1,2})^*}(C)_{i,j,l,s} = C_{i,j,l,s} \Big/ \max\Big(1,\ \big(\textstyle\sum_{l',s'=1}^{n,n} C_{i,j,l',s'}^2\big)^{1/2} \big/ \big(s_2(\mu)\nu\big)\Big).
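In NumPy, this pointwise projection can be sketched as follows (alpha stands for s₂(μ)ν; the helper is ours, not part of [16]):

    import numpy as np

    def proj_dual_12(C, alpha):
        """Prox of the conjugate of alpha*||.||_{1,2}: each atom slice C[i, j, :, :]
        is projected onto the l2-ball of radius alpha."""
        norms = np.sqrt((C ** 2).sum(axis=(2, 3), keepdims=True))
        return C / np.maximum(1.0, norms / alpha)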

Most of the computational effort lies in the computation of prox_{τ, s₂(μ)(1−ν)‖·‖_{nuc,φ}}, which, as the following lemma shows, can be computed via an SVD and a proximal mapping on the singular values.

Lemma 15

Let φ: [0,∞) → [0,∞) be a differentiable and increasing function and let τ, ρ > 0 be such that x ↦ x²/(2τ) + ρφ(x) is convex on [0,∞). Then, the proximal mapping of ρ‖·‖_{nuc,φ} for parameter τ is given as

\mathrm{prox}_{\tau,\rho\|\cdot\|_{\mathrm{nuc},\phi}}(C) = \big[\big(U \operatorname{diag}\big((\mathrm{prox}_{\tau,\rho\phi}(\sigma_i))_i\big) V^T\big)_{(N,M,n,n)}\big],

where [C_{(NM,nn)}] = UΣV^T is the SVD of [C_{(NM,nn)}] and, for x₀ ≥ 0,

\mathrm{prox}_{\tau,\rho\phi}(x_0) = \operatorname{argmin}_x \frac{|x - x_0|^2}{2\tau} + \rho\phi(|x|).

In particular, in case ϕ(x)=x we have

\mathrm{prox}_{\tau,\rho\phi}(x_0) = \begin{cases} 0 & \text{if } 0 \le x_0 \le \tau\rho, \\ x_0 - \tau\rho & \text{else}, \end{cases}

and in case

\phi(x) = \begin{cases} x - \epsilon\delta x^2 & \text{if } x \in [0, \tfrac{1}{2\epsilon}], \\ (1-\delta)x + \tfrac{\delta}{4\epsilon} & \text{else}, \end{cases}

we have that x ↦ x²/(2τ) + ρφ(x) is convex whenever τ ≤ 1/(2εδρ), and in this case

\mathrm{prox}_{\tau,\rho\phi}(x_0) = \begin{cases} 0 & \text{if } 0 \le x_0 \le \tau\rho, \\ \dfrac{x_0 - \tau\rho}{1 - 2\epsilon\delta\tau\rho} & \text{if } \tau\rho < x_0 \le \tfrac{1}{2\epsilon} + \tau\rho(1-\delta), \\ x_0 - \tau\rho(1-\delta) & \text{if } \tfrac{1}{2\epsilon} + \tau\rho(1-\delta) < x_0. \end{cases}

Proof

At first, note that it suffices to consider ρ‖·‖_{nuc,φ} as a function on matrices and show the assertion without the reshaping operation. For any matrix B, we denote by B = U_B Σ_B V_B^T the SVD of B, where Σ_B = diag((σ_i^B)_i) contains the singular values sorted in non-increasing order; Σ_B is uniquely determined by B, and U_B, V_B are chosen to be suitable orthonormal matrices.

We first show that G(B) := ‖B‖₂²/(2τ) + ρ‖B‖_{nuc,φ} is convex. For λ ∈ [0,1] and matrices B₁, B₂, we get by subadditivity of the singular values (see, for instance, [60]) that

G(\lambda B_1 + (1-\lambda)B_2) = \sum_i \frac{1}{2\tau}\big(\sigma_i^{\lambda B_1 + (1-\lambda)B_2}\big)^2 + \rho\phi\big(\sigma_i^{\lambda B_1 + (1-\lambda)B_2}\big)
\le \sum_i \frac{1}{2\tau}\big(\lambda\sigma_i^{B_1} + (1-\lambda)\sigma_i^{B_2}\big)^2 + \rho\phi\big(\lambda\sigma_i^{B_1} + (1-\lambda)\sigma_i^{B_2}\big)
\le \sum_i \frac{\lambda}{2\tau}\big(\sigma_i^{B_1}\big)^2 + \frac{1-\lambda}{2\tau}\big(\sigma_i^{B_2}\big)^2 + \rho\lambda\phi\big(\sigma_i^{B_1}\big) + \rho(1-\lambda)\phi\big(\sigma_i^{B_2}\big)
= \lambda G(B_1) + (1-\lambda)G(B_2).

Now, with H(B) := ‖B − B₀‖₂²/(2τ) + ρ‖B‖_{nuc,φ}, we get that H(B) = G(B) − (1/(2τ))(2(B, B₀) − ‖B₀‖₂²), and thus also H is convex. Hence, first-order optimality conditions are necessary and sufficient, and we get (using the derivative of the singular values as in [45]), with DH the derivative of H, that B = prox_{τ,ρ‖·‖_{nuc,φ}}(B₀) is equivalent to

0 = DH(B) = (B - B_0) + \tau\rho\, U_B \operatorname{diag}\big((\phi'(\sigma_i^B))_i\big) V_B^T = -B_0 + U_B\big(\Sigma_B + \tau\rho \operatorname{diag}\big((\phi'(\sigma_i^B))_i\big)\big) V_B^T

and consequently to

\sigma_i^{B_0} = \sigma_i^B + \tau\rho\,\phi'(\sigma_i^B),

which is equivalent to

\sigma_i^B = \mathrm{prox}_{\tau,\rho\phi}\big(\sigma_i^{B_0}\big)

as claimed. The other results follow by direct computation.
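A NumPy sketch of this singular-value prox (following Lemma 15, but independent of the reference implementation [16]) could look as follows; prox_phi_scvx assumes τ ≤ 1/(2εδρ).

    import numpy as np

    def prox_phi_soft(sigma, tau, rho):
        """Scalar prox for phi(x) = x: soft-thresholding of the singular values."""
        return np.maximum(sigma - tau * rho, 0.0)

    def prox_phi_scvx(sigma, tau, rho, eps, delta):
        """Scalar prox for the semi-convex phi of (12), valid for tau <= 1/(2*eps*delta*rho)."""
        out = np.where(sigma <= tau * rho, 0.0,
                       (sigma - tau * rho) / (1.0 - 2.0 * eps * delta * tau * rho))
        thresh = 1.0 / (2.0 * eps) + tau * rho * (1.0 - delta)
        return np.where(sigma > thresh, sigma - tau * rho * (1.0 - delta), out)

    def prox_nuc(C, tau, rho, prox_phi=prox_phi_soft, **phi_args):
        """Prox of rho*||.||_{nuc,phi} with step tau (Lemma 15): reshape the lifted
        tensor to a (positions x atom-pixels) matrix, apply the scalar prox to its
        singular values and reshape back."""
        sh = C.shape
        U, sigma, Vt = np.linalg.svd(C.reshape(sh[0] * sh[1], sh[2] * sh[3]),
                                     full_matrices=False)
        return (U @ np.diag(prox_phi(sigma, tau, rho, **phi_args)) @ Vt).reshape(sh)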

Note also that, in Algorithm 1, KC⁺ returns the part of the image that is represented by the atoms (the "texture part") and RSV(C⁺) stands for the right-singular vectors of [(C⁺)_{(NM,nn)}] and returns the image atoms. For the sake of simplicity, we use a rather high, fixed number of iterations in all experiments but note that, alternatively, a duality-gap-based stopping criterion (see, for instance, [8]) could be used.

Numerical Results

In this section, we present numerical results obtained with the proposed method as well as its variants and compare to existing methods. We mostly focus on the setting of (DP) with φ(x) = |x| and use different data terms D. Hence, the regularization term is convex and consists of TGV_α² for the cartoon part and a weighted sum of a nuclear norm and an ℓ^{1,2} norm for the texture part. Besides this choice of regularization (called CT-cvx), we will compare to pure TGV_α² regularization (called TGV), to the setting of (DP) with the semi-convex potential φ as in (12) (called CT-scvx) and to the setting of (DP) with TGV replaced by I_{{0}}, i.e., only the texture norm is used for regularization, and φ(x) = |x| (called TXT). Further, in the last subsection, we also compare to other methods as specified there. For CT-cvx and CT-scvx, we use the algorithm described in the previous section (where convergence can only be ensured for CT-cvx), and for the other variants we use an adaptation of the algorithm to the respective special case.

We fix the size of the atom domain to 15×15 pixels and the stride to 3 pixels (see Sect. 4) for all experiments and use four different test images (see Fig. 4): The first two are synthetic images of size 120×120, containing four different blocks of size 60×60 each, so that their size is a multiple of the chosen atom-domain size. The third and fourth images are of size 128×128 (not a multiple of the atom-domain size), and the third image contains four sections of real images of size 64×64 each (again not a multiple of the atom-domain size). All but the first image contain a mixture of texture and cartoon parts. The first four subsections consider only the convex variants of our method (φ(x) = |x|), and the last one considers the improvement obtained with a non-convex potential φ and also compares to other approaches.

Fig. 4: The different test images, which we refer to as Texture, Patches, Mix and Barbara.

Regarding the choice of parameters for all methods, we generally aimed to reduce the number of varying parameters for each method as much as possible, such that for each method and type of experiment at most two parameters need to be optimized. Whenever we incorporate the second-order TGV functional for the cartoon part, we fix the parameters (α₀, α₁) to (2, 1). The method CT-cvx then essentially depends on the three parameters λ, μ, ν. We found the choice of ν to be rather independent of the data and type of experiment; hence, we leave it fixed for all experiments with incomplete or corrupted data, leaving our method with two parameters to be adapted: λ, defining the trade-off between data and regularization, and μ, defining the trade-off between cartoon and texture regularization. For the semi-convex potential, we choose ν as with the convex one, fix δ = 0.99 and use two different choices of ε depending on the type of experiment, hence again leaving two parameters to be adapted. A summary of the parameter choices for all methods is provided in Table 2.

Table 2.

Parameter choices for all methods and experiments used in the paper (Dcp. = decomposition, Inp. = inpainting, Den. = denoising, Dcv. = deconvolution). Here, λ always defines the trade-off between data fidelity and regularization, μ defines the trade-off between cartoon and texture, ν defines the trade-off between the ℓ^{1,2} norm and the penalization of singular values, and ε defines the degree of non-convexity of the semi-convex potential. Whenever a parameter was optimized over a certain range for each experiment, we write "opt"

        CT-cvx               CT-scvx                     TGV     TXT            BM3D    CL            CDL
        λ     μ     ν        λ     μ     ν      ϵ        λ       λ     ν        λ       λ     μ       λ
Dcp.    –     opt   0.95     –     –     –      –        –       –     0.75     –       –     –       –
Inp.    –     opt   0.975    –     opt   0.975  0.1      –       –     0.975    –       –     –       –
Den.    opt   opt   0.975    opt   opt   0.975  2.0      opt     opt   0.975    opt     opt   opt     opt
Dcv.    opt   opt   0.975    –     –     –      –        opt     –     –        –       –     –       –

We also note that, whenever we tested a range of different parameters for any method presented below, we show the visually best result in the figures. These are generally not the ones delivering the best result in terms of peak signal-to-noise ratio (PSNR); for the sake of completeness, we therefore also provide in Table 1 the best PSNR obtained with each method and each experiment over the range of tested parameters.

Table 1.

Best PSNR result achieved with each method for the parameter test range as specified in Table 2

                  Texture    Patches    Mix      Barbara
Inpainting
  TGV             10.32      19.28      20.19    20.58
  TXT/CT-cvx      17.59      25.55      23.38    23.48
  CT-scvx         –          32.74      –        23.6
Denoising
  TGV             11.83      23.96      23.74    23.99
  TXT/CT-cvx      16.06      25.91      26.07    25.0
  CT-scvx         –          29.4       –        25.56
  CL              –          29.09      –        25.14
  BM3D            –          30.82      –        28.15
  CDL             –          27.96      –        25.24
Deconvolution
  TGV             –          –          23.72    23.14
  CT-cvx          –          –          24.52    23.34

The best result for each experiment is written in bold

Image-Atom-Learning and Texture Separation

As a first experiment, we test the method CT-cvx for learning image atoms and for texture separation directly on the ground truth images. To this aim, we use

D_1 \equiv 0, \qquad D_2(u, f_0) = I_{\{0\}}(u - f_0),

and the proximal mapping of D₂ is a simple projection onto f₀. The results can be found in Fig. 5, where for the pure texture image we used only the texture norm (i.e., the method TXT) without the TGV part for regularization.

Fig. 5: Cartoon–texture decomposition (rows 2–4) and the nine most important learned atoms for different test images, obtained with the methods TXT (row 1) and CT-cvx (rows 2–4).

It can be observed that the proposed method achieves a good decomposition of cartoon and texture and is also able to learn the most important image structures effectively. While there are some repetitions of shifted structures among the atoms, the different structures are rather well separated, and the first nine atoms, corresponding to the nine largest singular values, still contain the most important features of the texture parts.

Inpainting and Learning from Incomplete Data

This section deals with the task of inpainting a partially available image and learning image atoms from these incomplete data. For reference, we also provide results with pure TGVα2 regularization (the method TGV). The data fidelity in this case is

D_1 \equiv 0, \qquad D_2(u, f_0) = I_{\{v \,:\, v_{i,j} = (f_0)_{i,j} \text{ for } (i,j) \in E\}}(u),

with E the index set of known pixels and the proximal mapping of D2 is a projection to f0 on all points in E. Again we use only the texture norm for the first image (the method TXT) and the cartoon–texture functional for the others.
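In code, this proximal mapping is simply a masked reset to the given data; a minimal NumPy sketch (the boolean mask encoding E and the names are ours) is:

    import numpy as np

    def prox_inpaint(u, f0, mask):
        """Prox of the constraint I_{v : v = f0 on E}: reset the known pixels
        (mask == True) to the data f0 and keep u elsewhere; the step size plays
        no role for an indicator function."""
        out = u.copy()
        out[mask] = f0[mask]
        return out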

The results can be found in Fig. 6. For the first and third images, 20% of the pixels were given, while for the other two, 30% were given. It can be seen that our method is generally still able to identify the underlying pattern of the texture part and to reconstruct it reasonably well. Also, the learned atoms are reasonable and in accordance with the ones learned from the full data in the previous section. In contrast to that, pure TGV regularization (which assumes piecewise smoothness) has no chance of reconstructing the texture patterns. For the cartoon part, both methods are comparable. It can also be observed that the target-like structure in the bottom right of the second image is not reconstructed well and is also not well captured by the atoms (only the eighth one contains parts of this structure). The reason might be that, due to the size of the repeating structure, there is not enough redundant information available to reconstruct it from the incomplete data. Concerning the optimal PSNR values of Table 1, we observe a rather strong improvement of CT-cvx over TGV.

Fig. 6: Image inpainting from incomplete data. From left to right: data, TGV-based reconstruction, proposed method (only TXT in the first row), nine most important learned atoms. Rows 1 and 3: 20% of the pixels given; rows 2 and 4: 30% of the pixels given.

Learning and Separation Under Noise

In this section, we test our method for image-atom learning and denoising with data corrupted by Gaussian noise (with standard deviation 0.5 and 0.1 times the image range for the Texture and the other images, respectively). Again, we compare to TGV regularization in this section (and to further methods in Sect. 5.5) and use the texture norm for the first image (the method TXT). The data fidelity in this case is

D_1 \equiv 0, \qquad D_2(u, f_0) = \tfrac{1}{2}\|u - f_0\|_2^2,

and \mathrm{prox}_{\tau,\lambda D_2(\cdot,f_0)}(u) = (u + \tau\lambda f_0)/(1 + \tau\lambda).
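This prox is a direct transcription of the stated formula; a minimal sketch (names are ours) is:

    def prox_l2_data(u, f0, tau, lam):
        """Prox of u -> tau*lam/2 * ||u - f0||^2, i.e. (u + tau*lam*f0)/(1 + tau*lam)."""
        return (u + tau * lam * f0) / (1.0 + tau * lam)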

The results are provided in Fig. 7. It can be observed that, also in the presence of rather strong noise, our method is able to learn some of the main features of the image within the learned atoms. Also, the quality of the reconstructed image is improved compared to TGV, in particular for the right-hand side of the Mix image, where the top-left structure is only visible in the result obtained with CT-cvx. On the other hand, the circle in the Patches image obtained with CT-cvx contains some artifacts from the texture part. Regarding the optimal PSNR values of Table 1, the improvement of CT-cvx over TGV is still rather significant.

Fig. 7: Denoising and atom learning. From left to right: noisy data, TGV-based reconstruction, proposed method (only TXT for the first image), nine most important learned atoms.

Deconvolution

This section deals with the learning of image features and image reconstruction in an inverse problem setting, where the forward operator is given as a convolution with a Gaussian kernel (standard deviation 0.25, kernel size 9×9 pixels), and the data are degraded by Gaussian noise with standard deviation 0.025 times the image range. The data fidelity in this case is

D_1(u, f_0) = \tfrac{1}{2}\|Au - f_0\|_2^2, \qquad D_2 \equiv 0,

with A the convolution operator, and \mathrm{prox}_{\sigma,(\lambda D_1(\cdot,f_0))^*}(u) = (u - \sigma f_0)/(1 + \sigma/\lambda).

We show results for the Mix and the Barbara images and compare to TGV in Fig. 8. It can be seen that the improvement is comparable to the denoising case. In particular, the method is still able to learn reasonable atoms from the given blurry and noisy data, and especially for the texture parts the improvement is quite significant.

Fig. 8: Reconstruction from blurry and noisy data. From left to right: data, TGV, proposed method, learned atoms.

Comparison

This section compares the method CT-cvx to its semi-convex variant CT-scvx and to other methods. At first, we consider the learning of atoms from incomplete data and image inpainting in Fig. 9. It can be seen there that, for the Patches image, the semi-convex variant achieves an almost perfect result: It is able to learn exactly the three atoms that compose the texture part of the image and to inpaint the image very well. For the Barbara image, where more atoms are necessary to synthesize the texture part, the two methods yield similar results and also the atoms are similar. These results are also reflected in the PSNR values of Table 1, where CT-scvx is more than 7 dB better for the Patches image but achieves only a slight improvement for Barbara.

Fig. 9: Comparison of CT-cvx and CT-scvx for inpainting with 30% of the pixels given. From left to right: data, convex, semi-convex, convex atoms (top) and semi-convex atoms (bottom).

Next, we consider the semi-convex variant CT-scvx for denoising the Patches and Barbara images of Fig. 7. In this setting, also other methods are applicable, and we compare to our own implementation of a variant of the convolutional Lasso algorithm (called CL), to BM3D denoising [22] (called BM3D) and to a reference implementation of convolutional dictionary learning (called CDL). For CL, we strive to solve the non-convex optimization problem

\min_{u, (c_i)_i, (p_i)_i} \mathrm{TV}_\rho\Big(u - \sum_{i=1}^k c_i \ast p_i\Big) + \sum_{i=1}^k \|c_i\|_1 + \|u - f_0\|_2^2 \quad \text{s.t.} \quad \|p_i\|_2 \le 1,\ \sum_{r,s} (p_i)_{r,s} = 0 \ \text{ for } i = 1, \dots, k,

where (c_i)_i are coefficient images, p_i are atoms and k is the number of atoms used. Note that we use the same boundary extension, atom-domain size and stride variable as in the methods CT-cvx and CT-scvx, and that TV_ρ denotes a discrete TV functional with a slight smoothing of the L¹ norm to make it differentiable (see the source code [16] for details). For the solution, we use an adaptation of the algorithm of [48]. For BM3D, we use the implementation obtained from [34]. For CDL, we use the convolutional dictionary learning implementation provided by the SPORCO library [56, 64]; more precisely, we adapted the convolutional dictionary learning example (cbpdndl_cns_gry), which uses a dictionary learning algorithm (dictlrn.cbpdndl.ConvBPDNDictLearn) based on the ADMM consensus dictionary update [28, 57]. Note that CDL addresses the same problem as CL; however, instead of including the TV component, the image is high-pass filtered prior to dictionary learning using Tikhonov regularization.

Remark 16

We note that, while we provide the comparison to BM3D in order to have a reference on achievable denoising quality, we do not aim to propose an improved denoising method comparable to BM3D. In contrast to BM3D, our method constitutes a variational (convex) approach that is generally applicable to inverse problems and for which we were able to provide a detailed analysis in function space, such that in particular stability and convergence results for vanishing noise can be proven. Furthermore, beyond mere image reconstruction, we regard the ability of simultaneous image-atom learning and cartoon–texture decomposition as an important feature of our approach.

Results for the Patches and Barbara images can be found in Fig. 10, where for CL and CDL we allowed for three atoms for the Patches image and tested 3, 5 and 7 atoms for the Barbara image, showing the best result, which was obtained with 7 atoms. Note that for all methods, parameters were chosen and optimized according to Table 2, and for the CDL method we also tested the standard setting of 64 atoms of size 8×8, which performed worse than the choice of 7 atoms. In this context, it is important to note that the implementation of CDL was designed for dictionary learning from a set of clean training images, for which it makes sense to learn a large number of atoms. When "misusing" the method for joint learning and denoising, it is natural that the number of admissible atoms needs to be constrained to achieve a regularizing effect.

Fig. 10: Comparison of different methods for denoising the Patches and Barbara images from Fig. 7. First row for each image, from left to right: noisy data, BM3D, CDL. Second row for each image, from left to right: CL, CT-cvx and CT-scvx. The four most important learned atoms are shown to the right of the respective image, where applicable.

Looking at Fig. 10, it can be seen that, as with the inpainting results, CT-scvx achieves a very strong improvement over CT-cvx for the Patches image (recovering the atoms almost perfectly) and only a slight improvement for the Barbara image. For the Patches image, the CL and CDL methods perform similarly to, but slightly worse than, CT-scvx. While they also identify the three main features correctly, the atoms are not centered, which leads to artifacts in the reconstruction and might be explained by the methods being stuck in a local minimum. For this image, the result of BM3D is comparable to, but slightly smoother than, the one of CT-scvx. In particular, the target-like structure in the bottom left is not reconstructed very well by BM3D, but it suffers from less remaining noise. For the Barbara image, BM3D delivers the best result, although a slight over-smoothing is visible. Regarding the PSNR values of Table 1, BM3D performs best and CT-scvx second best (better than CL and CDL), where, in accordance with the visual results, the difference between BM3D and CT-scvx is smaller for the Patches image than for Barbara.

Discussion

Using lifting techniques, we have introduced a (potentially convex) variational approach for learning image atoms from corrupted and/or incomplete data. An important part of our work is the analysis of the proposed model, which provides well-posedness results in function space for a general inverse problem setting. The numerical part shows that our model can indeed learn image atoms effectively from different types of data. While this works well already in the convex setting, moving to a semi-convex setting (which is also captured by our theory) yields a further, significant improvement. While the proposed method can also be regarded solely as an image reconstruction method, we believe its main feature is in fact the ability to learn image atoms from incomplete data in a mathematically well-understood framework. In this context, it is important to note that we expect our approach to work well whenever the non-cartoon part of the underlying image is well described by only a few filters. This is natural, since we learn from only a single dataset, and allowing for a large number of different atoms would remove the regularizing effect of our approach.

As discussed in the introduction, the learning of convolutional image atoms is strongly related to deep neural networks, in particular when using a multilevel setting. Motivated by this, future research questions include an extension of our method in this direction as well as its exploration for classification problems.

Acknowledgements

Open access funding provided by Austrian Science Fund (FWF). MH acknowledges support by the Austrian Science Fund (FWF) (Grant J 4112). TP is supported by the European Research Council under the Horizon 2020 program, ERC starting Grant Agreement 640156.

Biographies

Antonin Chambolle

studied at École Normale Supérieure (Paris) and holds a PhD (1993) in applied mathematics from U. Paris Dauphine, supervised by Jean-Michel Morel. After a post-doc at SISSA, Trieste, he has worked as a CNRS Junior Scientist at U. Paris Dauphine and, since 2003, as a CNRS Junior and then Senior Scientist at CMAP, École Polytechnique, Paris. He has also been a French Government Fellow at Churchill College, U. Cambridge (DAMTP) in 2015–16. His research, mostly in mathematical analysis, focuses on the calculus of variations, free boundary and free discontinuity problems, interface motion, and numerical optimization with applications to (fracture) mechanics and imaging.

Martin Holler

received his MSc (2005–2010) and his PhD (2010–2013) with a "promotio sub auspiciis praesidentis rei publicae" in Mathematics from the University of Graz. After research stays at the University of Cambridge, UK, and the École Polytechnique, Paris, he currently holds a University Assistant position at the Institute of Mathematics and Scientific Computing of the University of Graz. His research interests include inverse problems and mathematical image processing, in particular the development and analysis of mathematical models in this context as well as applications in biomedical imaging, image compression and beyond.

Thomas Pock

received his MSc (1998–2004) and his PhD (2005–2008) in Computer Engineering (Telematik) from Graz University of Technology. After a post-doc position at the University of Bonn, he moved back to Graz University of Technology, where he has been an Assistant Professor at the Institute for Computer Graphics and Vision. In 2013, Thomas Pock received the START prize of the Austrian Science Fund (FWF) and the German Pattern Recognition Award of the German Association for Pattern Recognition (DAGM), and in 2014 he received a Starting Grant from the European Research Council (ERC). Since June 2014, Thomas Pock has been a Professor of Computer Science at Graz University of Technology. The focus of his research is the development of mathematical models for computer vision and image processing as well as the development of efficient convex and non-smooth optimization algorithms.

A Appendix: Tensor Spaces

We recall here some basic results on tensor products of Banach spaces that are relevant for our work. Most of these results are obtained from [24, 53], to which we refer for further information and a more complete introduction to the topic.

Throughout this section, let (X, ‖·‖_X), (Y, ‖·‖_Y), (Z, ‖·‖_Z) always be Banach spaces. By X^*, we denote the analytic dual of X, i.e., the space of bounded linear functionals from X to R. By L(X, Y) and B(X×Y, Z), we denote the spaces of bounded linear and bilinear mappings, respectively, where the norm for the latter is given by ‖B‖ = sup{‖B(x,y)‖_Z : ‖x‖_X ≤ 1, ‖y‖_Y ≤ 1}. In case the image space is the reals, we write L(X) and B(X×Y).

Algebraic Tensor Product The tensor product x ⊗ y of two elements x ∈ X, y ∈ Y can be defined as a linear mapping on the space of bilinear forms on X×Y via

(x \otimes y)(A) = A(x, y).

The algebraic tensor product X ⊗ Y is then defined as the subspace of the space of linear functionals on B(X×Y) spanned by the elements x ⊗ y with x ∈ X, y ∈ Y.

Tensor Norms We will use two different tensor norms: the projective and the injective tensor norm (also known as the largest and the smallest reasonable cross norm, respectively). The projective tensor norm on X ⊗ Y is defined for C ∈ X ⊗ Y as

\|C\|_\pi := \inf\Big\{ \sum_{i=1}^n \|x_i\|_X \|y_i\|_Y \ \Big|\ C = \sum_{i=1}^n x_i \otimes y_i,\ n \in \mathbb{N} \Big\}.

Note that ‖·‖_π is indeed a norm and that ‖x ⊗ y‖_π = ‖x‖_X ‖y‖_Y (see [53, Proposition 2.1]). We denote by X ⊗_π Y the completion of the space X ⊗ Y equipped with this norm. The following result gives a useful representation of elements in X ⊗_π Y and of their projective norm.

Proposition 17

For C ∈ X ⊗_π Y and ε > 0, there exist bounded sequences (x_n)_n ⊂ X, (y_n)_n ⊂ Y such that

C = \sum_{n=1}^\infty x_n \otimes y_n \quad \text{and} \quad \sum_{n=1}^\infty \|x_n\|_X \|y_n\|_Y < \|C\|_\pi + \epsilon.

In particular,

\|C\|_\pi = \inf\Big\{ \sum_{i=1}^\infty \|x_i\|_X \|y_i\|_Y \ \Big|\ C = \sum_{i=1}^\infty x_i \otimes y_i \Big\}.

Now, for the injective tensor norm, we note that elements of the tensor product X ⊗ Y can be viewed as bounded bilinear forms on X^* × Y^* by associating with a tensor C = \sum_{i=1}^n x_i ⊗ y_i the bilinear form B_C(φ, ψ) = \sum_{i=1}^n φ(x_i)ψ(y_i), where this association is unique (see [53, Section 1.3]). Hence, X ⊗ Y can be regarded as a subspace of B(X^* × Y^*), and the injective tensor norm is the norm induced by this space. Thus, for C = \sum_{i=1}^n x_i ⊗ y_i the injective tensor norm ‖·‖_i is given as

\|C\|_i = \sup\Big\{ \Big|\sum_{i=1}^n \varphi(x_i)\psi(y_i)\Big| \ :\ \|\varphi\|_{X^*} \le 1,\ \|\psi\|_{Y^*} \le 1 \Big\},

and the injective tensor product X ⊗_i Y is defined as the completion of X ⊗ Y with respect to this norm.

Tensor Lifting The next result (see [53, Theorem 2.9]) shows that there is a one-to-one correspondence between bounded bilinear mappings from X×Y to Z and bounded linear mappings from XπY to Z.

Proposition 18

For B ∈ B(X×Y, Z), there exists a unique linear mapping B̂: X ⊗_π Y → Z such that B̂(x ⊗ y) = B(x, y). Further, B̂ is bounded, and the mapping B ↦ B̂ is an isometric isomorphism between the Banach spaces B(X×Y, Z) and L(X ⊗_π Y, Z).

Using this isometry, for B ∈ B(X×Y, Z) we will always denote by B̂ the corresponding linear mapping on the tensor product.

The following result is provided in [53, Proposition 2.3] and deals with the extension of linear operators to the tensor product.

Proposition 19

Let S ∈ L(X, W) and T ∈ L(Y, Z). Then there exists a unique operator S ⊗_π T: X ⊗_π Y → W ⊗_π Z such that (S ⊗_π T)(x ⊗ y) = (Sx) ⊗ (Ty). Furthermore, ‖S ⊗_π T‖ = ‖S‖ ‖T‖.

Tensor Space Isometries The following proposition deals with the duality of the injective and the projective tensor products. To this aim, we need the notions of the Radon–Nikodým property and the approximation property, which we will not define here but rather refer to [53, Sections 4 and 5] and [24]. For our purposes, it is important to note that both properties hold for L^r-spaces with r ∈ (1, ∞) and that the Radon–Nikodým property holds for reflexive spaces; while we cannot expect the Radon–Nikodým property to hold for L^∞ and M, the approximation property does.

Lemma 20

Assume that either X^* or Y^* has the Radon–Nikodým property and that either X^* or Y^* has the approximation property. Then

(X \otimes_i Y)^* \cong X^* \otimes_\pi Y^*,

and for simple tensors C = \sum_{i=1}^n x_i \otimes y_i \in X \otimes_i Y and C^* = \sum_{j=1}^m x_j^* \otimes y_j^* \in X^* \otimes_\pi Y^*, the duality pairing is given as

\langle C^*, C \rangle = \sum_{i=1}^n \sum_{j=1}^m \langle x_j^*, x_i \rangle \langle y_j^*, y_i \rangle.

Proof

The identification of the duals is shown in [53, Theorem 5.33]. For the duality pairing, we first note that the action of an element C^* ∈ X^* ⊗_π Y^* on X ⊗_i Y is given as the action of the associated bilinear form B_{C^*} [53, Section 3.4], which for simple tensors C = \sum_{i=1}^n x_i \otimes y_i can be written as

\langle B_{C^*}, C \rangle = \sum_{i=1}^n B_{C^*}(x_i, y_i).

Now, in case also C^* is a simple tensor, i.e., C^* = \sum_{j=1}^m x_j^* \otimes y_j^*, the action of this bilinear form can be given more explicitly [53, Section 1.3], which yields

\langle B_{C^*}, C \rangle = \sum_{i=1}^n \sum_{j=1}^m \langle x_j^*, x_i \rangle \langle y_j^*, y_i \rangle.

The duality between the injective and the projective tensor products will be used for compactness assertions on subsets of the latter. To this aim, we note in the following lemma that separability of the individual spaces transfers to the tensor product. As a consequence, in case X and Y satisfy the assumptions of Lemma 20 and both admit a separable predual, also X ⊗_π Y admits a separable predual, and hence bounded sets are weakly* compact.

Lemma 21

Let X and Y be separable. Then both X ⊗_i Y and X ⊗_π Y are separable.

Proof

Take X̃ and Ỹ to be dense countable subsets of X and Y, respectively. First note that it suffices to show that any simple tensor x ⊗ y can be approximated arbitrarily well by x̃ ⊗ ỹ with x̃ ∈ X̃, ỹ ∈ Ỹ. But this is true since (using [53, Propositions 2.1 and 3.1])

\|x \otimes y - \tilde{x} \otimes \tilde{y}\| \le \|x \otimes y - x \otimes \tilde{y}\| + \|x \otimes \tilde{y} - \tilde{x} \otimes \tilde{y}\| = \|x\|\,\|y - \tilde{y}\| + \|\tilde{y}\|\,\|x - \tilde{x}\|,

where ‖·‖ denotes either the projective or the injective norm.

The following result, which can be obtained by direct modification of the result shown at the beginning of [53, Section 3.2], provides an equivalent representation of the injective tensor product in a particular case.

Lemma 22

Denote by C_c(Ω_Σ, X) the space of compactly supported continuous functions mapping from Ω_Σ to X and denote by C_0(Ω_Σ, X) its completion with respect to the norm ‖φ‖_∞ := sup_{t ∈ Ω_Σ} ‖φ(t)‖_X. Then, we have that

C_0(\Omega_\Sigma) \otimes_i X \cong C_0(\Omega_\Sigma, X),

where the isometry is given as the completion of the isometric mapping J: C_0(Ω_Σ) ⊗ X → C_0(Ω_Σ, X) defined for C = \sum_{i=1}^n f_i \otimes x_i as

(JC)(t) := \sum_{i=1}^n f_i(t)\, x_i.

Next we consider the identification of tensor products with linear operators which is provided in the following proposition [53, Corollary 4.8].

Proposition 23

Define the mapping J: X^* ⊗_π Y → L(X, Y) as

C = \sum_{n=1}^\infty \varphi_n \otimes y_n \ \mapsto\ L_C: X \to Y, \quad \text{where} \quad L_C(x) = \sum_{n=1}^\infty \varphi_n(x)\, y_n.

Then, J is well defined and has unit norm. Defining N(X, Y) ⊂ L(X, Y) as the range of J, equipped with the norm

\|T\|_{\mathrm{nuc}} = \inf\Big\{ \sum_{n=1}^\infty \|\varphi_n\|_{X^*} \|y_n\|_Y \ :\ T(x) = \sum_{n=1}^\infty \varphi_n(x)\, y_n \Big\},

we get that N(X, Y) is a Banach space, called the space of nuclear operators. If, further, either X^* or Y has the approximation property, then J is an isometric isomorphism, that is, we can identify

X^* \otimes_\pi Y = N(X, Y).

It is easy to see that nuclear operators are compact and that we can equivalently write

\|T\|_{\mathrm{nuc}} = \inf\Big\{ \sum_{i=1}^\infty \sigma_i \ :\ T(x) = \sum_{i=1}^\infty \sigma_i \varphi_i(x)\, y_i,\ \|\varphi_i\|_{X^*} \le 1,\ \|y_i\|_Y \le 1 \Big\}.

Also, in a Hilbert space setting (see [63] for details), it is a classical result that for any compact T ∈ L(H_1, H_2), with (H_1, (·,·)) and (H_2, (·,·)) Hilbert spaces, there exist orthonormal systems (x_i)_i, (y_i)_i and uniquely defined singular values (σ_i)_i := (σ_i(T))_i such that

Tx = \sum_{i=1}^\infty \sigma_i (x, x_i)\, y_i.

In addition, in case T has finite nuclear norm, it follows that ‖T‖_{nuc} = \sum_{i=1}^\infty \sigma_i.
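In the finite-dimensional case, this identity can be checked directly; a small, purely illustrative NumPy example (not related to the implementation [16]) is:

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.standard_normal((5, 3))                      # a finite-rank operator
    sigma = np.linalg.svd(T, compute_uv=False)           # its singular values
    print(np.isclose(sigma.sum(), np.linalg.norm(T, ord='nuc')))  # nuclear norm = sum of singular values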

Footnotes

The Institute of Mathematics and Scientific Computing is a member of NAWI Graz (http://www.nawigraz.at) and BioTechMed Graz (http://www.biotechmed.at).

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

A. Chambolle, Email: antonin.chambolle@cmap.polytechnique.fr

M. Holler, Email: martin.holler@uni-graz.at

T. Pock, Email: pock@icg.tugraz.at

References

1. Adler J, Öktem O. Learned primal-dual reconstruction. IEEE Trans. Med. Imaging. 2018;37(6):1322–1332. doi: 10.1109/TMI.2018.2799231.
2. Aharon M, Elad M, Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006;54(11):4311–4322.
3. Ahmed A, Recht B, Romberg J. Blind deconvolution using convex programming. IEEE Trans. Inf. Theory. 2013;60(3):1711–1732.
4. Arora, S., Ge, R., Ma, T., Moitra, A.: Simple, efficient, and neural algorithms for sparse coding. In: Grünwald, P., Hazan, E., Kale, S. (eds.) Proceedings of The 28th Conference on Learning Theory, Proceedings of Machine Learning Research, vol. 40, pp. 113–149. PMLR (2015)
5. Aubert G, Kornprobst P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Berlin: Springer; 2006.
6. Bach, F., Mairal, J., Ponce, J.: Convex sparse matrix factorizations. arXiv preprint arXiv:0812.1869 (2008)
7. Bredies K, Holler M. Regularization of linear inverse problems with total generalized variation. J. Inverse Ill Posed Probl. 2014;22(6):871–913.
8. Bredies K, Holler M. A TGV-based framework for variational image decompression, zooming and reconstruction. Part II: Numerics. SIAM J. Imaging Sci. 2015;8(4):2851–2886.
9. Bredies K, Kunisch K, Pock T. Total generalized variation. SIAM J. Imaging Sci. 2010;3(3):492–526.
10. Bredies K, Lorenz DA. Regularization with non-convex separable constraints. Inverse Probl. 2009;25(8):085011.
11. Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. In: Proceedings of SampTA 2011—9th International Conference on Sampling Theory and Applications, Singapore (2011)
12. Brezis H. Functional Analysis, Sobolev Spaces and Partial Differential Equations. Berlin: Springer; 2010.
13. Buades A, Coll B, Morel J-M. A non-local algorithm for image denoising. Proc. CVPR. 2005;2:60–65.
14. Calatroni, L., Cao, C., De Los Reyes, J.C., Schönlieb, C.-B., Valkonen, T.: Bilevel approaches for learning of variational imaging models. In: Variational Methods in Imaging and Geometric Control, Radon Series on Computational and Applied Mathematics, vol. 18, pp. 252–290 (2016)
15. Campisi P, Egiazarian K. Blind Image Deconvolution: Theory and Applications. Boca Raton: CRC Press; 2016.
16. Chambolle, A., Holler, M., Pock, T.: Source code to reproduce the results of "A convex variational model for learning convolutional image atoms from incomplete data". https://github.com/hollerm/convex_learning
17. Chambolle A, Pock T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011;40(1):120–145.
18. Chambolle A, Pock T. An introduction to continuous optimization for imaging. Acta Numer. 2016;25:161–319.
19. Chandrasekaran V, Recht B, Parrilo PA, Willsky AS. The convex geometry of linear inverse problems. Found. Comput. Math. 2012;12(6):805–849.
20. Chaudhuri S, Velmurugan R, Rameshan R. Blind Image Deconvolution. Berlin: Springer; 2016.
21. Chi Y. Guaranteed blind sparse spikes deconvolution via lifting and convex optimization. IEEE J. Sel. Top. Signal Process. 2016;10(4):782–794.
22. Dabov K, Foi A, Katkovnik V, Egiazarian K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007;16(8):2080–2095. doi: 10.1109/tip.2007.901238.
23. Delon, J., Desolneux, A., Sutour, C., Viano, A.: RNLp: Mixing non-local and TV-Lp methods to remove impulse noise from images. MAP5 2016-29, Hal-preprint Nr. hal01381063v2 (2017)
24. Diestel J, Uhl JJ. Vector Measures. Providence: American Mathematical Society; 1977.
25. Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
26. Figueiredo, M.A.: Synthesis versus analysis in patch-based image priors. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1338–1342. IEEE (2017)
27. Gao Y, Bredies K. Infimal convolution of oscillation total generalized variation for the recovery of images with structured texture. SIAM J. Imaging Sci. 2018;11(3):2021–2063.
28. Garcia-Cardona C, Wohlberg B. Convolutional dictionary learning: A comparative review and new algorithms. IEEE Trans. Comput. Imaging. 2018;4(3):366–381.
29. Haeffele, B.D., Vidal, R.: Structured low-rank matrix factorization: Global optimality, algorithms, and applications. IEEE Trans. Pattern Anal. Mach. Intell. (2019)
30. Hintermüller M, Rautenberg CN. Optimal selection of the regularization function in a weighted total variation model. Part I: Modelling and theory. J. Math. Imaging Vis. 2017;59(3):498–514.
31. Hofmann B, Kaltenbacher B, Pöschl C, Scherzer O. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Probl. 2007;23(3):987–1010.
32. Holler M, Huber R, Knoll F. Coupled regularization with multiple data discrepancies. Inverse Probl. 2018;34(8):084003. doi: 10.1088/1361-6420/aac539.
33. Holler M, Kunisch K. On infimal convolution of TV-type functionals and applications to video and image reconstruction. SIAM J. Imaging Sci. 2014;7(4):2258–2300.
34. Implementation of BM3D denoising, v2.00 (30 January 2014). Obtained from http://www.cs.tut.fi/~foi/GCF-BM3D, 1 Oct 2018
35. Kobler, E., Klatzer, T., Hammernik, K., Pock, T.: Variational networks: Connecting variational methods and deep learning. In: German Conference on Pattern Recognition, pp. 281–293. Springer (2017)
36. Kunisch K, Pock T. A bilevel optimization approach for parameter learning in variational models. SIAM J. Imaging Sci. 2013;6(2):938–983.
37. Lebrun M, Colom M, Buades A, Morel J-M. Secrets of image denoising cuisine. Acta Numer. 2012;21:475–576.
38. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436. doi: 10.1038/nature14539.
39. Lewicki, M.S., Sejnowski, T.J.: Coding time-varying signals using sparse, shift-invariant representations. In: Advances in Neural Information Processing Systems, pp. 730–736 (1999)
40. Ling S, Strohmer T. Regularized gradient descent: A non-convex recipe for fast joint blind deconvolution and demixing. Inf. Inference J. IMA. 2018;8(1):1–49.
41. Lunz, S., Öktem, O., Schönlieb, C.-B.: Adversarial regularizers in inverse problems. arXiv preprint arXiv:1805.11572 (2018)
42. Mallat S. A Wavelet Tour of Signal Processing—The Sparse Way with Contributions from Gabriel Peyré. 3rd ed. Amsterdam: Elsevier; 2009.
43. Meyer Y. Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence: American Mathematical Society; 2001.
44. Oymak S, Jalali A, Fazel M, Eldar YC, Hassibi B. Simultaneously structured models with application to sparse and low-rank matrices. IEEE Trans. Inf. Theory. 2015;61(5):2886–2908.
45. Papadopoulo, T., Lourakis, M.I.: Estimating the Jacobian of the singular value decomposition: Theory and applications. In: European Conference on Computer Vision, pp. 554–570. Springer (2000)
46. Papyan V, Romano Y, Elad M. Convolutional neural networks analyzed via convolutional sparse coding. J. Mach. Learn. Res. 2017;18(1):2887–2938.
47. Papyan V, Romano Y, Sulam J, Elad M. Theoretical foundations of deep learning via sparse representations: A multilayer sparse model and its connection to convolutional neural networks. IEEE Signal Process. Mag. 2018;35(4):72–89.
48. Pock T, Sabach S. Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems. SIAM J. Imaging Sci. 2016;9(4):1756–1787.
49. Richard E, Bach FR, Vert J-P, et al. Intersecting singularities for multi-structured estimation. ICML. 2013;3:1157–1165.
50. Richard, E., Obozinski, G.R., Vert, J.-P.: Tight convex relaxations for sparse matrix factorization. In: Advances in Neural Information Processing Systems, pp. 3284–3292 (2014)
51. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D Nonlinear Phenom. 1992;60(1–4):259–268.
52. Rudin W. Fourier Analysis on Groups. Mineola: Courier Dover Publications; 2017.
53. Ryan RA. Introduction to Tensor Products of Banach Spaces. Berlin: Springer; 2013.
54. Scherzer O, Grasmair M, Grossauer H, Haltmeier M, Lenzen F. Variational Methods in Imaging. Berlin: Springer; 2009.
55. Schnass K. Convergence radius and sample complexity of ITKM algorithms for dictionary learning. Appl. Comput. Harmonic Anal. 2018;45(1):22–58.
56. SParse Optimization Research COde (SPORCO), v0.1.11 (April 15, 2019). Obtained from https://github.com/bwohlberg/sporco, 26 June 2019
57. Šorel M, Šroubek F. Fast convolutional sparse coding using matrix inversion lemma. Digital Signal Process. 2016;55:44–51.
58. Sulam J, Papyan V, Romano Y, Elad M. Multilayer convolutional sparse modeling: Pursuit and dictionary learning. IEEE Trans. Signal Process. 2018;66(15):4090–4104.
59. Sun J, Qu Q, Wright J. Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Trans. Inf. Theory. 2016;63(2):853–884.
60. Thompson R. Singular value inequalities for matrix sums and minors. Linear Algebra Appl. 1975;11(3):251–269.
61. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454 (2018)
62. Weickert J. Anisotropic Diffusion in Image Processing. Stuttgart: Teubner; 1998.
63. Weidmann J. Linear Operators in Hilbert Spaces. Berlin: Springer; 1980.
64. Wohlberg, B.: SPORCO: A Python package for standard and convolutional sparse representations. In: Proceedings of the 15th Python in Science Conference, Austin, TX, USA, pp. 1–8 (2017)
65. Zeiler, M.D., Krishnan, D., Taylor, G.W., Fergus, R.: Deconvolutional networks. In: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2528–2535. IEEE (2010)
