Abstract
We consider a bilevel optimisation approach for parameter learning in higher-order total variation image reconstruction models. Apart from the least squares cost functional, naturally used in bilevel learning, we propose and analyse an alternative cost based on a Huber-regularised TV seminorm. Differentiability properties of the solution operator are verified and a first-order optimality system is derived. Based on the adjoint information, a combined quasi-Newton/semismooth Newton algorithm is proposed for the numerical solution of the bilevel problems. Numerical experiments are carried out to show the suitability of our approach and the improved performance of the new cost functional. Thanks to the bilevel optimisation framework, a detailed comparison between TGV² and ICTV is also carried out, showing the advantages and shortcomings of both regularisers, depending on the structure of the processed images and their noise level.
Keywords: Bilevel optimisation, Total variation regularisers, Image quality measures
Introduction
In this paper, we propose a bilevel optimisation approach for parameter learning in higher-order total variation regularisation models for image restoration. The reconstruction of an image from imperfect measurements is essential for all research which relies on the analysis and interpretation of image content. Mathematical image reconstruction approaches aim to maximise the information gain from acquired image data by intelligent modelling and mathematical analysis.
A variational image reconstruction model can be formalised as follows: Given data f which is related to an image (or to certain image information, e.g. a segmented or edge detected image) u through a generic forward operator (or function) K, the task is to retrieve u from f. In most realistic situations, this retrieval is complicated by the ill-posedness of K as well as random noise in f. A widely accepted method that approximates this ill-posed problem by a well-posed one and counteracts the noise is the method of Tikhonov regularisation. That is, an approximation to the true image is computed as a minimiser of
$\min_{u}\; \alpha\, R(u) + d(Ku, f),$ | 1.1 |
where R is a regularising energy that models a-priori knowledge about the image u, d is a suitable distance function that models the relation of the data f to the unknown u, and α > 0 is a parameter that balances our trust in the forward model against the need of regularisation. The parameter α, in particular, depends on the amount of ill-posedness in the operator K and the amount (amplitude) of the noise present in f. A key issue in imaging inverse problems is the correct choice of α, of the image prior (regularisation functional R), of the fidelity term d and (if applicable) the choice of what to measure (the linear or non-linear operator K). Depending on this choice, different reconstruction results are obtained.
While functional modelling (1.1) constitutes a mathematically rigorous and physical way of setting up the reconstruction of an image—providing reconstruction guarantees in terms of error and stability estimates—it is limited with respect to its adaptivity for real data. On the other hand, data-based modelling of reconstruction approaches is set up to produce results which are optimal with respect to the given data. However, in general, it neither offers insights into the structural properties of the model nor provides comprehensible reconstruction guarantees. Indeed, we believe that for the development of reliable, comprehensible and at the same time effective models (1.1), it is essential to aim for a unified approach that seeks tailor-made regularisation and data models by combining model- and data-based approaches.
To do so, we focus on a bilevel optimisation strategy for finding an optimal setup of variational regularisation models (1.1). That is, for a given training pair (f, f₀) of noisy and original clean images, respectively, we consider a learning problem of the form
$\min_{\alpha}\; F(u_{\alpha})\qquad \text{subject to}\qquad u_{\alpha} \in \arg\min_{u}\; \alpha\, R(u) + d(Ku, f),$ | 1.2 |
where F is a generic cost functional that measures the fitness of the reconstruction u_α to the training image f₀. The argument of the minimisation problem will depend on the specific setup (i.e. the degrees of freedom) in the constraint problem (1.1). In particular, we propose a bilevel optimisation approach for learning optimal parameters in higher-order total variation regularisation models for image reconstruction, in which the arguments of the optimisation constitute the parameters in front of the first- and higher-order regularisation terms.
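To make the nested structure of this learning problem concrete, the following minimal sketch replaces the non-smooth regularisers studied in this paper by a smooth quadratic (H¹-type) penalty, so that the lower-level problem reduces to a linear solve; it illustrates the bilevel principle only, not the actual TGV²/ICTV learning problem analysed below, and all function names in it are our own.

```python
# Minimal bilevel parameter-learning sketch (illustration only): the lower-level
# problem is a smooth quadratic denoising model, not the non-smooth TV/TGV models
# analysed in this paper; it merely exposes the nested structure of the problem.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
f0 = np.sin(2 * np.pi * t)                    # clean training signal
f = f0 + 0.2 * rng.standard_normal(n)         # noisy observation

# 1D discrete Laplacian (acts as the smooth surrogate regulariser)
L = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0

def solve_lower_level(alpha):
    """argmin_u 0.5*||u - f||^2 + 0.5*alpha*||D u||^2, available as a linear solve."""
    return np.linalg.solve(np.eye(n) + alpha * L, f)

def upper_cost(alpha):
    """Fitness of the reconstruction to the ground truth, here a squared L2 cost."""
    u = solve_lower_level(alpha)
    return 0.5 * np.sum((u - f0) ** 2)

res = minimize_scalar(upper_cost, bounds=(1e-6, 10.0), method="bounded")
print("learned alpha:", res.x, "upper-level cost:", res.fun)
```

In the learning problems considered in this paper, the scalar weight is replaced by the parameter pair (α, β) and the lower-level linear solve by the Huber-regularised higher-order denoising problem.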
Rather than working on the discrete problem, as is done in standard parameter learning and model optimisation methods, we optimise the regularisation models in infinite-dimensional function space. The resulting problems are difficult to treat due to the non-smooth structure of the lower level problem, which makes it impossible to verify standard constraint qualification conditions for Karush–Kuhn–Tucker (KKT) systems. Therefore, in order to obtain characterising first-order necessary optimality conditions, alternative analytical approaches have emerged, in particular regularisation techniques [4, 20, 28]. We consider such an approach here and study the related regularised problem in depth. In particular, we prove the Fréchet differentiability of the regularised solution operator, which enables us to obtain an optimality condition for the problem under consideration and an adjoint state for the efficient numerical solution of the problem. The bilevel problems under consideration are related to the emerging field of generalised mathematical programmes with equilibrium constraints (MPEC) in function space. Let us remark that even for finite-dimensional problems, there are few recent references dealing with stationarity conditions and solution algorithms for this type of problems (see, e.g. [18, 30, 33, 34, 38]).
Let us give an account of the state of the art of bilevel optimisation for model learning. In machine learning, bilevel optimisation is well established. It is a semi-supervised learning method that optimally adapts itself to a given dataset of measurements and desirable solutions. In [15, 23, 43], for instance, the authors consider bilevel optimisation for finite-dimensional Markov random field models. In inverse problems, the optimal inversion and experimental acquisition setup is discussed in the context of optimal model design in works by Haber, Horesh and Tenorio [25, 26], as well as Ghattas et al. [3, 9]. Recently, parameter learning in the context of functional variational regularisation models (1.1) has also entered the image processing community with works by the authors [10, 22], Kunisch, Pock and co-workers [14, 33], Chung et al. [16] and Hintermüller et al. [30].
Apart from the work of the authors [10, 22], all approaches so far are formulated and optimised in the discrete setting. Our subsequent modelling, analysis and optimisation will be carried out in function space rather than on a discretisation of (1.1). While digitally acquired image data are of course discrete, the aim of high-resolution image reconstruction and processing is always to compute an image that is close to the real (analogue, infinite dimensional) world. Hence, it makes sense to seek images which have certain properties in an infinite dimensional function space. That is, we aim for a processing method that accentuates and preserves qualitative properties in images independent of the resolution of the image itself [45]. Moreover, optimisation methods conceived in function space potentially result in numerical iterative schemes which are resolution and mesh independent upon discretisation [29].
Higher-order total variation regularisation has been introduced as an extension of the standard total variation regulariser in image processing. As the total variation (TV) model [41] and many subsequent contributions in the image processing community have shown, a non-smooth first-order regularisation procedure results in a non-linear smoothing of the image, which smooths more in homogeneous areas of the image domain while preserving characteristic structures such as edges. In particular, the TV regulariser is tuned towards the preservation of edges and performs very well if the reconstructed image is piecewise constant. The drawback of such a regularisation procedure becomes apparent as soon as images or signals (in 1D) are considered which do not only consist of constant regions and jumps, but also possess more complicated, higher-order structures, e.g. piecewise linear parts. The artefact introduced by TV regularisation in this case is called staircasing [40]. One possibility to counteract such artefacts is the introduction of higher-order derivatives in the image regularisation. Chambolle and Lions [11], for instance, propose a higher-order method by means of an infimal convolution of the TV of the image and the TV of the image gradient, called the Infimal Convolution Total Variation (ICTV) model. Other approaches to combining first- and second-order regularisation originate, for instance, from Chan et al. [12], who consider total variation minimisation together with weighted versions of the Laplacian, from the Euler-elastica functional [13, 37], which combines total variation regularisation with curvature penalisation, and many more [35, 39], just to name a few. Recently, Bredies et al. have proposed Total Generalized Variation (TGV) [5] as a higher-order variant of TV regularisation.
In this work, we mainly concentrate on two second-order total variation models: the recently proposed TGV² [5] and the ICTV model of Chambolle and Lions [11]. We focus on second-order TV regularisation only, since this seems to be the most relevant case in imaging applications [6, 31]. For Ω ⊂ ℝ² open and bounded and u ∈ BV(Ω), the ICTV regulariser reads
$\mathrm{ICTV}_{\alpha,\beta}(u) := \min_{v \in W^{1,1}(\Omega),\ \nabla v \in \mathrm{BV}(\Omega)} \alpha\, \|Du - \nabla v\|_{\mathcal{M}(\Omega;\mathbb{R}^2)} + \beta\, \|D\nabla v\|_{\mathcal{M}(\Omega;\mathbb{R}^{2\times 2})}.$ | 1.3 |
On the other hand, second-order TGV [7, 8] for u ∈ BV(Ω) reads
$\mathrm{TGV}^2_{\alpha,\beta}(u) := \min_{w \in \mathrm{BD}(\Omega)} \alpha\, \|Du - w\|_{\mathcal{M}(\Omega;\mathbb{R}^2)} + \beta\, \|Ew\|_{\mathcal{M}(\Omega;\mathrm{Sym}^2(\mathbb{R}^2))}.$ | 1.4 |
Here
$\mathrm{TV}(u) := \|Du\|_{\mathcal{M}(\Omega;\mathbb{R}^2)} = \sup\left\{ \int_\Omega u \,\mathrm{div}\,\varphi \, dx \;:\; \varphi \in C_c^\infty(\Omega;\mathbb{R}^2),\ \|\varphi\|_\infty \le 1 \right\}$ | 1.5 |
stands for the total variation of u in Ω, BD(Ω) is the space of vector fields of bounded deformation on Ω, E denotes the symmetrised gradient and Sym²(ℝ²) denotes the space of symmetric tensors of order 2 with arguments in ℝ². The parameters α, β are fixed positive parameters and will constitute the arguments in the special learning problem à la (1.2) we consider in this paper. The main difference between (1.3) and (1.4) is that in (1.4) we do not generally have that w = ∇v for any function v. This results in some qualitative differences between ICTV and TGV² regularisation, compare for instance [1]. Substituting R in (1.1) by TGV² or ICTV gives the TGV² image reconstruction model and the ICTV image reconstruction model, respectively. In this paper, we only consider the case where K is the identity and d is the squared L² distance in (1.1), which corresponds to an image denoising model for removing Gaussian noise. With our choice of regulariser, the former scalar α in (1.1) has been replaced by a vector (α, β) of two parameters in (1.3) and (1.4). The entries of this vector do not only determine the overall strength of the regularisation (depending on the properties of K and the noise level), but also balance between the different orders of regularity of the function u, and their choice is indeed crucial for the image reconstruction result. A large β gives regularised solutions that are close to TV regularised reconstructions, compare Fig. 1, while a large α results in TV²-type solutions, that is solutions regularised with the TV of the gradient [27, 39], compare Fig. 2. With the approach described in the next section, we propose to learn these parameters optimally, in particular optimally for particular types of images.
Fig. 1.
Effect of β on TGV² denoising with optimal α
Fig. 2.
Effect of choosing α too large in TGV² denoising
For the existence analysis of an optimal solution as well as for the derivation of an optimality system for the corresponding learning problem (1.2), we will consider a smoothed version of the constraint problem (1.1)—which is the one in fact used in the numerics. That is, we replace R(u)—being TV, TGV² or ICTV in this paper—by a Huber-regularised version and add an elliptic regularisation term with a small weight to (1.1). In this setting, and under the special assumption of box constraints on α and β, we provide a simple existence proof for an optimal solution. A more general existence result, which holds also for the original non-smooth problem and does not require box constraints, is derived in [19], and we refer the reader to this paper for a more sophisticated analysis of the structure of solutions.
A main challenge in the setup of such a learning approach is to decide what is the best way to measure the fitness (optimality) of the model. In our setting, this amounts to choosing an appropriate distance F in (1.2) that measures the fitness of reconstructed images to the ‘perfect’, noise-free images in an appropriate training set. We have to formalise what we mean by an optimal reconstruction model. Classically, the difference between the original, noise-free image f₀ and its regularised version u_{α,β} is computed with an L² cost functional
$\tfrac{1}{2}\, \|u_{\alpha,\beta} - f_0\|_{L^2(\Omega)}^2,$ | 1.6 |
which is closely related to the PSNR quality measure. Apart from this, we propose in this paper an alternative cost functional based on a Huberised total variation cost
$\int_\Omega \big|\nabla (u_{\alpha,\beta} - f_0)\big|_{\gamma}\, dx,$ | 1.7 |
where the Huber regularisation will be defined later on in Definition 2.1. We will see that the choice of this cost functional is indeed crucial for the qualitative properties of the reconstructed image.
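On a discrete grey-scale image, the two candidate cost functionals can be evaluated along the following lines. This is only a sketch: it assumes unit grid spacing and forward differences, uses the Huber convention of Definition 2.1 below, and the value of `gamma` is an arbitrary illustrative choice rather than the one used in our experiments.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2D image (unit grid spacing assumed)."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def huber(t, gamma):
    """Huber regularisation of the non-negative scalar field t (cf. Definition 2.1)."""
    return np.where(t >= 1.0 / gamma, t - 1.0 / (2.0 * gamma), 0.5 * gamma * t ** 2)

def cost_l2(u, f0):
    """Squared L2 cost (1.6) between reconstruction u and ground truth f0."""
    return 0.5 * np.sum((u - f0) ** 2)

def cost_huber_tv(u, f0, gamma=100.0):
    """Huberised total variation cost (1.7) of the reconstruction error u - f0."""
    gx, gy = grad(u - f0)
    return np.sum(huber(np.sqrt(gx ** 2 + gy ** 2), gamma))

# Example usage on synthetic data standing in for a reconstruction and its ground truth
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
u = f0 + 0.05 * rng.standard_normal((64, 64))
print(cost_l2(u, f0), cost_huber_tv(u, f0))
```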
The proposed bilevel approach has an important indirect consequence: It establishes a basis for the comparison of the different total variation regularisers employed in image denoising tasks. In the last part of this paper, we exhaustively compare the performance of TV, TGV² and ICTV for various image datasets. The parameters are chosen optimally, according to the proposed bilevel approach, and different quality measures (such as PSNR and SSIM) are considered for the comparison. The obtained results are enlightening about when to use each of the considered regularisers. In particular, ICTV appears to behave better for images with arbitrary structure and moderate noise levels, whereas TGV² behaves better for images with large smooth areas.
Outline of the paper In Sect. 2, we state the bilevel learning problem for the two higher-order total variation regularisation models, TGV and ICTV, and prove existence of an optimal parameter pair (α, β). The bilevel optimisation problem is analysed in Sect. 3, where existence of Lagrange multipliers is proved and an optimality system, as well as a gradient formula, is derived. Based on the optimality condition, a BFGS algorithm for the bilevel learning problem is devised in Sect. 4.1. For the numerical solution of each denoising problem, an infeasible semismooth Newton method is considered. Finally, we discuss the performance of the parameter learning method by means of several examples for the denoising of natural photographs in Sect. 5. Therein, we also present a statistical analysis of how TV, ICTV and TGV regularisation compare in terms of returned image quality, carried out on 200 images from the Berkeley segmentation dataset BSDS300.
Problem Statement and Existence Analysis
We strive to develop a parameter learning method for higher-order total variation regularisation models that maximises the fit of the reconstructed images to training images simulated for an application at hand. For a given noisy image f ∈ L²(Ω), Ω ⊂ ℝ² open and bounded, we consider
$\min_{u}\; \tfrac{1}{2}\, \|u - f\|_{L^2(\Omega)}^2 + R_{\alpha,\beta}(u),$ | 2.1 |
where R_{α,β} denotes the regularisation term, weighted by the parameters α and β. We focus on TGV²,
and ICTV,
for α, β ≥ 0. For these models, we want to determine the optimal choice of (α, β), given a particular type of images and a fixed noise level. More precisely, we consider a training pair (f, f₀), where f is a noisy image corrupted by normally distributed noise with a fixed variance, and the image f₀ represents the ground truth or an image that approximates the ground truth within a desirable tolerance. Then, we determine the optimal choice of (α, β) by solving the following problem:
$\min_{(\alpha,\beta)\in[0,\infty)^2} F\big(u_{\alpha,\beta}\big),$ | 2.2 |
where F equals the L² cost (1.6) or the Huberised TV cost (1.7), and u_{α,β} for a given f solves a regularised version of the minimisation problem (2.1) that will be specified in the next section, compare problem (2.3b). This regularisation of the problem is a technical requirement for solving the bilevel problem, which will be discussed in the sequel. In contrast to learning (2.1) over finite-dimensional parameter spaces (as is the case in machine learning), we consider optimisation techniques in infinite-dimensional function spaces.
Formal Statement
Let Ω ⊂ ℝ² be an open bounded domain with Lipschitz boundary. This will be our image domain. Usually Ω = (0, w) × (0, h) for w and h the width and height of a two-dimensional image, although no such assumptions are made in this work. Our data f and f₀ are assumed to lie in L²(Ω).
In our learning problem, we look for parameters (α, β) that, for some cost functional F, solve the problem
| 2.3a |
subject to
| 2.3b |
| 2.3c |
where
Here, the regularised denoising functional amends the regularisation term in (2.1) by a Huber-regularised version of it with parameter γ > 0, and by an elliptic regularisation term with parameter μ > 0. In the case of TGV², the modified regularisation term then reads
and in the case of ICTV, we have
The Huber regularisation is defined as follows.
Definition 2.1
Given γ > 0, we define, for the Euclidean norm |g| of g ∈ ℝ², the Huber regularisation

$|g|_{\gamma} = \begin{cases} |g| - \dfrac{1}{2\gamma}, & |g| \ge \dfrac{1}{\gamma},\\[4pt] \dfrac{\gamma}{2}\,|g|^2, & |g| < \dfrac{1}{\gamma}, \end{cases}$

and its derivative, given by

$h_{\gamma}(g) = \dfrac{\gamma\, g}{\max(1,\ \gamma |g|)}.$ | 2.4 |
For the cost functional F, given noise-free data f₀ and a regularised solution u_{α,β}, we consider in particular the L² cost (1.6), as well as the Huberised total variation cost (1.7).
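As a quick numerical sanity check of Definition 2.1, the Huber function and the derivative h_γ in (2.4) can be implemented pointwise and compared against a finite-difference approximation. The snippet below is an illustration of the formulas only and plays no role in the solution algorithms of Sect. 4.

```python
import numpy as np

def huber_norm(g, gamma):
    """|g|_gamma from Definition 2.1 for a single vector g in R^2."""
    t = np.linalg.norm(g)
    return t - 1.0 / (2.0 * gamma) if t >= 1.0 / gamma else 0.5 * gamma * t ** 2

def h_gamma(g, gamma):
    """Derivative (2.4): h_gamma(g) = gamma * g / max(1, gamma * |g|)."""
    return gamma * g / max(1.0, gamma * np.linalg.norm(g))

# Central finite-difference check of (2.4) at a random point
rng = np.random.default_rng(1)
g, gamma, eps = rng.standard_normal(2), 50.0, 1e-6
fd = np.array([(huber_norm(g + eps * e, gamma) - huber_norm(g - eps * e, gamma)) / (2 * eps)
               for e in np.eye(2)])
print(np.allclose(fd, h_gamma(g, gamma), atol=1e-4))   # expected: True
```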
Remark 2.1
Please note that in our formulation of the bilevel problem (2.3), we only impose a non-negativity constraint on the parameters α and β, i.e. we do not strictly bound them away from zero. There are two reasons for this. First, for the existence analysis of the smoothed problem, the case of vanishing parameters is not critical, since compactness can be secured by the elliptic regularisation term in the functional, compare Sect. 2.2. Second, in [19], we indeed prove that even for the original non-smooth problem, under appropriate assumptions on the given data, the optimal parameters α and β are guaranteed to be strictly positive.
Existence of an Optimal Solution
The existence of an optimal solution for the learning problem (2.3) is a special case of the class of bilevel problems considered in [19], where the existence of optimal parameters is proven. For the convenience of the reader, we provide a simplified proof for the case where additional box constraints on the parameters are imposed. We start with an auxiliary lower semicontinuity result for the Huber-regularised functionals.
Lemma 2.1
Let γ > 0. Then, the Huber-regularised functional, where |·|_γ is the Huber regularisation in Definition 2.1, is lower semicontinuous with respect to weak* convergence.
Proof
Recall that for , the Huber-regularised norm may be written in dual form as
Therefore, we find that
The functional G is of the form , where is the convex conjugate of G. Now, let converge to u weakly* in . Taking a supremising sequence for this functional at any point u, we easily see lower semicontinuity by considering the sequences for each j.
Our main existence result is the following.
Theorem 2.1
We consider the learning problem (2.3) for TGV and ICTV regularisation, optimising over parameters (α, β) subject to the box constraint 0 ≤ (α, β) ≤ (ᾱ, β̄), where (ᾱ, β̄) is an arbitrary but fixed vector with positive entries that defines the box constraint on the parameter space. There exists an optimal solution for this problem for both choices of cost functional, (1.6) and (1.7).
Proof
Let be a minimising sequence. Due to the box constraints we have that the sequence is bounded in . Moreover, we get for the corresponding sequences of states that
in particular this holds for . Hence,
| 2.5 |
Exemplarily, we consider here the case of the TGV² regulariser. The proof for the ICTV regulariser can be done in a similar fashion. Inequality (2.5) in particular gives
where is the optimal w for . This gives that is uniformly bounded in and that there exists a subsequence which converges weakly in to a limit point . Moreover, strongly in and in . Using the continuity of the fidelity term with respect to strong convergence in , and the weak lower semicontinuity of the term with respect to weak convergence in and of the Huber-regularised functional even with respect to weak convergence in (cf. Lemma 2.1), we get
where in the last step we have used the boundedness of the sequence from (2.5) and the convergence of in . This shows that the limit point is an optimal solution for . Moreover, due to the weak lower semicontinuity of the cost functional F and the fact that the set is closed, we have that is optimal for (2.3).
Remark 2.2
Using the existence result in [19], in principle we could allow infinite values for α and β. This would include both TV and TV² as possible optimal regularisers in our learning problem.
The existence of solutions without elliptic regularisation is also proven in [19]. Note that here we focus on the elliptically regularised case, since this regularity is required for proving the existence of Lagrange multipliers in the next section.
Remark 2.3
In [19], it was shown that the solution map of our bilevel problem is outer semicontinuous. This implies, in particular, that the minimisers of the regularised bilevel problems converge towards the minimiser of the original one.
Lagrange Multipliers
In this section, we prove the existence of Lagrange multipliers for the learning problem (2.3) and derive an optimality system that characterises stationary points. Moreover, a gradient formula for the reduced cost functional is obtained, which plays an important role in the development of fast solution algorithms for the learning problems (see Sect. 4.1).
In what follows, all proofs are presented for the TGV² regularisation case. However, the modifications needed to cope with the ICTV model will also be commented on. Moreover, we consider throughout this section a smoother variant of the Huber regularisation, given by
with
This modified Huber function is required in order to get differentiability of the solution operator, a matter which is investigated next.
Differentiability of the Solution Operator
We recall that the denoising problem can be rewritten as
Using an elliptic regularisation, we then get
where . A necessary and sufficient optimality condition for the latter is then given by the following variational equation:
| 3.1 |
where , and
| 3.2 |
Theorem 3.1
The solution operator, which assigns to each parameter pair (α, β) the corresponding solution of the denoising problem (3.1), is Fréchet differentiable, and its derivative is characterised by the unique solution of the following linearised equation:
| 3.3 |
Proof
Thanks to the ellipticity of and the monotonicity of , the existence of a unique solution to the linearised equation follows from the Lax-Milgram theorem.
Let , where and . Our aim is to prove that Combining the equations for , y and z we get that
where . Adding and subtracting the terms
and
where and , we obtain that
Testing with and using the monotonicity of , we get that
for some generic constant . Considering the differentiability and Lipschitz continuity of , it then follows that
| 3.4 |
where stands for the norm in the space . From regularity results for second-order systems (see [24, Theorem 1, Remark 14]), it follows that
since . Inserting the latter in estimate (3.4), we finally get that
Remark 3.1
The extra regularity result for second-order systems used in the last proof, due to Gröger [24, Thm. 1, Rem. 14], relies on regularity properties of the domain Ω. The result was originally proved for smooth domains. However, the required regularity of the domain (in the sense of Gröger) may also be verified for convex Lipschitz bounded domains [17], which is precisely our image domain case.
Remark 3.2
The Fréchet differentiability proof makes use of the quasilinear structure of the variational form, making it difficult to extend to the ICTV model without further regularisation terms. For the latter, however, a Gâteaux differentiability result may be obtained using the same proof technique as in [22].
The Adjoint Equation
Next, we use the Lagrangian formalism for deriving the adjoint equations for both the TGV² and the ICTV learning problems. The existence of a solution to the adjoint equation follows from the Lax–Milgram theorem.
Defining the Lagrangian associated to the learning problem by
and taking the derivative with respect to the state variable (u, w), we get the necessary optimality condition
If F is the L² cost (1.6), then
| 3.5 |
whereas if F is the Huberised TV cost (1.7), then
| 3.6 |
Theorem 3.2
Let . There exists a unique solution to the adjoint system
| 3.7 |
The corresponding solution is called adjoint state associated to (v, w).
Proof
We have to show that the left-hand side of equation (3.7) constitutes a bilinear, continuous and coercive form on the underlying space. Linearity and continuity follow immediately. For the coercivity, note that, since h_γ is a monotone function, the corresponding terms are non-negative, yielding
Thus, coercivity holds and, using the Lax–Milgram theorem, we conclude that there exists a unique solution to the adjoint system (3.7).
Remark 3.3
For the ICTV model, it is possible to proceed formally with the Lagrangian approach. We recall that a necessary and sufficient optimality condition for the ICTV functional is given by
| 3.8 |
and the correspondent Lagrangian functional is given by
Differentiating the Lagrangian with respect to the state variables (u, v) and setting the derivative equal to zero yields
By taking successively and , the following adjoint system is obtained
| 3.9a |
| 3.9b |
Optimality Condition
Using the differentiability of the solution operator and the well-posedness of the adjoint equation, we derive next an optimality system for the characterisation of local minima of the bilevel learning problem. Besides the optimality condition itself, a gradient formula arises as byproduct, which is of importance in the design of solution algorithms for the learning problems.
Theorem 3.3
Let (α, β) be a locally optimal solution of problem (2.3). Then there exist Lagrange multipliers such that the following system holds:
| 3.10a |
| 3.10b |
| 3.10c |
| 3.10d |
| 3.10e |
| 3.10f |
Proof
Consider the reduced cost functional obtained by inserting the solution operator of the denoising problem into F. The bilevel optimisation problem can then be formulated as
where C corresponds to the positive orthant in ℝ². From [47, Thm. 3.1], there exist multipliers such that
By taking the derivative with respect to (α, β) and denoting by z the solution to the linearised equation (3.3), we get, together with the adjoint equation (3.10b), that
which, taking into account the linearised equation, yields
| 3.11 |
Altogether we proved the result.
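The mechanics behind the gradient formula, one state solve followed by one adjoint solve, can be illustrated on a smooth toy problem. The sketch below uses a quadratic (H¹-type) lower-level model in place of the Huber-regularised TGV² problem, so the operators are not those of (3.10); it only demonstrates how the adjoint state turns the sensitivity of the solution operator into a cheap gradient, verified here against finite differences.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
f0 = np.sin(np.linspace(0.0, 2 * np.pi, n))            # ground truth
f = f0 + 0.1 * rng.standard_normal(n)                  # noisy data

# 1D discrete Laplacian standing in for the elliptic part of the model
L = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

def reduced_cost_and_gradient(alpha):
    """j(alpha) = 0.5*||u(alpha) - f0||^2 with u solving (I + alpha*L) u = f,
    and its gradient computed via one adjoint solve."""
    A = np.eye(n) + alpha * L
    u = np.linalg.solve(A, f)                          # state equation
    p = np.linalg.solve(A.T, u - f0)                   # adjoint equation
    return 0.5 * np.sum((u - f0) ** 2), -p @ (L @ u)   # cost and gradient formula

alpha, eps = 0.3, 1e-6
j, dj = reduced_cost_and_gradient(alpha)
fd = (reduced_cost_and_gradient(alpha + eps)[0]
      - reduced_cost_and_gradient(alpha - eps)[0]) / (2 * eps)
print(np.isclose(dj, fd, rtol=1e-4))                   # adjoint gradient matches FD
```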
Remark 3.4
From the existence results (see Remark 2.2), we actually know that, under some assumptions on F, the optimal parameters α and β are strictly greater than zero. This implies that the multipliers may be zero, and the problem becomes an unconstrained one. This plays an important role in the design of solution algorithms, since only a mild treatment of the constraints has to be taken into account, as shown in Sect. 4.
Numerical Algorithms
In this section, we propose a second-order quasi-Newton method for the solution of the learning problem with scalar regularisation parameters. The algorithm is based on a BFGS update, preserving the positivity of the iterates through the line search strategy and updating the BFGS matrix cyclically, depending on the satisfaction of the curvature condition. For the solution of the lower level problem, a semismooth Newton method with a properly modified Jacobi matrix is considered. Moreover, warm initialisation strategies have to be taken into account in order to obtain convergence for the two-parameter problems.
BFGS Algorithm
Thanks to the gradient characterisation obtained in Theorem 3.3, we next devise a BFGS algorithm to solve the bilevel learning problems with higher-order regularisers. We employ a few technical tricks to ensure convergence of the classical method. In particular, we limit the step length so that each iterate moves at most a fixed fraction of the distance towards the boundary of the positive orthant. As shown in [19], the solution is in the interior for the regularisation and cost functionals we are interested in.
Moreover, the good behaviour of the BFGS method depends upon the BFGS matrix staying positive definite. This would be ensured by the Wolfe conditions, but because of our step length limitation, the curvature condition is not necessarily satisfied. (The Wolfe conditions are guaranteed to be satisfied for some step length if the domain is unbounded, but the range of steps satisfying the criterion may lie beyond our maximum step length, and the criterion is not necessarily satisfied closer to the current point.) Instead, we skip the BFGS update if the curvature is negative; these safeguards are sketched in code after Algorithm 4.1 below.
Overall, our learning algorithm may be written as follows:
Algorithm 4.1
(BFGS for denoising parameter learning) Pick Armijo line search constant c, and target residual . Pick initial iterate . Solve the denoising problem (2.3b) for , yielding . Initialise . Set , and iterate the following steps:
Solve the adjoint equation (3.10b) for , and calculate from (3.11).
- If , do the following:
- Set , and .
- Perform the BFGS update
- Compute from
- Initialise , where
Repeat the following:- Let , and solve the denoising problem (2.3b) for , yielding .
- If the residual , do the following:
-
(i)If over all tried, choose the minimiser, set , , and continue from Step 5.
-
(ii)Otherwise end the algorithm with solution .
-
(i)
- Otherwise, if Armijo condition holds, set , , and continue from Step 5.
- In all other cases, set and continue from Step 4a.
If the residual , end the algorithm with . Otherwise continue from Step 1 with .
Step (4) ensures that the iterates remain feasible, without making use of a projection step.
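The following sketch reimplements the main safeguards of Algorithm 4.1 in a schematic way: the quasi-Newton direction, the step-length limitation that keeps the iterates a fixed fraction away from the boundary of the positive orthant, Armijo backtracking, and the cyclic (skipped) BFGS update when the curvature condition fails. The objective `j` and gradient `grad_j` stand in for the reduced cost and its adjoint-based gradient; the toy quadratic at the end is purely for demonstration.

```python
import numpy as np

def bfgs_positive(j, grad_j, x0, c=1e-4, frac=0.9, tol=1e-8, max_iter=100):
    """Sketch of Algorithm 4.1's safeguards: BFGS with positivity-preserving
    step lengths, Armijo backtracking and skipped updates on negative curvature."""
    x, B = np.asarray(x0, dtype=float), np.eye(len(x0))
    g = grad_j(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -np.linalg.solve(B, g)               # quasi-Newton direction
        # Limit the step so we move at most a fraction `frac` of the distance
        # towards the boundary of the positive orthant (iterates stay positive).
        sigma = 1.0
        for di, xi in zip(d, x):
            if di < 0.0:
                sigma = min(sigma, -frac * xi / di)
        while j(x + sigma * d) > j(x) + c * sigma * g.dot(d):   # Armijo condition
            sigma *= 0.5
            if sigma < 1e-12:
                break
        x_new = x + sigma * d
        g_new = grad_j(x_new)
        s, y = x_new - x, g_new - g
        if s.dot(y) > 0.0:                        # curvature condition satisfied:
            B = (B - np.outer(B @ s, B @ s) / (s @ B @ s)
                   + np.outer(y, y) / y.dot(s))   # standard BFGS update of B
        # otherwise skip the update (keep the previous B), as in Algorithm 4.1
        x, g = x_new, g_new
    return x

# Toy usage on a smooth convex function over the positive orthant
jfun = lambda x: (x[0] - 0.1) ** 2 + 10 * (x[1] - 0.05) ** 2
gfun = lambda x: np.array([2 * (x[0] - 0.1), 20 * (x[1] - 0.05)])
print(bfgs_positive(jfun, gfun, [1.0, 1.0]))
```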
An Infeasible Semismooth Newton Method
In this section, we consider semismooth Newton methods for solving the TGV² and the ICTV denoising problems. Semismooth Newton methods feature a local superlinear convergence rate and have previously been successfully applied to image processing problems (see, e.g. [21, 29, 32]). The primal-dual algorithm we use here is an extension of the method proposed in [29] to the case of higher-order regularisers.
In variational form, the denoising problem can be written as
or, in general abstract primal-dual form, as
| 4.1a |
| 4.1b |
where is a second-order linear elliptic operator, , are linear operators acting on y and , correspond to the dual multipliers.
Let us set
Let us also define the diagonal application by
We may derive being defined by
Then (4.1a), (4.1b) may be written as
Linearising, we obtain the system
| SSN-1 |
where
The semismooth Newton method solves (SSN-1) at a current iterate . It then updates
| SSN-2 |
for a suitable step length , allowing to become infeasible in the process. That is, it may hold that , which may lead to non-descent directions. In order to globalise the method, one projects
| SSN-3 |
in the building of the Jacobi matrix. Following [29, 42], it can be shown that a discrete version of the method (SSN-1)–(SSN-3) converges globally, and locally superlinearly near a point where the subdifferentials of the operator corresponding to (4.1) are non-singular. Further dampening as in [29] guarantees local superlinear convergence at any point. We do not present the proof, as going into the discretisation and dampening details would expand this work considerably.
Remark 4.1
The system (SSN-1) can be further simplified, which is crucial to obtain acceptable performance. Indeed, observe that B is invertible, so we may solve for the corresponding variable from
| 4.2 |
Thus, we may eliminate this variable from (SSN-1) and only solve a reduced system for the remaining unknowns. Finally, we recover the eliminated variable from (4.2).
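The reduction of Remark 4.1 is ordinary block elimination: since the block acting on the dual update is invertible, the Newton system can be condensed onto the remaining unknowns via a Schur complement. The following generic sketch with random dense blocks (not the actual SSN matrices) verifies the algebra.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
A = rng.standard_normal((n, n)) + 5 * np.eye(n)   # primal block
B = rng.standard_normal((m, m)) + 5 * np.eye(m)   # dual block, assumed invertible
C = rng.standard_normal((n, m))
D = rng.standard_normal((m, n))
r1, r2 = rng.standard_normal(n), rng.standard_normal(m)

# Full Newton-type system:  [A C; D B] [dy; dphi] = [r1; r2]
K = np.block([[A, C], [D, B]])
full = np.linalg.solve(K, np.concatenate([r1, r2]))

# Reduced system: eliminate dphi = B^{-1}(r2 - D dy), cf. (4.2) and Remark 4.1
S = A - C @ np.linalg.solve(B, D)                 # Schur complement of B
dy = np.linalg.solve(S, r1 - C @ np.linalg.solve(B, r2))
dphi = np.linalg.solve(B, r2 - D @ dy)

print(np.allclose(full, np.concatenate([dy, dphi])))   # expected: True
```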
For the denoising sub-problem (2.3b), we use the method (SSN-1)–(SSN-3) with the reduced system matrix of Remark 4.1. Here, we denote by y in the case of TGV the parameters
and in the case of ICTV
For the calculation of the step length, we use an Armijo line search. We end the SSN iterations when the norm of the residual falls below a prescribed tolerance.
Warm Initialisation
In our numerical experimentation, we generally found Algorithm 4.1 to perform well for learning a single regularisation parameter for denoising, as was done in [22]. For learning the two (or even more) regularisation parameters of the higher-order models, we found that a warm initialisation is needed to obtain convergence. More specifically, we use the single-parameter problem as an aid for discovering both the initial iterate and the initial BFGS matrix. This is outlined in the following algorithm:
Algorithm 4.2
(BFGS initialisation for parameter learning) Pick a heuristic factor . Then do the following:
Solve the corresponding problem for using Algorithm 4.1. This yields optimal denoising parameter , as well as the BFGS estimate for .
Run Algorithm 4.1 for with initialisation , and initial BFGS matrix .
With , we pick , where the original discrete image has pixels. This corresponds to the heuristic [2, 44] that if or 256, and the discrete image is mapped into the corresponding domain directly (corresponding to spatial step size of one in the discrete gradient operator), then tends to be a good choice. We will later verify this through the use of our algorithms. Now, if is rescaled to , i.e. , then with and , we have the theoretical equivalence
| 4.3 |
| 4.4 |
This introduces the factor between rescaled , .
Experiments
In this section, we present some numerical experiments to verify the theoretical properties of the bilevel learning problems and the efficiency of the proposed solution algorithms. In particular, we exhaustively compare the performance of the new proposed cost functional with respect to well-known quality measures, showing a better behaviour of the new cost for the chosen tested images. The performance of the proposed BFGS algorithm, combined with the semismooth Newton method for the lower level problem, is also examined.
Moreover, on basis of the learning setting proposed, a thorough comparison between and is carried out. The use of higher-order regularisers in image denoising is rather recent, and the question on whether or ICTV performs better has been around. We target that question and, on basis of the bilevel learning approach, we are able to give some partial answers.
Gaussian Denoising
We tested Algorithms 4.1 and 4.2 for Gaussian denoising parameter learning on various images. Here we report the results for two images: the parrot image in Fig. 4a and the geometric image in Fig. 5. We applied synthetic noise to the original images, such that the PSNR of the noisy parrot image is 24.7 and the PSNR of the noisy geometric image is 24.8.
Fig. 4.
Optimal denoising results for the parrot test image
Fig. 5.
Optimal denoising results for the geometric test image
In order to learn the regularisation parameter for , we picked initial . For , initialisation by was used as in Algorithm 4.1. We chose the other parameters of Algorithm 4.1 as , , , and . For the SSN denoising method, the parameters and were chosen.
We have included results for both the L² cost functional (1.6) and the Huberised total variation cost functional (1.7). The learning results are reported in Table 1 for the parrot image and in Table 2 for the geometric image. The denoising results with the discovered parameters are shown in Figs. 4 and 5. We report the resulting optimal parameter values, the cost functional value, PSNR, SSIM [46], as well as the number of iterations taken by the outer BFGS method.
Table 1.
Quantified results for the parrot image ()
| Denoise | Cost | Initial (,) | Result (*, *) | Cost | SSIM | PSNR | Its. | Fig. |
|---|---|---|---|---|---|---|---|---|
| TGV | (0.069/, 0.051/ ) | 6.615 | 0.897 | 31.720 | 12 | 4c |
| TGV | (0.058/, 0.041/) | 6.412 | 0.890 | 31.992 | 11 | 4d |
| ICTV | (0.068/ , 0.051/) | 6.656 | 0.895 | 31.667 | 16 | 4e | ||
| ICTV | (0.051/, 0.041/) | 6.439 | 0.887 | 31.954 | 7 | 4f | ||
| TV | 0.057/ | 6.944 | 0.887 | 31.298 | 10 | 4g | ||
| TV | 0.042/ | 6.623 | 0.879 | 31.710 | 12 | 4h |
Table 2.
Quantified results for the synthetic image ()
| Denoise | Cost | Initial | Result | Value | SSIM | PSNR | Its. | Fig. |
|---|---|---|---|---|---|---|---|---|
| TGV | (0.453/, 0.071/) | 3.769 | 0.989 | 36.606 | 17 | 5c | ||
| TGV | (0.307/, 0.055/) | 3.603 | 0.986 | 36.997 | 19 | 5d | ||
| ICTV | (0.505/, 0.103/) | 4.971 | 0.970 | 34.201 | 23 | 5e | ||
| ICTV | (0.056/, 0.049/) | 3.947 | 0.965 | 36.206 | 7 | 5f | ||
| TV | 0.136/ | 5.521 | 0.966 | 33.291 | 6 | 5g | ||
| TV | 0.052/ | 4.157 | 0.948 | 35.756 | 7 | 5h |
Our first observation is that all approaches successfully learn a denoising parameter that gives a good-quality denoised image. Secondly, we observe that the Huberised gradient cost functional (1.7) performs visually, and in terms of SSIM, significantly better for parameter learning than the L² cost functional (1.6). In terms of PSNR, the roles are reversed, as is to be expected, since the L² cost is equivalent to PSNR. This again confirms that PSNR is a poor quality measure for images. In some cases, there is no significant difference between the two cost functionals in terms of visual quality, although the PSNR and SSIM differ.
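For reference, the quality measures reported in the tables can be computed, for instance, with scikit-image; note that SSIM implementations differ in details such as window size, so the numbers below need not coincide exactly with ours.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(u, f0, data_range=1.0):
    """PSNR and SSIM of a reconstruction u against the ground truth f0."""
    return (peak_signal_noise_ratio(f0, u, data_range=data_range),
            structural_similarity(f0, u, data_range=data_range))

# Example: quality of a noisy image before any denoising
rng = np.random.default_rng(3)
f0 = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))     # synthetic smooth image
f = np.clip(f0 + 0.1 * rng.standard_normal(f0.shape), 0.0, 1.0)
print("PSNR %.2f dB, SSIM %.3f" % quality(f, f0))
```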
We also observe that the optimal parameters generally satisfy the ratio heuristic discussed in connection with the warm initialisation in Sect. 4. As we can observe from Figs. 4 and 5, this optimal parameter choice also avoids the staircasing effect that can be observed in the TV results.
In Fig. 3, we have plotted as a red star the discovered regularisation parameters reported in Fig. 4. Studying the location of the red star, we may conclude that Algorithms 4.1 and 4.2 manage to find a nearly optimal parameter in very few BFGS iterations.
Fig. 3.

Cost functional value versus the regularisation parameters for denoising of the parrot test image, for both the L² and the Huberised TV cost functionals. The illustrations are contour plots of the cost functional value versus the parameters
Statistical Testing
To obtain a statistically significant outlook on the performance of the different regularisers and cost functionals, we made use of the Berkeley segmentation dataset BSDS300 [36], displayed in Fig. 6. We resized each image to 128 pixels on its shortest edge and took the top left square of the image. To this dataset, we applied pixelwise Gaussian noise of variance 2, 10, and 20. We tested the performance of both cost functionals, (1.6) and (1.7), as well as the TV, ICTV, and TGV² regularisers on this dataset, for all noise levels. In the first instance, reported in Figs. 7, 8, 9 and 10 (for two of the noise levels only), and Tables 3, 4 and 5, we applied the proposed bilevel learning model on each image individually, to learn the optimal parameters specifically for that image and a corresponding noisy image, for each of the noise levels separately. For the algorithm, we use the same parametrisation as presented in Sect. 5.1.
Fig. 6.
The 200 images of the Berkeley segmentation dataset BSDS300 [36], cropped to be square, keeping the top left corner, and resized to 128 × 128 pixels
Fig. 7.
Ordering of regularisers with individual learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Fig. 8.
Ordering of regularisers with individual learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Fig. 9.
Ordering of regularisers with individual learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Fig. 10.
Ordering of regularisers with individual learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Table 3.
Regulariser performance with individual learning, L² and Huberised TV costs, and noise variance 2; BSDS300 dataset, resized
| SSIM | PSNR | Value | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | Std | Med | Best | Mean | Std | Med | Best | Mean | Std | Med | Best | |
| Noisy data | 0.978 | 0.015 | 0.981 | 0 | 41.56 | 0.86 | 41.95 | 0 | 2.9E | 3.1E | 2.9E | 0 |
| TV | 0.988 | 0.005 | 0.989 | 1 | 42.57 | 1.10 | 42.46 | 5 | 2.4E | 3.7E | 2.5E | 1 |
| ICTV | 0.989 | 0.005 | 0.990 | 141 | 42.74 | 1.16 | 42.62 | 143 | 2.3E | 3.9E | 2.4E | 137 |
| TGV | 0.989 | 0.005 | 0.989 | 58 | 42.70 | 1.17 | 42.55 | 52 | 2.4E | 4.0E | 2.5E | 62 |
| 95 % t test | ||||||||||||
| TV | 0.988 | 0.005 | 0.988 | 2 | 42.64 | 1.14 | 42.50 | 2 | 0.41 | 0.08 | 0.43 | 2 |
| ICTV | 0.988 | 0.005 | 0.989 | 142 | 42.79 | 1.18 | 42.64 | 148 | 0.39 | 0.08 | 0.41 | 148 |
| TGV | 0.988 | 0.005 | 0.989 | 56 | 42.76 | 1.19 | 42.58 | 50 | 0.40 | 0.08 | 0.42 | 50 |
| 95 % t test | ||||||||||||
Table 4.
Regulariser performance with individual learning, L² and Huberised TV costs, and noise variance 10; BSDS300 dataset, resized
| SSIM | PSNR | Value | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | Std | Med | Best | Mean | Std | Med | Best | Mean | Std | Med | Best | |
| Noisy data | 0.731 | 0.120 | 0.744 | 0 | 27.72 | 0.88 | 28.09 | 0 | 1.4E | 2.5E | 1.4E | 0 |
| TV | 0.898 | 0.036 | 0.900 | 4 | 31.28 | 1.63 | 30.97 | 8 | 7.3E | 2.2E | 7.3E | 1 |
| ICTV | 0.906 | 0.034 | 0.909 | 139 | 31.54 | 1.68 | 31.21 | 142 | 7.1E | 2.2E | 7.1E | 121 |
| TGV | 0.905 | 0.035 | 0.907 | 57 | 31.47 | 1.72 | 31.10 | 50 | 7.1E | 2.2E | 7.1E | 78 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV > TGV TV | |||||||||
| TV | 0.897 | 0.033 | 0.898 | 9 | 31.54 | 1.76 | 31.15 | 2 | 5.52 | 1.89 | 5.51 | 2 |
| ICTV | 0.903 | 0.032 | 0.903 | 131 | 31.72 | 1.76 | 31.33 | 148 | 5.30 | 1.81 | 5.35 | 148 |
| TGV | 0.902 | 0.033 | 0.903 | 60 | 31.67 | 1.80 | 31.28 | 50 | 5.38 | 1.87 | 5.39 | 50 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV > TGV TV | |||||||||
Table 5.
Regulariser performance with individual learning, L² and Huberised TV costs, and noise variance 20; BSDS300 dataset, resized
| SSIM | PSNR | Value | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | Std | Med | Best | Mean | Std | Med | Best | Mean | Std | Med | Best | |
| Noisy data | 0.505 | 0.143 | 0.516 | 0 | 21.80 | 0.92 | 22.14 | 0 | 2.8E | 7.9E | 2.8E | 0 |
| TV | 0.795 | 0.063 | 0.799 | 7 | 27.27 | 1.64 | 27.02 | 11 | 1.0E | 3.5E | 9.7E | 1 |
| ICTV | 0.810 | 0.061 | 0.814 | 120 | 27.52 | 1.66 | 27.24 | 125 | 9.7E | 3.4E | 9.6E | 79 |
| TGV | 0.808 | 0.062 | 0.814 | 73 | 27.50 | 1.74 | 27.15 | 64 | 9.8E | 3.5E | 9.5E | 120 |
| 95 % t test | ICTV > TGV TV | ICTV, TGV TV | ICTV, TGV TV | |||||||||
| TV | 0.802 | 0.056 | 0.804 | 8 | 27.70 | 1.93 | 27.28 | 0 | 13.65 | 5.53 | 13.14 | 0 |
| ICTV | 0.811 | 0.056 | 0.816 | 126 | 27.86 | 1.91 | 27.45 | 138 | 13.14 | 5.22 | 12.62 | 138 |
| TGV | 0.810 | 0.057 | 0.814 | 66 | 27.83 | 1.94 | 27.41 | 62 | 13.28 | 5.38 | 12.77 | 62 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV > TGV TV | |||||||||
The figures display the noisy images and indicate by colour coding the best result as judged by the structural similarity measure SSIM [46], PSNR and the objective function value (the L² or Huberised TV cost). These criteria are, respectively, the top, middle and bottom rows of colour-coding squares. A red square indicates that TV performed the best, a green square indicates that ICTV performed the best and a blue square indicates that TGV² performed the best—this is naturally for the optimal parameters for the corresponding regulariser and cost functional discovered by our algorithms.
In the tables, we report the information in a more concise numerical fashion, indicating the mean, standard deviation and median for all the different criteria (SSIM, PSNR and cost functional value), as well as the number of images for which each regulariser performed the best. We recall that SSIM is normalised to [0, 1], with higher values better. Moreover, we perform a statistical 95 % paired t-test on each of the criteria and each pair of regularisers, to see whether the pair can be ordered. If so, this is indicated in the last row of each of the tables.
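The 95 % paired t-test can be reproduced, for example, with scipy; the snippet below compares per-image SSIM scores of two regularisers on synthetic numbers, which merely illustrate the procedure.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
# Per-image SSIM scores of two regularisers on the same 200 images (synthetic data).
ssim_ictv = np.clip(0.90 + 0.03 * rng.standard_normal(200), 0.0, 1.0)
ssim_tgv = np.clip(ssim_ictv - 0.002 + 0.005 * rng.standard_normal(200), 0.0, 1.0)

t, p = ttest_rel(ssim_ictv, ssim_tgv)
mean_diff = np.mean(ssim_ictv - ssim_tgv)
if p < 0.05:
    better = "ICTV" if mean_diff > 0 else "TGV"
    print(f"{better} significantly better (p = {p:.3g})")
else:
    print(f"no significant ordering at the 95% level (p = {p:.3g})")
```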
Overall, studying the t-test and other data, the ordering of the regularisers appears to be ICTV ≥ TGV² ≥ TV.
This is rather surprising, as in many specific examples TGV² has been observed to perform better than ICTV, see Figs. 4 and 5, as well as [1, 5]. Only when the noise is high does TGV² appear to come on par with ICTV, compare Fig. 9 and Table 5.
A more detailed study of the results in Figs. 7, 8, 9 and 10 seems to indicate that TGV² performs better than ICTV when the image contains large smooth areas, whereas ICTV generally seems to perform better for images with more complicated and varying content. This observation agrees with the results in Figs. 4 and 5, as well as [1, 5], where the images are of the former type.
One possible reason for the better average performance of ICTV could be that TGV² has more degrees of freedom—in ICTV we essentially constrain w = ∇v for some function v—and therefore TGV² overfits to the noisy data, until the noise level becomes so high that overfitting would be too strong for any parameter choice. To see whether this is true, we also performed batch learning, learning a single set of parameters for all images with the same noise level. That is, we studied the model
with
where the f_k are the noisy images with the same noise level, and the f_{0,k} the corresponding original noise-free images.
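Algorithmically, batch learning only changes the upper-level objective: a single parameter set is fitted against the sum of the costs over all training pairs with the same noise level. Reusing the toy quadratic lower-level model from the sketch in the introduction (again an illustration, not the TGV² solver), this reads:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n, N = 200, 20
t = np.linspace(0.0, 1.0, n)
F0 = [np.sin(2 * np.pi * (k + 1) * t / 4) for k in range(N)]          # clean signals
F = [f0 + 0.2 * rng.standard_normal(n) for f0 in F0]                  # same noise level

L = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0

def batch_cost(alpha):
    """Sum of upper-level costs over all training pairs, for one shared alpha."""
    A = np.eye(n) + alpha * L
    return sum(0.5 * np.sum((np.linalg.solve(A, f) - f0) ** 2) for f, f0 in zip(F, F0))

res = minimize_scalar(batch_cost, bounds=(1e-6, 10.0), method="bounded")
print("batch-learned alpha:", res.x)
```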
The results are shown in Figs. 11, 12, 13 and 14 (for two of the noise levels only), and Tables 6, 7 and 8. The results are still roughly the same as with individual learning. Again, only with high noise, in Table 8, does TGV² not lose to ICTV. Another interesting observation is that TV starts to be frequently the best regulariser for individual images, although it still statistically does worse than either ICTV or TGV².
Fig. 11.
Ordering of regularisers with batch learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Fig. 12.
Ordering of regularisers with batch learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Fig. 13.
Ordering of regularisers with batch learning, cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Fig. 14.
Ordering of regularisers with batch learning, , cost, and noise variance , on the 200 images of the BSDS300 dataset, resized. Best regulariser: red TV, green ICTV, blue TGV; top SSIM, middle PSNR, bottom objective value
Table 6.
Regulariser performance with batch learning, L² and Huberised TV costs, noise variance 2; BSDS300 dataset, resized
| SSIM | PSNR | Value | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | Std | Med | Best | Mean | Std | Med | Best | Mean | Std | Med | Best | |
| Noisy data | 0.978 | 0.015 | 0.981 | 16 | 41.56 | 0.86 | 41.95 | 24 | 2.9E | 3.1E | 2.9E | 16 |
| TV | 0.987 | 0.006 | 0.988 | 23 | 42.43 | 1.07 | 42.37 | 21 | 2.5E | 3.4E | 2.5E | 20 |
| ICTV | 0.988 | 0.006 | 0.989 | 119 | 42.56 | 1.06 | 42.51 | 135 | 2.4E | 3.5E | 2.5E | 113 |
| TGV | 0.987 | 0.006 | 0.989 | 42 | 42.51 | 1.09 | 42.44 | 20 | 2.4E | 3.6E | 2.5E | 51 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV > TGV TV | |||||||||
| TV | 0.986 | 0.007 | 0.987 | 13 | 42.46 | 0.95 | 42.43 | 17 | 0.42 | 0.07 | 0.43 | 17 |
| ICTV | 0.987 | 0.007 | 0.988 | 139 | 42.57 | 0.95 | 42.56 | 128 | 0.41 | 0.07 | 0.42 | 128 |
| TGV | 0.987 | 0.007 | 0.988 | 38 | 42.53 | 0.97 | 42.51 | 40 | 0.41 | 0.07 | 0.42 | 40 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV > TGV TV | |||||||||
Table 7.
Regulariser performance with batch learning, L² and Huberised TV costs, noise variance 10; BSDS300 dataset, resized
| SSIM | PSNR | Value | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | Std | Med | Best | Mean | Std | Med | Best | Mean | Std | Med | Best | |
| Noisy data | 0.731 | 0.120 | 0.744 | 8 | 27.72 | 0.88 | 28.09 | 2 | 1.4E | 2.5E | 1.4E | 0 |
| TV | 0.893 | 0.035 | 0.897 | 23 | 31.24 | 1.87 | 30.94 | 23 | 7.5E | 2.2E | 7.3E | 18 |
| ICTV | 0.897 | 0.034 | 0.902 | 134 | 31.36 | 1.81 | 31.11 | 150 | 7.4E | 2.2E | 7.2E | 107 |
| TGV | 0.896 | 0.035 | 0.901 | 35 | 31.31 | 1.88 | 31.01 | 25 | 7.4E | 2.3E | 7.2E | 75 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV, TGV TV | |||||||||
| TV | 0.887 | 0.035 | 0.889 | 29 | 31.31 | 1.50 | 31.15 | 25 | 5.72 | 1.91 | 5.51 | 25 |
| ICTV | 0.889 | 0.036 | 0.893 | 127 | 31.41 | 1.44 | 31.28 | 131 | 5.57 | 1.83 | 5.37 | 131 |
| TGV | 0.888 | 0.035 | 0.891 | 44 | 31.38 | 1.50 | 31.20 | 44 | 5.64 | 1.90 | 5.44 | 44 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | ICTV > TGV TV | |||||||||
Table 8.
Regulariser performance with batch learning, L² and Huberised TV costs, noise variance 20; BSDS300 dataset, resized
| SSIM | PSNR | Value | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | Std | Med | Best | Mean | Std | Med | Best | Mean | Std | Med | Best | |
| Noisy data | 0.505 | 0.143 | 0.516 | 4 | 21.80 | 0.92 | 22.14 | 1 | 2.8E | 7.9E | 2.8E | 0 |
| TV | 0.789 | 0.067 | 0.798 | 18 | 27.37 | 2.13 | 26.98 | 24 | 1.0E | 3.7E | 9.8E | 14 |
| ICTV | 0.795 | 0.065 | 0.804 | 139 | 27.46 | 2.10 | 27.05 | 141 | 1.0E | 3.6E | 9.6E | 91 |
| TGV | 0.794 | 0.066 | 0.804 | 39 | 27.44 | 2.12 | 27.04 | 34 | 1.0E | 3.7E | 9.6E | 95 |
| 95 % t test | ICTV > TGV TV | ICTV > TGV TV | TGV ICTV > TV | |||||||||
| TV | 0.786 | 0.053 | 0.790 | 31 | 27.50 | 1.71 | 27.27 | 33 | 14.11 | 5.78 | 13.16 | 33 |
| ICTV | 0.790 | 0.054 | 0.790 | 123 | 27.56 | 1.64 | 27.37 | 119 | 13.84 | 5.54 | 12.75 | 119 |
| TGV | 0.789 | 0.053 | 0.793 | 46 | 27.55 | 1.70 | 27.33 | 48 | 13.93 | 5.73 | 12.95 | 48 |
| 95 % t test | ICTV, TGV TV | ICTV, TGV TV | ICTV > TGV TV | |||||||||
For the first image of the dataset, ICTV does better than TGV² in all of Figs. 7, 8, 9, 10, 11, 12, 13 and 14, while for the second image, the situation is reversed. We have highlighted these two images in Figs. 15, 16, 17 and 18, for both of the displayed noise levels. In the case where ICTV does better, hardly any difference can be observed by eye, while for the second image, TGV² clearly exhibits less staircasing in the smooth areas of the image, especially at the higher noise level.
Fig. 15.
Image for which performs better than ,
Fig. 16.
Image for which performs better than ,
Fig. 17.
Image for which performs better than ,
Fig. 18.
Image for which performs better than ,
Based on this study, it therefore seems that ICTV is the most reliable regulariser of the ones tested when the type of image being processed is unknown and a good SSIM, PSNR or cost functional value is desired. But, as can be observed for individual images, it can exhibit artefacts within large smooth areas that are avoided by the use of TGV².
The Choice of Cost Functional
The L² cost functional naturally obtains better PSNR than the Huberised TV cost, as the former is equivalent to PSNR. Comparing the results for the two cost functionals in Tables 3, 4 and 5, we may however observe that for low noise levels, and generally for batch learning, the Huberised TV cost attains better (higher) SSIM. Since SSIM captures the visual quality of images better than PSNR [46], this recommends the use of our novel total variation cost functional. Of course, one might attempt to optimise the SSIM directly. This is, however, a non-convex functional, which would pose additional numerical challenges that are avoided by the convex total variation cost.
Conclusion and Outlook
In this paper, we propose a bilevel optimisation approach in function space for learning the optimal choice of parameters in higher-order total variation regularisation. We present a rigorous analysis of this optimisation problem as well as a numerical discussion in the context of image denoising.
Analytically, we obtain the existence results for the bilevel optimisation problem and prove the Fréchet differentiability of the solution operator. This leads to the existence of Lagrange multipliers and a first-order optimality system characterising optimal solutions. In particular, the existence of an adjoint state allows to obtain a cost functional gradient formula which is of importance in the design of efficient solution algorithms.
We make use of the bilevel learning approach, and the theoretical findings, to compare the performance—in terms of returned image quality—of TV, ICTV and TGV regularisation. A statistical analysis, carried out on a dataset of 200 images, suggests that ICTV performs slightly better than TGV, and both perform better than TV, on average. For denoising of images with a high noise level, ICTV and TGV score comparably well. For images with large smooth areas, TGV performs better than ICTV.
Moreover, we propose a new cost functional for the bilevel learning problem, which exhibits interesting theoretical properties and behaves better than the PSNR-related L² cost used previously in the literature. This study raises the question of other, alternative cost functionals. For instance, one could be tempted to use the SSIM as cost, but its non-convexity might present several analytical and numerical difficulties. The new cost functional proposed in this paper turns out to be a good compromise between image quality measure and analytically tractable cost term.
Acknowledgments
This research has been supported by King Abdullah University of Science and Technology (KAUST) Award No. KUK-I1-007-43, EPSRC grants Nr. EP/J009539/1 “Sparse & Higher-order Image Restoration” and Nr. EP/M00483X/1 “Efficient computational tools for inverse imaging problems”, Escuela Politécnica Nacional de Quito Award No. PIS 12-14, MATHAmSud project SOCDE “Sparse Optimal Control of Differential Equations” and the Leverhulme Trust project on “Breaking the non-convexity barrier”. While in Quito, T. Valkonen has moreover been supported by SENESCYT (Ecuadorian Ministry of Higher Education, Science, Technology and Innovation) under a Prometeo Fellowship.
Biographies
J. C. De los Reyes
is Director of the Ecuadorian Research Center on Mathematical Modelling (MODEMAT) and Full Professor at Escuela Politécnica Nacional (Ecuador). He obtained his degree in Mathematics at Escuela Politécnica Nacional in 2000 and his Ph.D. in Mathematics at the University of Graz (Austria) in 2003. He worked for one year (2005/2006) as postdoctoral researcher at the Technical Univ. of Berlin. In the year 2009 he was awarded an Alexander von Humboldt Fellowship for Experienced Researchers to carry out research in Berlin. He has held Visiting Professor positions at the Humboldt University of Berlin (2010) and at the University of Hamburg (2013). In 2010 he was also awarded a J.T. Oden Faculty Fellowship to carry out research at The University of Texas at Austin. In February 2015 he was appointed as member of the Academy of Sciences of Ecuador (ACE).
C.-B. Schönlieb
is a Reader in Applied and Computational Mathematics, head of the Cambridge Image Analysis (CIA) group at the Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge. Moreover, she is the Director of the Cantab Capital Institute for the Mathematics of Information, Co-Director of the EPSRC Centre for Mathematical and Statistical Analysis of Multimodal Clinical Imaging, a Fellow of Jesus College, Cambridge and co-leader of the IMAGES network. Carola obtained her degree in Mathematics at the University of Salzburg in 2004 and her Ph.D. in Mathematics at the University of Cambridge in 2009. After one year of postdoctoral activity at the University of Goettingen (Germany), she became a Lecturer in DAMTP, promoted to Reader in 2015.
T. Valkonen
is a Lecturer in applied mathematics at the University of Liverpool. He obtained his Ph.D. in Mathematics at the University of Jyväskylä (Finland) 2008 and worked as postdoctoral researcher at the University of Graz and at the University of Cambridge.
References
- 1.Benning M, Brune C, Burger M, Müller J. Higher-order TV methods-enhancement via Bregman iteration. J. Sci. Comput. 2013;54(2–3):269–310. doi: 10.1007/s10915-012-9650-3. [DOI] [Google Scholar]
- 2.Benning M, Gladden L, Holland D, Schönlieb C-B, Valkonen T. Phase reconstruction from velocity-encoded MRI measurements—a survey of sparsity-promoting variational approaches. J. Magn. Reson. 2014;238:26–43. doi: 10.1016/j.jmr.2013.10.003. [DOI] [PubMed] [Google Scholar]
- 3.Biegler L, Biros G, Ghattas O, Heinkenschloss M, Keyes D, Mallick B, Tenorio L, van Bloemen Waanders B, Willcox K, Marzouk Y. Large-Scale Inverse Problems and Quantification of Uncertainty. New York: Wiley; 2011. [Google Scholar]
- 4.Bonnans JF, Tiba D. Pontryagin’s principle in the control of semilinear elliptic variational inequalities. Appl. Math. Optim. 1991;23(1):299–312. doi: 10.1007/BF01442403. [DOI] [Google Scholar]
- 5.Bredies K, Kunisch K, Pock T. Total generalized variation. SIAM J. Imaging Sci. 2011;3:492–526. doi: 10.1137/090769521. [DOI] [Google Scholar]
- 6.Bredies K, Holler M. A total variation-based jpeg decompression model. SIAM J. Imaging Sci. 2012;5(1):366–393. doi: 10.1137/110833531. [DOI] [Google Scholar]
- 7.Bredies K, Kunisch K, Valkonen T. Properties of : the one-dimensional case. J. Math. Anal. Appl. 2013;398:438–454. doi: 10.1016/j.jmaa.2012.08.053. [DOI] [Google Scholar]
- 8.Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. In: Proceedings of the 9th International Conference on Sampling Theory and Applications (SampTA), Singapore (2011)
- 9.Bui-Thanh T, Willcox K, Ghattas O. Model reduction for large-scale systems with high-dimensional parametric input space. SIAM J. Sci. Comput. 2008;30(6):3270–3288. doi: 10.1137/070694855.
- 10.Calatroni L, De los Reyes JC, Schönlieb C-B. Dynamic sampling schemes for optimal noise learning under multiple nonsmooth constraints. In: Poetzsche C, editor. System Modeling and Optimization. New York: Springer Verlag; 2014. pp. 85–95.
- 11.Chambolle A, Lions P-L. Image recovery via total variation minimization and related problems. Numer. Math. 1997;76:167–188. doi: 10.1007/s002110050258.
- 12.Chan T, Marquina A, Mulet P. High-order total variation-based image restoration. SIAM J. Sci. Comput. 2000;22(2):503–516. doi: 10.1137/S1064827598344169.
- 13.Chan TF, Kang SH, Shen J. Euler’s elastica and curvature-based inpainting. SIAM J. Appl. Math. 2002;63(2):564–592.
- 14.Chen, Y., Pock, T., Bischof, H.: Learning ℓ1-based analysis and synthesis sparsity priors using bi-level optimization. In: Workshop on Analysis Operator Learning versus Dictionary Learning, NIPS 2012 (2012)
- 15.Chen, Y., Ranftl, R., Pock, T.: Insights into analysis operator learning: from patch-based sparse models to higher-order MRFs. IEEE Trans. Image Process. (2014) (to appear)
- 16.Chung, J., Español, M.I., Nguyen, T.: Optimal regularization parameters for general-form Tikhonov regularization. arXiv preprint arXiv:1407.1911 (2014)
- 17.Dauge M. Neumann and mixed problems on curvilinear polyhedra. Integr. Equ. Oper. Theory. 1992;15(2):227–261. doi: 10.1007/BF01204238.
- 18.De los Reyes JC, Meyer C. Strong stationarity conditions for a class of optimization problems governed by variational inequalities of the second kind. J. Optim. Theory Appl. 2015;168(2):375–409. doi: 10.1007/s10957-015-0748-2.
- 19.De los Reyes JC, Schönlieb C-B, Valkonen T. The structure of optimal parameters for image restoration problems. J. Math. Anal. Appl. 2016;434(1):464–500. doi: 10.1016/j.jmaa.2015.09.023.
- 20.De los Reyes JC. Optimal control of a class of variational inequalities of the second kind. SIAM J. Control Optim. 2011;49(4):1629–1658. doi: 10.1137/090764438.
- 21.De los Reyes JC, Hintermüller M. A duality based semismooth Newton framework for solving variational inequalities of the second kind. Interfaces Free Bound. 2011;13(4):437–462. doi: 10.4171/IFB/267.
- 22.De los Reyes JC, Schönlieb C-B. Image denoising: learning the noise model via nonsmooth PDE-constrained optimization. Inverse Probl. Imaging. 2013;7(4):1139–1155. doi: 10.3934/ipi.2013.7.1183.
- 23.Domke, J.: Generic methods for optimization-based modeling. In: International Conference on Artificial Intelligence and Statistics, pp. 318–326 (2012)
- 24.Gröger K. A W^{1,p}-estimate for solutions to mixed boundary value problems for second order elliptic differential equations. Math. Ann. 1989;283(4):679–687. doi: 10.1007/BF01442860.
- 25.Haber E, Tenorio L. Learning regularization functionals—a supervised training approach. Inverse Probl. 2003;19(3):611. doi: 10.1088/0266-5611/19/3/309.
- 26.Haber E, Horesh L, Tenorio L. Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems. Inverse Probl. 2010;26(2):025002. doi: 10.1088/0266-5611/26/2/025002.
- 27.Hinterberger W, Scherzer O. Variational methods on the space of functions of bounded Hessian for convexification and denoising. Computing. 2006;76(1):109–133. doi: 10.1007/s00607-005-0119-1.
- 28.Hintermüller M, Laurain A, Löbhard C, Rautenberg CN, Surowiec TM. Elliptic mathematical programs with equilibrium constraints in function space: optimality conditions and numerical realization. In: Rannacher R, editor. Trends in PDE Constrained Optimization. Berlin: Springer International Publishing; 2014. pp. 133–153.
- 29.Hintermüller M, Stadler G. An infeasible primal-dual algorithm for total bounded variation-based inf-convolution-type image restoration. SIAM J. Sci. Comput. 2006;28(1):1–23. doi: 10.1137/040613263.
- 30.Hintermüller, M., Wu, T.: Bilevel optimization for calibrating point spread functions in blind deconvolution. Preprint (2014)
- 31.Knoll F, Bredies K, Pock T, Stollberger R. Second order total generalized variation (TGV) for MRI. Magn. Reson. Med. 2011;65(2):480–491. doi: 10.1002/mrm.22595.
- 32.Kunisch K, Hintermüller M. Total bounded variation regularization as a bilaterally constrained optimization problem. SIAM J. Appl. Math. 2004;64(4):1311–1333.
- 33.Kunisch K, Pock T. A bilevel optimization approach for parameter learning in variational models. SIAM J. Imaging Sci. 2013;6(2):938–983. doi: 10.1137/120882706.
- 34.Luo Z-Q, Pang J-S, Ralph D. Mathematical Programs with Equilibrium Constraints. Cambridge: Cambridge University Press; 1996.
- 35.Lysaker M, Tai X-C. Iterative image restoration combining total variation minimization and a second-order functional. Int. J. Comput. Vis. 2006;66(1):5–18. doi: 10.1007/s11263-005-3219-7.
- 36.Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the 8th International Conference on Computer Vision, vol. 2, pp. 416–423 (2001). The database is available online at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images.html
- 37.Masnou, S., Morel, J.-M.: Level lines based disocclusion. In: 1998 IEEE International Conference on Image Processing (ICIP 98), pp. 259–263 (1998)
- 38.Outrata JV. A generalized mathematical program with equilibrium constraints. SIAM J. Control Optim. 2000;38(5):1623–1638. doi: 10.1137/S0363012999352911.
- 39.Papafitsoros K, Schönlieb C-B. A combined first and second order variational approach for image reconstruction. J. Math. Imaging Vis. 2014;48(2):308–338. doi: 10.1007/s10851-013-0445-4.
- 40.Ring W. Structural properties of solutions to total variation regularization problems. ESAIM. 2000;34:799–810. doi: 10.1051/m2an:2000104.
- 41.Rudin L, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Phys. D. 1992;60:259–268. doi: 10.1016/0167-2789(92)90242-F.
- 42.Sun D, Han J. Newton and quasi-Newton methods for a class of nonsmooth equations and related problems. SIAM J. Optim. 1997;7(2):463–480. doi: 10.1137/S1052623494274970.
- 43.Tappen, M.F.: Utilizing variational optimization to learn Markov random fields. In: 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’07), pp. 1–8 (2007)
- 44.Valkonen T, Bredies K, Knoll F. Total generalised variation in diffusion tensor imaging. SIAM J. Imaging Sci. 2013;6(1):487–525. doi: 10.1137/120867172.
- 45.Viola, F., Fitzgibbon, A., Cipolla, R.: A unifying resolution-independent formulation for early vision. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 494–501 (2012)
- 46.Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004;13(4):600–612. doi: 10.1109/TIP.2003.819861.
- 47.Zowe J, Kurcyusz S. Regularity and stability for the mathematical programming problem in Banach spaces. Appl. Math. Optim. 1979;5(1):49–62. doi: 10.1007/BF01442543.