Super-resolution for simultaneous realization of resolution enhancement and motion blur removal based on adaptive prior settings

Abstract

A super-resolution method that simultaneously realizes resolution enhancement and motion blur removal based on adaptive prior settings is presented in this article. In order to obtain high-resolution (HR) video sequences from motion-blurred low-resolution (LR) video sequences, both the resolution enhancement and the motion blur removal have to be performed. However, if one is performed after the other, errors in the first process may degrade the performance of the subsequent process. Therefore, in the proposed method, a new problem, which simultaneously performs the resolution enhancement and the motion blur removal, is derived. Specifically, a maximum a posteriori estimation problem which estimates the original HR frames together with the motion blur kernels is introduced into our method. Furthermore, in order to obtain the posterior probability based on Bayes’ rule, a prior probability of the original HR frame, whose distribution can adaptively be set for each area, is newly defined. By adaptively setting the distribution of the prior probability, preservation of sharpness in edge regions and suppression of ringing artifacts in smooth regions are realized. Consequently, based on these novel approaches, the proposed method can successfully reconstruct the HR frames. Experimental results show impressive improvements of the proposed method over previously reported methods.

1 Introduction

High-resolution (HR) video sequences are necessary for various fundamental applications, and acquiring data with an HR image sensor is the most straightforward way to improve quality. However, it is often difficult to capture video sequences of sufficiently high quality with current image sensors. Furthermore, video sequences often include motion blur in many situations, e.g., when there is not enough light to avoid a long shutter speed. Image processing methodologies for increasing the visual quality are then necessary to bridge the gap between the demands of applications and physical constraints. Many researchers have proposed super-resolution (SR) methods for increasing the resolution levels of low-resolution (LR) video sequences [1–30]. Most SR methods are broadly categorized into two approaches: the learning-based (example-based) approach and the reconstruction-based approach. The learning-based approach estimates the HR frame from only its LR frame, but several other HR frames are utilized to learn a prior on the original HR frame [1–9]. On the other hand, the reconstruction-based approach estimates the HR frame from multiple LR frames, and many methods based on this approach have been proposed [10–30]. In this article, we focus on the reconstruction-based approach and discuss its details.

The reconstruction-based SR was first proposed by Tsai and Huang [10]. They used a frequency domain approach, and their formulation was extended by Kim et al. [11, 12]. In general, the frequency domain approaches have the strengths of theoretical simplicity and high computational efficiency. However, in these frequency domain approaches [10–12], the observation model of LR frames is restricted to translational motion only [17]. Due to the lack of data correlation in the frequency domain, it is difficult to effectively use spatial domain knowledge. Therefore, spatial domain approaches have often been developed to overcome the weaknesses of the frequency domain approaches [13–24, 27–30].

In general, since the estimation of HR frames is an ill-posed problem, prior information is introduced to determine the solution of the SR problem. This prior information is represented as a prior probability or a regularization term, which is adopted to stabilize the inversion of the ill-posed problem. Typically, intensity gradients are used for the regularization, and L1-norm or L2-norm regularization approaches are often adopted [13, 14, 18]. Total variation (TV) [31] is utilized as the most common regularization. This means the conventional methods assume that the TV obtained from the original HR frames follows a pre-defined distribution. Since L2-norm regularization penalizes high-frequency components severely, the solution tends to become oversmoothed. On the other hand, although L1-norm regularization keeps sharpness better than L2-norm regularization, it tends to increase artifacts in smooth regions.
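The contrast between the two penalties can be seen on a toy 1-D signal; this is a hypothetical illustration, not an experiment from the article:

```python
import numpy as np

# Hypothetical 1-D illustration: a sharp step and a smoothed ramp carry the
# same total intensity change, so their L1 (TV) penalties match, while the
# L2 penalty is far larger for the sharp step. This is why L2 regularization
# oversmooths edges while L1 keeps them.
step = np.concatenate([np.zeros(8), np.ones(8)])                            # sharp edge
ramp = np.concatenate([np.zeros(4), np.linspace(0.0, 1.0, 9), np.ones(3)])  # smoothed edge

def l1_grad(x):
    """L1-norm of intensity gradients (total variation)."""
    return np.abs(np.diff(x)).sum()

def l2_grad(x):
    """Sum of squared intensity gradients (L2-style penalty)."""
    return (np.diff(x) ** 2).sum()

print(l1_grad(step), l1_grad(ramp))  # equal: TV does not prefer the smoothed edge
print(l2_grad(step), l2_grad(ramp))  # L2 strongly penalizes the sharp step
```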

In addition to the above problems, these conventional SR methods try only to recover HR frames from their LR frames. However, motion blur also arises in the image acquisition process, and its removal must be performed together with the resolution enhancement. Many methods for removing motion blur have therefore been proposed [32–36]. In order to realize both the resolution enhancement and the motion blur removal, the conventional methods tend to perform these two procedures separately. Then, since errors in the first procedure may degrade the performance of the subsequent procedure, artifacts such as blurring and ringing are amplified in the final output.

As shown in the above discussion, the conventional methods have the following problems: (i) simultaneous resolution enhancement and motion blur removal cannot be realized successfully, and (ii) the regularization, i.e., the prior information, cannot be adapted to the target video sequences.

This article presents an SR method for realizing the simultaneous resolution enhancement and motion blur removal based on adaptive prior settings. The main contributions in the proposed method are twofold.

  1. (i)

    Simultaneous estimation of the HR frame and the motion blur kernels: In order to estimate the original HR frame from its motion-blurred LR frames, a posterior probability of the original HR frame and the motion blur kernels is newly defined. Then, by using maximum a posteriori (MAP) estimation, the proposed method performs the resolution enhancement and the motion blur removal simultaneously. This enables suppression of the performance degradation caused by separate processing (problem (i)). Note that for realizing the successful estimation of the HR frame in this approach, the following approach becomes necessary.

  2. (ii)

    A new prior probability for the HR frame: The proposed method derives a new prior probability distribution of the HR frame, whose shape can adaptively be set to one suitable for each area. By estimating the optimal shape adaptively, oversmoothing in edge regions and artifacts in smooth regions can be suppressed. Furthermore, the proposed method introduces a new weight factor concerning edge and blur directions into the derivation of the prior probability to reduce the oversmoothing, which occurs in the blur direction, and the ringing artifacts. Then problem (ii) can be alleviated by this approach.

Then, by combining the above two approaches, accurate reconstruction of the HR video sequences can be expected.

The remainder of this article is organized as follows. Section 2 shows the observation model of LR video sequences which is utilized in the proposed method. The resolution enhancement method of motion-blurred LR video sequences is presented in Section 3. In Section 4, the effectiveness of our method is verified by some results of experiments. Concluding remarks are shown in Section 5.

2 Observation model of motion-blurred LR video sequences

In this section, we present the observation model utilized in the proposed method. Let the j th frame of a motion-blurred LR video sequence be denoted in vector form by y^{(j)} = [y_1^{(j)}, y_2^{(j)}, …, y_{N_1 N_2}^{(j)}]^T ∈ R^{N_L}, where N_1 × N_2 is the size of the LR frame and N_L = N_1 N_2. In this article, T denotes the vector/matrix transpose operator. The i th frame of the corresponding HR video sequence is denoted in vector form by x^{(i)} = [x_1^{(i)}, x_2^{(i)}, …, x_{q_1 N_1 q_2 N_2}^{(i)}]^T ∈ R^{N_H}, where q_1 N_1 × q_2 N_2 is the size of the HR frame, N_H = q_1 N_1 q_2 N_2, and q_1 ≥ 1, q_2 ≥ 1. Note that j ∈ {i−M, i−M+1, …, i, …, i+M−1, i+M}, i.e., the i th HR frame is reconstructed from 2M+1 motion-blurred LR frames by our method in the following section.

The observation model of j th LR frame is defined by the following equation.

y^{(j)} = DBH^{(j)}F^{(i,j)}x^{(i)} + v^{(j)} = A^{(i,j)}x^{(i)} + v^{(j)},
(1)

where

A^{(i,j)} = DBH^{(j)}F^{(i,j)} \in \mathbb{R}^{N_L \times N_H}.
(2)

In the above equations, F^{(i,j)} ∈ R^{N_H × N_H} is a motion operator between the i th HR frame x^{(i)} and the original HR frame corresponding to the j th motion-blurred LR frame y^{(j)}, H^{(j)} ∈ R^{N_H × N_H} is a blurring operator due to the motion blur in the j th frame, B ∈ R^{N_H × N_H} is a low-pass filter, D ∈ R^{N_L × N_H} is a downsampling operator, and v^{(j)} ∈ R^{N_L} is an additive white noise vector in the j th LR frame. In this article, we assume D and B are known and that together they form the bicubic operator. Furthermore, F^{(i,j)} is calculated by using the simple block matching method whose function “cvCalcOpticalFlowBM” is published in the libraries of OpenCV [37].
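As a rough illustration, the degradation chain of Equation (1) can be sketched for a single frame; the integer translation, horizontal blur, 2×2 box anti-alias filter, and decimation used below are illustrative stand-ins for F^{(i,j)}, H^{(j)}, B, and D, not the paper's bicubic operator:

```python
import numpy as np

# Minimal sketch of the observation model y = D B H F x (Equation (1)).
# All operator choices here are illustrative assumptions, not the paper's.
rng = np.random.default_rng(0)
x = rng.random((16, 16))                      # HR frame x^(i)

def motion_shift(img, dy, dx):                # F^(i,j): integer translation (wrap-around)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def motion_blur_h(img, L=3):                  # H^(j): length-L horizontal motion blur
    out = np.zeros_like(img)
    for k in range(L):
        out += np.roll(img, k - L // 2, axis=1)
    return out / L

def box_antialias(img):                       # B: 2x2 box low-pass filter
    return (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)
            + np.roll(np.roll(img, 1, 0), 1, 1)) / 4.0

def downsample(img, q=2):                     # D: keep every q-th pixel
    return img[::q, ::q]

noise = 0.01 * rng.standard_normal((8, 8))    # v^(j): additive white noise
y = downsample(box_antialias(motion_blur_h(motion_shift(x, 1, 2)))) + noise
print(y.shape)                                # LR frame of size N1 x N2 = 8 x 8
```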

3 SR algorithm based on adaptive prior settings

This section presents the SR algorithm for simultaneously realizing the resolution enhancement and the motion blur removal based on the adaptive prior settings. In order to simultaneously estimate the HR frame and the motion blur kernels, the proposed method defines their posterior probability. Specifically, this posterior probability can be obtained by using Bayes’ rule as follows:

\mathrm{Pr}(x^{(i)}, \beta^{(i)}, k \,|\, y) = \frac{\mathrm{Pr}(y \,|\, x^{(i)}, \beta^{(i)}, k)\,\mathrm{Pr}(x^{(i)}, \beta^{(i)}, k)}{\mathrm{Pr}(y)}.
(3)

In the above equation, β (i) is a parameter vector for a prior probability of the HR frame x (i), where its details such as the role and the dimension are explained in Section 3.1. Furthermore,

k = [k^{(i-M)T}, k^{(i-M+1)T}, \ldots, k^{(i+M)T}]^T \in \mathbb{R}^{L_1 L_2 (2M+1)},
(4)
y = [y^{(i-M)T}, y^{(i-M+1)T}, \ldots, y^{(i+M)T}]^T \in \mathbb{R}^{N_L (2M+1)}.
(5)

As described above, 2M+1 is the number of motion-blurred LR frames used for estimating the HR frame x^{(i)}. In Equation (4), the motion blur kernel of the j th frame (j = i−M, …, i+M), which corresponds to the blurring operator H^{(j)}, is denoted in vector form by k^{(j)} = [k_1^{(j)}, k_2^{(j)}, …, k_{L_1 L_2}^{(j)}]^T ∈ R^{L_1 L_2}, where L_1 × L_2 is the size of the motion blur kernel. The blurring operator H^{(j)}, which is a Toeplitz matrix, satisfies the following equation:

H^{(j)} x^{(j)} = \mathrm{vec}\left[ K^{(j)} \ast X^{(j)} \right],
(6)

where x^{(j)} ∈ R^{N_H} is a vector form of the original HR frame (the j th HR frame) underlying the j th motion-blurred LR frame y^{(j)}. Furthermore, K^{(j)} ∈ R^{L_1 × L_2} and X^{(j)} ∈ R^{q_1 N_1 × q_2 N_2} are, respectively, the matrix forms of the j th motion blur kernel k^{(j)} and the j th HR frame x^{(j)}, and ∗ is a convolution operator. In addition, vec[·] is a vectorization operator.
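The equivalence in Equation (6) can be checked numerically; the sketch below uses circular convolution so that H is square, which is an assumption made for illustration (the paper's Toeplitz boundary handling may differ):

```python
import numpy as np

# Sketch of Equation (6): the matrix-vector product H x equals the
# vectorized 2-D convolution K * X. H is built column by column by
# filtering each basis image, then compared against direct convolution.
def conv2_circ(X, K):
    """Circular 2-D filtering of X by kernel K (linear in X)."""
    L1, L2 = K.shape
    out = np.zeros_like(X)
    for u in range(L1):
        for v in range(L2):
            out += K[u, v] * np.roll(np.roll(X, u - L1 // 2, 0), v - L2 // 2, 1)
    return out

rng = np.random.default_rng(1)
X = rng.random((6, 6))
K = rng.random((3, 3)); K /= K.sum()          # normalized 3x3 blur kernel

N = X.size
H = np.zeros((N, N))
for s in range(N):                            # column s = response to basis image e_s
    e = np.zeros(N); e[s] = 1.0
    H[:, s] = conv2_circ(e.reshape(X.shape), K).ravel()

# The vec identity of Equation (6) holds by linearity:
print(np.allclose(H @ X.ravel(), conv2_circ(X, K).ravel()))
```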

Since we generally assume the denominator Pr(y) in Equation (3) is constant, the following equation is obtained.

\mathrm{Pr}(x^{(i)}, \beta^{(i)}, k \,|\, y) \propto \mathrm{Pr}(y \,|\, x^{(i)}, \beta^{(i)}, k)\,\mathrm{Pr}(x^{(i)}, \beta^{(i)}, k).
(7)

In the above equation, since the motion blur kernels k are independent of the HR frame x^{(i)} and the parameters β^{(i)}, the prior probability Pr(x^{(i)}, β^{(i)}, k) is rewritten as

\mathrm{Pr}(x^{(i)}, \beta^{(i)}, k) = \mathrm{Pr}(x^{(i)}, \beta^{(i)})\,\mathrm{Pr}(k).
(8)

Then we calculate the HR frame x (i) and the motion blur kernels k from the obtained posterior probability based on the MAP estimation as follows:

(\hat{x}^{(i)}, \hat{\beta}^{(i)}, \hat{k}) = \arg\max_{x^{(i)}, \beta^{(i)}, k} \log \mathrm{Pr}(x^{(i)}, \beta^{(i)}, k \,|\, y).
(9)

From Equations (7) and (8), the above equation can be rewritten as follows.

(\hat{x}^{(i)}, \hat{\beta}^{(i)}, \hat{k}) = \arg\max_{x^{(i)}, \beta^{(i)}, k} \log\left[ \mathrm{Pr}(y \,|\, x^{(i)}, \beta^{(i)}, k)\,\mathrm{Pr}(x^{(i)}, \beta^{(i)})\,\mathrm{Pr}(k) \right].
(10)

In the above equation, we can utilize the observation model shown in the previous section for the likelihood Pr(y | x^{(i)}, β^{(i)}, k), whose details are shown in Section 3.2. Furthermore, the proposed method defines a new prior probability distribution of the HR frame, Pr(x^{(i)}, β^{(i)}), whose shape can adaptively be set for each area by determining the parameters β^{(i)}, for accurately reconstructing the target HR frame. In addition, a new weight factor determined from the motion blur and edge directions is introduced into this prior probability to suppress the oversmoothing in edge regions and the noise and ringing artifacts in smooth regions. Then successful estimation of the HR frame and the motion blur kernels from the obtained posterior probability based on the MAP estimation can be expected.

As described above, we adopt the probability model. Furthermore, we use the probability model simultaneously representing the original HR frame x (i), the parameters β (i) and the motion blur kernels k. Thus, we briefly explain the reason why the probability model is adopted and the reason why the probability model simultaneously representing x (i), β (i) and k is used, respectively.

Reason why we adopt the probability model Many frameworks which do not adopt probability models have been proposed. In general, these methods tend to assume that the distribution of the estimation target is represented by only one simple distribution, such as the Gaussian distribution. On the other hand, in methods which adopt probability models, it becomes feasible to adaptively estimate the distribution from the statistical characteristics of the estimation target. Then, as shown in the proposed method, a probability model whose distribution matches the estimation target can directly be used for its reconstruction. Therefore, due to its high degree of freedom, the proposed method uses the probability model.

Reason why the probability model simultaneously representing x (i) , β (i) , and k is used Since the proposed method tries to simultaneously perform the SR and the motion blur removal, we must estimate both of the motion blur kernels k and the original HR frame x (i) from only the motion blurred LR frames y. Furthermore, it is difficult to represent the original HR frame x (i) by using a simple fixed distribution, and we have to model it by using a distribution whose shape can adaptively be determined for each area based on its parameters β (i). Therefore, the original HR frame x (i) depends on the parameters β (i), and the motion blurred LR frames y are generated from the original HR frame x (i) and the motion blur kernels k. In order to estimate these three unknowns x (i), β (i), and k from only the motion-blurred LR frames y without suffering from their contradictions, the proposed method adopts the probability model which enables their simultaneous representation.

This section is organized as follows. Section 3.1 shows the prior probability distribution used in the proposed method. The algorithm for the reconstruction of the HR frame is presented in Section 3.2.

3.1 Definition of prior probability distributions

This section explains the prior probability distributions of the HR frame, its parameters, and the motion blur kernels utilized in our method. As shown in Equation (8), the prior probability Pr(x^{(i)}, β^{(i)}, k) is divided into Pr(x^{(i)}, β^{(i)}) and Pr(k). Thus, in this section, we explain the details of Pr(x^{(i)}, β^{(i)}) and Pr(k).

3.1.1 Prior probability of HR frame and its parameters

First, the prior probability of the HR frame and its parameters is defined as follows.

\mathrm{Pr}(x^{(i)}, \beta^{(i)}) \propto \mathrm{Pr}_e(x^{(i)} \,|\, \beta^{(i)})\,\mathrm{Pr}_s(x^{(i)} \,|\, \beta^{(i)}),
(11)

where we assume the prior probability Pr(β^{(i)}) ≈ const. in the above equation. The conditional probability Pr_e(x^{(i)} | β^{(i)}) is defined in such a way that sharp edges in the HR frame are kept; therefore, we adaptively set its distribution based on intensity gradients for each area in the HR frame. Furthermore, the conditional probability Pr_s(x^{(i)} | β^{(i)}) is adopted to suppress noise and ringing artifacts in smooth regions of the HR frame. By using the intensity gradients of the motion-blurred LR frame, we suppress the increase of the intensity gradients in smooth regions of the HR frame. The details of Pr_e(x^{(i)} | β^{(i)}) and Pr_s(x^{(i)} | β^{(i)}) are given below.

The details of Pr_e(x^{(i)} | β^{(i)})

The conditional probability Pr_e(x^{(i)} | β^{(i)}) in Equation (11) is defined by the Generalized Gaussian Distribution (GGD) [38] as follows:

\mathrm{Pr}_e(x^{(i)} \,|\, \beta^{(i)}) = \prod_{s=1}^{N_H} \prod_{t \in N_s} \frac{1}{2\alpha\Gamma\!\left(1 + 1/\beta_{s,t}^{(i)}\right)} \exp\left[ -\left( \frac{|\rho_t(x_s^{(i)}) - \mu|}{\alpha} \right)^{\beta_{s,t}^{(i)}} \right],
(12)

where

\rho_t(x_s^{(i)}) = x_s^{(i)} - x_t^{(i)},
(13)

and N_s is the set of pixels neighboring pixel s, with s ∉ N_s and s ∈ N_t ⇔ t ∈ N_s. In the above equation, x_s^{(i)} is the s th element of x^{(i)}. In Equation (12), μ, α, and β_{s,t}^{(i)} are the mean, scale, and shape parameters of the GGD, respectively. In addition, Γ(·) is the Gamma function, which is defined as

\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt.
(14)

Note that β (i) in Equation (12) contains all of β s , t ( i ) . Note that each pixel s has | N s | parameters β s , t ( i ) , where | N s | is the number of pixels in the neighborhood. Thus, the dimension of β (i) becomes N H | N s |. In the proposed method, the shape of the prior probability distribution Pr (x (i),β (i)) changes at each area in the HR frame by introducing the shape parameter β s , t ( i ) (1 β s , t ( i ) 2) into the definition of the conditional probability Pr e (x (i)|β (i)). If β s , t ( i ) =1 and β s , t ( i ) =2, the distribution of Pr e (x (i)|β (i)), respectively, equals to the Laplace distribution and the Gaussian distribution. It should be noted that the HR frame generally contains both of edge regions and smooth regions. If the prior probability of the HR frame is defined by one distribution, it means both of edge and smooth regions have the same properties. However, these regions actually have different properties each other. Thus, the proposed method estimates the parameter of the GGD, which determines its shape, at each area in the HR frame. In the HR frame, the edge regions should have large values of the intensity gradients. By automatically estimating the distribution, which nearly becomes the Laplace distribution, the penalty of the intensity gradient becomes weaker than that defined by using the Gaussian distribution, where the details of its estimation are shown in Section 3.2. Consequently, by keeping the large intensity gradients, the edge regions can preserve the sharpness.

The details of Pr_s(x^{(i)} | β^{(i)})

Next, the conditional probability Pr_s(x^{(i)} | β^{(i)}) in Equation (11) is defined as follows:

\mathrm{Pr}_s(x^{(i)} \,|\, \beta^{(i)}) = \prod_{s=1}^{N_H} \prod_{t \in N_s} \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left[ -\frac{1}{2(\sigma_1)^2}\, m_s \left( \rho_t(x_s^{(i)}) - \rho_t(\tilde{y}_s^{(i)}) \right)^2 \right],
(15)

where ỹ_s^{(i)} is the s th element of ỹ^{(i)} = [ỹ_1^{(i)}, ỹ_2^{(i)}, …, ỹ_{N_H}^{(i)}]^T, and ỹ^{(i)} ∈ R^{N_H} is an enlarged result of y^{(i)} obtained by cubic interpolation. Note that in this article, we utilize cubic interpolation to obtain ỹ^{(i)} for its simplicity. Other approaches for obtaining ỹ^{(i)} can be adopted, and better estimation results can then also be expected. Furthermore, (σ_1)^2 is the variance of the Gaussian distribution, and

\rho_t(\tilde{y}_s^{(i)}) = \tilde{y}_s^{(i)} - \tilde{y}_t^{(i)},
(16)
m_s = \frac{1}{|N_s|} \sum_{t \in N_s} \left( \beta_{s,t}^{(i)} - 1 \right).
(17)

In Equation (15), we use the LR frame to constrain the intensity gradients of the HR frame for suppressing noise and ringing artifacts. Equation (15) is motivated by the fact that motion blur can generally be considered a smoothing process: in a locally smooth region of the LR frame, the corresponding region in the HR frame should also be smooth. In Equation (17), since the estimated value of β_{s,t}^{(i)} becomes larger in smooth regions, m_s also becomes larger in such regions. In regions having a large value of m_s, the intensity gradient is strongly constrained by the LR frame. Since the LR frame does not contain ringing or noise artifacts, an increase of the intensity gradient is prevented in the estimated HR frame, and those artifacts can be suppressed.
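A minimal sketch of the weight of Equation (17), with hypothetical β values for the two kinds of regions:

```python
import numpy as np

# Sketch of m_s in Equation (17): the average of (beta_{s,t} - 1) over the
# |N_s| neighbors of pixel s. In smooth regions beta is estimated near 2, so
# m_s -> 1 and the gradient constraint to the LR frame is strong; near edges
# beta -> 1 and m_s -> 0, so sharp gradients are left unconstrained.
def m_weight(betas):
    """betas: the |N_s| shape parameters beta_{s,t} of one pixel s."""
    betas = np.asarray(betas, dtype=float)
    return (betas - 1.0).mean()

smooth_region = [2.0] * 8                                # hypothetical estimates
edge_region = [1.0, 1.0, 1.1, 1.0, 1.2, 1.0, 1.0, 1.1]  # hypothetical estimates
print(m_weight(smooth_region))   # ~1: gradients strongly tied to the LR frame
print(m_weight(edge_region))     # ~0: gradients free to stay sharp
```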

3.1.2 Prior probability of motion blur kernels

As the prior probability of the motion blur kernels, we define its distribution as follows.

\mathrm{Pr}(k) = \prod_{j=i-M}^{i+M} \mathrm{Pr}(k^{(j)}),
(18)

where

\mathrm{Pr}(k^{(j)}) = \exp\left( -\eta^{(j)} \|k^{(j)}\|_1 \right),
(19)

and η^{(j)} is a rate parameter. It is commonly observed that since a motion blur kernel traces the path of the camera, it tends to be sparse, with most values close to zero. This prior probability for the motion blur kernel is used in [19], and we adopt the same prior probability in this article.
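A small numerical sketch of this sparsity preference, assuming the negative log-prior η‖k‖₁ (our reading of Equation (19)) and the value η = 5.0×10⁷ from Section 4:

```python
import numpy as np

# Sketch of the kernel prior of Equation (19): the negative log-prior is
# eta * ||k||_1, so among kernels of equal energy a sparse, path-like kernel
# (most values near zero) is favored over a dense one.
eta = 5.0e7                                     # value used in the experiments

def neglog_prior(k):
    return eta * np.abs(k).sum()

dense = np.full(25, 1 / 25)                     # uniform 5x5 kernel, flattened
sparse = np.zeros(25); sparse[10:15] = 1 / 5    # 5-pixel camera-path kernel
# Rescale both to equal L2 energy before comparing their L1 mass:
dense /= np.linalg.norm(dense)
sparse /= np.linalg.norm(sparse)
print(neglog_prior(sparse) < neglog_prior(dense))   # sparsity is favored
```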

3.1.3 Discussion of effectiveness of new prior probability

As shown in the above explanations, the proposed method tries to perform the resolution enhancement and the motion blur removal while keeping the sharpness in edge regions and suppressing noise and ringing artifacts in smooth regions. In general, in order to derive the posterior probability of the HR frame based on Bayes’ rule as shown in Equation (3), the likelihood and the prior probability should be defined. Note that the likelihood is derived from the observation model, and its distribution tends to be common between different methods, where the details of the likelihood are shown in the following section. Therefore, the proposed method focuses on the prior probability and introduces the following novel points to solve the conventional problems.

  • Adaptive setting of the distribution shape of the prior probability in Equation (12)

    The proposed method adaptively determines the parameters β_{s,t}^{(i)}, which set the distribution shape of the prior probability, in such a way that the reconstructed HR frame keeps sharpness in edge regions and smoothness in smooth regions.

  • Suppression of noises and ringing artifacts in Equation (15)

    The proposed method monitors the parameters β_{s,t}^{(i)}, which represent the distribution shape, through Equation (17) and derives the new prior probability to suppress the occurrence of noise and ringing artifacts in smooth regions.

The proposed method divides Pr(x^{(i)}, β^{(i)}) into Pr_e(x^{(i)} | β^{(i)}) and Pr_s(x^{(i)} | β^{(i)}) in order to deal with edge and smooth areas separately during the SR process. It should be noted that Pr_e(x^{(i)} | β^{(i)}) is defined by the GGD, whose distribution lies between the Laplace distribution and the Gaussian distribution, whereas Pr_s(x^{(i)} | β^{(i)}) is defined as the Gaussian distribution; i.e., Pr_e(x^{(i)} | β^{(i)}) has a higher degree of freedom. This is because Pr_e(x^{(i)} | β^{(i)}) and Pr_s(x^{(i)} | β^{(i)}) have different roles in the proposed method. Specifically, Pr_e(x^{(i)} | β^{(i)}) is adopted for correctly representing the prior on the intensity gradients of the original HR frame, and therefore the proposed method uses the GGD to provide this distribution accurately. However, since even the GGD cannot represent the prior perfectly, some artifacts may still occur, and Pr_s(x^{(i)} | β^{(i)}) becomes necessary to remove such artifacts by smoothing the corresponding regions. In the proposed method, we aim at simple smoothing, and thus Pr_s(x^{(i)} | β^{(i)}), based on the Gaussian distribution using the L2-norm, is utilized. Note that since the smoothing should not be performed in edge regions, the proposed method monitors β_{s,t}^{(i)} to avoid oversmoothing in those regions.

It is also possible to apply post-processing techniques, such as smoothing filters, to remove those artifacts. In that case, functions such as those shown in Equations (15)–(17) would have to be introduced into the design of the filters. Nevertheless, since artifacts (i.e., errors) caused in smooth regions affect the estimation of the whole target HR frame during the optimization and thus cause further estimation errors, we use Pr_s(x^{(i)} | β^{(i)}) simultaneously with Pr_e(x^{(i)} | β^{(i)}). Then it is expected that the errors in the smooth regions can be suppressed within the reconstruction process, and the propagation of those errors to the other areas tends to be avoided.

From the above novel points, our method thus addresses the inability of the conventional methods to perform adaptive reconstruction.

3.2 Algorithm for reconstructing HR frame

In this section, we present the algorithm for reconstructing the HR frame. The proposed method simultaneously estimates the optimal HR frame x^{(i)}, its parameters β^{(i)}, and the motion blur kernels k by using the MAP estimation scheme, and thus they can be obtained as in Equation (10). In Equation (10), the conditional probability, i.e., the likelihood Pr(y | x^{(i)}, β^{(i)}, k), is obtained as

\mathrm{Pr}(y \,|\, x^{(i)}, \beta^{(i)}, k) = \prod_{j=i-M}^{i+M} \mathrm{Pr}(y^{(j)} \,|\, x^{(i)}, k^{(j)}),
(20)

where we assume y^{(j)} is independent of β^{(i)}. Using the model in Equation (1) and assuming that the noise is zero-mean white Gaussian noise of variance (σ_2^{(j)})^2, the likelihood of the LR frame y^{(j)} can be written as

\mathrm{Pr}(y^{(j)} \,|\, x^{(i)}, k^{(j)}) = \frac{1}{\sqrt{2\pi}\,\sigma_2^{(j)}} \exp\left[ -\frac{1}{2(\sigma_2^{(j)})^2} \left\| y^{(j)} - A^{(i,j)} x^{(i)} \right\|^2 \right].
(21)

By substituting Equations (11), (12), (15), (18), and (21) into Equation (10), the minimization problem is obtained as follows:

(\hat{x}^{(i)}, \hat{\beta}^{(i)}, \hat{k}) = \arg\min_{x^{(i)}, \beta^{(i)}, k} \sum_{j=i-M}^{i+M} \left[ \lambda_2^{(j)} \left\| y^{(j)} - A^{(i,j)} x^{(i)} \right\|^2 + \eta^{(j)} \|k^{(j)}\|_1 \right] + \sum_{s=1}^{N_H} \sum_{t \in N_s} \log 2\alpha\Gamma\!\left(1 + 1/\beta_{s,t}^{(i)}\right) + \sum_{s=1}^{N_H} \sum_{t \in N_s} w_{s,t}^{(i)} \left( \frac{|\rho_t(x_s^{(i)}) - \mu|}{\alpha} \right)^{\beta_{s,t}^{(i)}} + \sum_{s=1}^{N_H} \sum_{t \in N_s} \lambda_1 m_s \left( \rho_t(x_s^{(i)}) - \rho_t(\tilde{y}_s^{(i)}) \right)^2,
(22)

where λ_1 = 1/(2(σ_1)^2) and λ_2^{(j)} = 1/(2(σ_2^{(j)})^2). In the above equation, w_{s,t}^{(i)}, which is a new weight factor considering the motion blur direction, is introduced into the third term and defined as follows:

w_{s,t}^{(i)} = \frac{\tilde{w}_{s,t}^{(i)}}{\sum_{u \in N_s} \tilde{w}_{s,u}^{(i)}},
(23)
\tilde{w}_{s,t}^{(i)} = \sum_{u=1}^{L_1} \sum_{v=1}^{L_2} K^{(i)}(u,v) \left| \sin\left( \arctan\frac{v}{u} - \arctan\frac{\Delta_y(s,t)}{\Delta_x(s,t)} \right) \right|,
(24)

where Δ_x(s,t) and Δ_y(s,t) are the distances from the s th pixel to the t th pixel along the x- and y-coordinates, respectively. The matrix K^{(i)} corresponds to the matrix shown in Equation (6), and K^{(i)}(u,v) (u = 1, 2, …, L_1; v = 1, 2, …, L_2) is the (u,v)th element of K^{(i)}. If the direction between the s th pixel and the t th pixel is parallel to the main direction of the motion blur, the weight factor becomes small. If only the resolution enhancement is performed, the regularization term depends only on the characteristics of the HR frame, since the blur is commonly constant in all directions. However, due to both the resolution reduction and the motion blur, the blur is not constant in all directions. This weight factor is therefore utilized for avoiding the oversmoothing caused by the regularization term.
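A sketch of the weight computation of Equation (24); the coordinate conventions (which index is a row, where the angle origin lies) are assumptions here, and only the qualitative behavior is illustrated:

```python
import numpy as np

# Sketch of the directional weight w~ of Equation (24): a neighbor direction
# parallel to the blur direction yields a small weight, which relaxes the
# regularizer along the blur. Index/angle conventions below are assumptions.
def w_tilde(K, dy, dx):
    L1, L2 = K.shape
    acc = 0.0
    for u in range(1, L1 + 1):
        for v in range(1, L2 + 1):
            acc += K[u - 1, v - 1] * abs(
                np.sin(np.arctan2(v, u) - np.arctan2(dy, dx)))
    return acc

K = np.zeros((5, 5))
K[0, 4] = 1.0                              # all kernel mass along direction (u, v) = (1, 5)
parallel = w_tilde(K, dy=5, dx=1)          # neighbor pair parallel to the blur
perpendicular = w_tilde(K, dy=-1, dx=5)    # neighbor pair across the blur
print(parallel, perpendicular)             # parallel weight ~0, perpendicular ~1
```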

Finally, we explain the optimization procedure for Equation (22). Since the cost function shown in Equation (22) involves three large sets of unknowns (the HR frame x^{(i)}, the parameters β^{(i)}, and the motion blur kernels k), the use of direct search techniques is intractable. Therefore, the following cyclic coordinate descent optimization procedure is adopted to estimate the unknowns. Specifically, we iteratively perform the following three steps.

Step 1: Update of the HR frame x (i)

The parameters β (i) and the motion blur kernels k are fixed, and the HR frame is estimated by performing the following iterations:

x_{(r+1)}^{(i)} = x_{(r)}^{(i)} - h_1 \left. \frac{\partial E_1(x^{(i)})}{\partial x^{(i)}} \right|_{x^{(i)} = x_{(r)}^{(i)}},
(25)

where r and h_1, respectively, represent the iteration number and the step size, and the cost function E_1(x^{(i)}) is defined as

E_1(x^{(i)}) = \sum_{j=i-M}^{i+M} \lambda_2^{(j)} \left\| y^{(j)} - A^{(i,j)} x^{(i)} \right\|^2 + \sum_{s=1}^{N_H} \sum_{t \in N_s} w_{s,t}^{(i)} \left( \frac{|\rho_t(x_s^{(i)}) - \mu|}{\alpha} \right)^{\beta_{s,t}^{(i)}} + \sum_{s=1}^{N_H} \sum_{t \in N_s} \lambda_1 m_s \left( \rho_t(x_s^{(i)}) - \rho_t(\tilde{y}_s^{(i)}) \right)^2.
(26)

Step 2: Update of the parameters β (i)

The HR frame x (i) and the motion blur kernels k are fixed, and the parameters β (i) are estimated by performing the following iterations:

\beta_{(r+1)}^{(i)} = \beta_{(r)}^{(i)} - h_2 \left. \frac{\partial E_2(\beta^{(i)})}{\partial \beta^{(i)}} \right|_{\beta^{(i)} = \beta_{(r)}^{(i)}} \quad \text{s.t. } 1 \le \beta_{s,t}^{(i)} \le 2,
(27)

where h 2 is the step size, and the cost function E 2(β (i)) is defined as

E_2(\beta^{(i)}) = \sum_{s=1}^{N_H} \sum_{t \in N_s} \log 2\alpha\Gamma\!\left(1 + 1/\beta_{s,t}^{(i)}\right) + \sum_{s=1}^{N_H} \sum_{t \in N_s} w_{s,t}^{(i)} \left( \frac{|\rho_t(x_s^{(i)}) - \mu|}{\alpha} \right)^{\beta_{s,t}^{(i)}} + \sum_{s=1}^{N_H} \sum_{t \in N_s} \lambda_1 m_s \left( \rho_t(x_s^{(i)}) - \rho_t(\tilde{y}_s^{(i)}) \right)^2.
(28)

Step 3: Update of the motion blur kernels k

The HR frame x (i) and the parameters β (i) are fixed, and the motion blur kernel k (j) (j=i-M,…,i+M) of j th frame is estimated by performing the following iterations:

k_{(r+1)}^{(j)} = k_{(r)}^{(j)} - h_3 \left. \frac{\partial E_3(k^{(j)})}{\partial k^{(j)}} \right|_{k^{(j)} = k_{(r)}^{(j)}},
(29)

where h 3 is the step size, and the cost function E 3 k (j) is defined as

E_3(k^{(j)}) = \lambda_2^{(j)} \left\| y^{(j)} - A_k^{(i,j)} k^{(j)} \right\|^2 + \eta^{(j)} \|k^{(j)}\|_1.
(30)

Note that A_k^{(i,j)} is defined by the following equations:

A_k^{(i,j)} = DB\tilde{X}^{(j)} \in \mathbb{R}^{N_L \times L_1 L_2},
(31)

where X̃^{(j)} ∈ R^{N_H × L_1 L_2} satisfies

\tilde{X}^{(j)} k^{(j)} = H^{(j)} x^{(j)},
(32)
x^{(j)} = F^{(i,j)} x^{(i)}.
(33)

Then we can simultaneously estimate the HR frame x̂^{(i)}, the parameters β̂^{(i)}, and the motion blur kernels k̂. This optimization method is based on the steepest descent algorithm; thus, the convergence of the iterative process is not guaranteed.
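The alternating structure of Steps 1–3 can be sketched with a toy separable cost; the iteration counts mirror the settings reported in Section 4, while the cost, gradients, and step sizes below are placeholders, not Equation (22):

```python
import numpy as np

# Structural sketch of the cyclic coordinate descent of Steps 1-3: the three
# blocks of unknowns are updated in turn by gradient descent while the other
# two are held fixed. The toy quadratic cost and step sizes are placeholders.
def optimize(grad_x, grad_b, grad_k, x, b, k,
             outer=10, it_x=300, it_b=50, it_k=10,
             h1=2e-2, h2=1e-2, h3=1e-2):
    for _ in range(outer):                     # outer iterations of the whole loop
        for _ in range(it_x):                  # Step 1: update HR frame x
            x = x - h1 * grad_x(x, b, k)
        for _ in range(it_b):                  # Step 2: update shape parameters beta
            b = np.clip(b - h2 * grad_b(x, b, k), 1.0, 2.0)  # s.t. 1 <= beta <= 2
        for _ in range(it_k):                  # Step 3: update blur kernels k
            k = k - h3 * grad_k(x, b, k)
    return x, b, k

# Toy cost: E = ||x - 3||^2 + ||b - 1.5||^2 + ||k - 0.2||^2 (gradients below)
gx = lambda x, b, k: 2 * (x - 3.0)
gb = lambda x, b, k: 2 * (b - 1.5)
gk = lambda x, b, k: 2 * (k - 0.2)
x, b, k = optimize(gx, gb, gk, np.zeros(4), np.full(4, 2.0), np.zeros(4))
print(x.round(3), b.round(3), k.round(3))   # each block approaches its optimum
```

Note how the kernel block, given only 10 inner iterations per outer pass, converges more slowly than the other two, which is consistent with cyclic schemes trading per-block accuracy for overall progress.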

In the proposed method, we newly define the posterior probability for the simultaneous estimation of the HR frame and the motion blur kernels. Furthermore, the proposed method introduces the new prior probability, and by estimating the optimal parameter determining its distribution in each area, the sharpness in edge regions is preserved. In smooth regions, noise and ringing artifacts are reduced by using the information of the motion-blurred LR frames. Therefore, the proposed method performs the reconstruction more adaptively than the conventional methods, and accurate restoration and resolution enhancement by our method can be expected.

4 Experimental results

The performance of the proposed method is verified in this section. We used the video sequences shown in Table 1. According to Equation (1), the motion-blurred LR video sequences shown in Table 2 were generated from the motion blur kernels (PSFs) shown in Figure 1. Then we applied the proposed method to the LR video sequences and generated resolution-enhanced video sequences at the original resolution. When applying the proposed method to the test sequences, we simply set α = 1, μ = 0, λ_1^{(j)} = 1/|i−j|, λ_2 = 1.0×10^{−3}, h_1 = 2.0×10^{−2}, h_2 = 1.0×10^{−5}, h_3 = 1.0×10^{−11}, and η = 5.0×10^{7}. It should be noted that α, μ, and λ_1^{(j)} were set to reasonable values. Furthermore, since h_1, h_2, and h_3 only determine the step sizes in the cyclic coordinate descent optimization procedure, they do not affect the performance of the proposed method if they are set to sufficiently small values. The parameters which do affect the performance of the proposed method are λ_2 and η, and we set these parameters from some preliminary experiments. In addition to the above parameter settings, the experimental conditions are listed below.

  • Number of frames used to reconstruct each HR frame : 5 (i.e., M=2)

  • Number of iterations for the whole optimization: 10

  • Note that in each iteration, we also performed the following iterations for x (i), β (i) and k (j):

    • Number of iterations for optimizing x (i): 300

    • Number of iterations for optimizing β (i): 50

    • Number of iterations for optimizing k (j): 10

  • Block size used in the block matching algorithm : 7×7 pixels

  • Neighborhood N s : Eight neighboring pixels of pixel s

  • Initial conditions of x (i), β (i) and k:

    • x^{(i)}: The initial vector is set to ỹ^{(i)} used in Equation (15).

    • β (i) : All elements are initially set to 2.0.

    • k : Result obtained by the method in [34] is used as an initial condition.

Table 1 Test video sequences used for the verification in this experiment
Table 2 The motion blurred LR video sequences used for the performance comparison between the proposed method and the conventional methods
Figure 1

PSFs used for blurring the video sequences. (a) PSF of “Mobile & Calendar” (19×19 pixels), (b) PSF of “Susie” (13×13 pixels), (c) PSF of “Coast Guard” (25×25 pixels).
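
Although Equation (1) itself is not reproduced in this section, a blur-plus-decimation degradation of this kind can be sketched as follows. The PSF normalization, boundary handling, and decimation factor here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def degrade(hr, psf, factor=2):
    """Blur an HR frame with a motion-blur PSF, then decimate by `factor`.

    A typical instance of the blur-plus-decimation model; the exact
    operators in Equation (1) of the paper may differ.
    """
    psf = psf / psf.sum()                       # normalized blur kernel
    kh, kw = psf.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(hr, ((ph, ph), (pw, pw)), mode="edge")
    blurred = np.zeros_like(hr, dtype=float)
    for dy in range(kh):                        # sliding-window filtering
        for dx in range(kw):
            blurred += psf[dy, dx] * padded[dy:dy + hr.shape[0],
                                            dx:dx + hr.shape[1]]
    return blurred[::factor, ::factor]          # decimation

hr = np.ones((8, 8))
lr = degrade(hr, np.ones((3, 3)))
print(lr.shape)  # (4, 4)
```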

For comparison, we performed the following reconstructions using the conventional methods [13, 18, 39]:

  1.

    Comparative methods 1 and 2

    For comparison with the proposed method, we used the conventional methods of [13, 18]. These methods perform only resolution enhancement, using an L 2-norm and an L 1-norm regularization term, respectively. To compare them with the proposed method, the degradation model including motion blur is used, and the motion blur kernels are estimated by the method of Fergus et al. [34]. Since the proposed method adaptively determines the prior distribution, i.e., the regularization term is adaptively determined for the target video sequences, these fixed-norm methods are suitable baselines for the comparison.

  2.

    Comparative method 3

    The conventional method [39], which we ran using the software provided by its authors, is a resolution-enhancement-only method based on a frequency domain approach. To compare its performance with that of the proposed method, we removed the motion blur using the method of Fergus et al. [34] after applying the resolution enhancement. This method is used as a benchmark.

To perform the same experiments across the different methods, we performed registration (motion estimation) using the simple block matching procedure described in Section 2 for the proposed method and Comparative methods 1 and 2. Note that Comparative method 3, based on [39], is a different approach, and thus we used its own motion estimation approach for this method. Many successful registration methods have recently been proposed, and they could drastically improve SR performance. However, since the main focus of this article is the reconstruction algorithm, we adopted this simple procedure.
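
A minimal stand-in for such a simple block matching registration (exhaustive SAD search over a small window; the block size and search range below are illustrative, not the paper's exact settings) might look like this:

```python
import numpy as np

def block_matching(ref, cur, block=7, search=4):
    """Estimate one motion vector per block by exhaustive SAD search.

    For each block of `cur`, the displacement (dy, dx) into `ref` with
    the smallest sum of absolute differences is returned.
    """
    h, w = ref.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tgt = cur[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(ref[yy:yy + block, xx:xx + block]
                                     - tgt).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(y, x)] = best
    return vectors

ref = np.zeros((14, 14)); ref[2:9, 2:9] = 1.0   # a bright square
cur = np.roll(ref, (1, 1), axis=(0, 1))         # square shifted by (1, 1)
print(block_matching(ref, cur)[(7, 7)])         # (-1, -1)
```

The recovered vector points from the current block back to its position in the reference frame, which is why the shift by (1, 1) yields (-1, -1).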

The estimated HR results of “Mobile & Calendar” are shown in Figure 2. For better subjective evaluation, their enlarged portions are shown in Figure 3. It can be seen that the proposed method achieves clear improvements over the conventional methods; specifically, it preserves sharpness more successfully than the conventional methods do. Furthermore, the estimated kernels are shown in Figure 4. In this result, the proposed method successfully estimates the kernel while preserving its sparseness. Additional experimental results are shown in Figures 5, 6, and 7. Compared to the results obtained by the conventional methods, various kinds of motion blur are accurately removed, and successful resolution enhancement is realized by the proposed method. These experiments therefore verify the high performance of the proposed method.

Figure 2

Subjective performance comparison between the proposed method and the conventional methods (Test video sequence “Mobile & Calendar”). (a) Original frame, (b) corrupted frame, (c) reconstructed result by the proposed method, (d) reconstructed result by the comparative method 1, (e) reconstructed result by the comparative method 2, (f) reconstructed result by the comparative method 3.

Figure 3

Zoomed portions of the result in Figure 2. (a–f) correspond to zoomed portions of Figure 2a–f.

Figure 4

Kernels estimated by the proposed method and the comparative method [34] (Test video sequence “Mobile & Calendar”). (a) PSF which is used for corrupting the original video frame (equal to Figure 1a), (b) estimated PSF obtained by the proposed method, (c) estimated PSF obtained by the method in [34].

Figure 5

Subjective performance comparison between the proposed method and the conventional methods (Test video sequence “Susie”). (a) Original frame, (b) corrupted frame, (c) reconstructed result by the proposed method, (d) reconstructed result by the comparative method 1, (e) reconstructed result by the comparative method 2, (f) reconstructed result by the comparative method 3.

Figure 6

Zoomed portions of the result in Figure 5: (a–f) correspond to zoomed portions of Figures 5a–f.

Figure 7

Kernels estimated by the proposed method and the comparative method [34] (Test video sequence “Susie”). (a) PSF which is used for corrupting the original video frame (equal to Figure 1b), (b) estimated PSF obtained by the proposed method, (c) estimated PSF obtained by the method in [34].

Table 3 shows the mean PSNR values obtained by the proposed method and the comparative methods. The experimental results show that the proposed method outperforms the other conventional methods. Furthermore, Figure 8 shows the frame-by-frame PSNR results of the estimated HR video sequences.
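
For reference, PSNR is computed from the mean squared error between the original and estimated frames; a minimal implementation for 8-bit frames is:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
est = ref + 5.0                   # uniform error of 5 gray levels
print(round(psnr(ref, est), 2))   # 34.15
```

The mean PSNR values in Table 3 are obtained by averaging such per-frame values over each sequence.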

Table 3 Performance comparison (PSNR (dB)) between the proposed method and the conventional methods
Figure 8

Results of quantitative evaluation (PSNR) obtained by the proposed method and the conventional methods. (a) Results of “Mobile & Calendar”, (b) results of “Susie”, (c) results of “Coast Guard”.

From the obtained results, we can see that the proposed method enables successful reconstruction of HR video sequences from motion-blurred LR video sequences. As shown in the previous section, the proposed method adopts the following two novel approaches:

  (i)

    Simultaneous resolution enhancement and motion blur removal

    The proposed method uses the posterior probability to simultaneously estimate the HR frame and the motion blur kernels from the target motion-blurred LR frames. This resolves the problem of conventional methods that perform these two reconstructions separately, namely that errors caused in the first reconstruction degrade the performance of the subsequent one.

  (ii)

    Adaptive setting of prior probability on HR frame

    In the proposed method, the prior probability is adaptively set for the target video sequence. Specifically, we calculate the parameters that determine the distribution shape of the prior probability on intensity gradients so as to keep the sharpness in edge regions. Furthermore, the prior probability is also determined in such a way that noise and ringing artifacts are suppressed in smooth regions.

First, in order to confirm the effectiveness of (i), Figure 9 shows an example result obtained by performing the motion blur removal and the resolution enhancement separately. Specifically, after resolution enhancement based on the proposed SR method, motion blur removal by the method of Fergus et al. [34] is performed. From the obtained results, we can see that the results of the separate procedures tend to suffer from degradations compared to those of the proposed method. This experiment thus confirms the effectiveness of (i).

Figure 9

Examples for confirming the effectiveness of the novel point (i) in the proposed method. (a) Results of reconstruction by the proposed method, (b) results of reconstruction obtained by performing the resolution enhancement with our method and the motion blur removal with [34] separately, (c) zoomed portion of (a), (d) zoomed portion of (b).

When estimating unknown data from observed data, the estimation is generally an ill-posed problem. Therefore, we have to provide some prior information on the estimation target. Since it is in general quite difficult to provide a perfect prior, estimation errors inevitably arise from the mismatch of the prior. In methods that perform the SR and the motion blur removal separately, this problem occurs in each process, and the estimation performance for the original HR frame is degraded. Furthermore, after the first process finishes, its result contains errors due to this problem, and the second process estimates the original HR frame by regarding that result as an observation. However, the model in the second process does not generally account for the errors caused in the first process, so compensating for them becomes difficult.

In short, if one process is performed after the other, errors in the first process may degrade the performance of the subsequent process. Thus, a probability model that simultaneously estimates all unknowns is introduced into the proposed method.
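
The role of the prior in such an ill-posed inversion can be illustrated with a minimal Tikhonov-regularized least-squares solve; this is only an illustration of prior-based regularization, not the estimator used in the proposed method.

```python
import numpy as np

# A rank-deficient forward operator: two observations of three unknowns,
# so the inversion is ill-posed without a prior.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true

def tikhonov(A, y, lam):
    """Minimize ||Ax - y||^2 + lam * ||x||^2 (a simple quadratic prior)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

x_hat = tikhonov(A, y, lam=1e-6)
print(np.round(A @ x_hat, 3))   # the data is reproduced: [3. 5.]
```

Even a weak prior makes the singular normal equations solvable, but the recovered x_hat depends on the chosen prior; with a mismatched prior, this dependence is exactly the error source discussed above.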

Next, in order to confirm the effectiveness of (ii), we show two results obtained by fixing the parameters that determine the distribution shape of the prior probability as β s , t ( i ) =1 and β s , t ( i ) =2 in Figure 10a,b, respectively. Furthermore, Figure 10f shows a map of the estimated parameters β s , t ( i ) . From the obtained results, it can be seen that the proposed method can adaptively set these parameters, which enables successful reconstruction of the HR video sequences. Consequently, the proposed method effectively solves the conventional problems and realizes accurate reconstruction of HR video sequences from motion-blurred LR video sequences.
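
The effect of the shape parameter can be illustrated with a standard generalized-Gaussian-type gradient potential |∇x|^β (the exact prior definition in Equation (12) may include additional weights and normalization): β=1 gives a Laplacian-like, edge-tolerant penalty, while β=2 gives a Gaussian, strongly smoothing penalty.

```python
import numpy as np

def prior_energy(x, beta):
    """Sum of |gradient|^beta over horizontal and vertical differences.

    beta=1 penalizes edges mildly (sharpness-preserving);
    beta=2 penalizes them quadratically (smoothing).
    """
    gx = np.diff(x, axis=1)
    gy = np.diff(x, axis=0)
    return (np.abs(gx) ** beta).sum() + (np.abs(gy) ** beta).sum()

step = np.zeros((4, 4)); step[:, 2:] = 10.0   # an ideal step edge
print(prior_energy(step, 1))   # 40.0  -- linear in edge height
print(prior_energy(step, 2))   # 400.0 -- quadratic: edges cost far more
```

This is why setting β=2 in edge regions tends to oversmooth, while β=1 in smooth regions tolerates noise and ringing; the adaptive per-area choice avoids both failure modes.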

Figure 10

Examples for confirming the effectiveness of the novel point (ii) in the proposed method. (a) Results of reconstruction by the method which fixes the parameters β s , t ( i ) =1, (b) results of reconstruction by the method which fixes the parameters β s , t ( i ) =2, (c) zoomed portion of (a), (d) zoomed portion of (b), (e) zoomed portion of the result of the proposed method (zoomed portion of Figure 2c), (f) a map of the parameters β s , t ( i ) estimated for obtaining Figure 2c in our method ( β s , t ( i ) =1 and β s , t ( i ) =2 are shown in black and white, respectively). Note that since s in Equation (12) ranges over the eight nearest neighboring pixels, eight maps are obtained as shown in (f).

From the above results and discussions, we can confirm the effectiveness of the novelties in the proposed method. Furthermore, we compare the performance of the proposed method with those of recent works. As recent previous works, we selected [26, 36]. Since these methods realize only SR, we performed the motion blur removal by using [34]. Note that in [26], the authors used L 1-norm or L 2-norm based regularization terms; thus, in this experiment, we used the results obtained by both regularization terms. Also, in implementing the method in [26], we used the region segmentation algorithm of [40] instead of their reported one. The results obtained by applying these recent methods to “Mobile & Calendar” and “Susie” are shown in Figures 11 and 12. A quantitative comparison between the proposed method and these methods is shown in Figure 13. Furthermore, Table 4 shows the mean PSNR values obtained by these methods. From the obtained results, it can be seen that the proposed method achieves more successful reconstruction than these conventional methods.

Figure 11

Results obtained by the recent conventional methods (Test video sequence “Mobile & Calendar”). (a) Reconstruction result by the method in [26] (L 1-norm), (b) reconstruction result by the method in [26] (L 2-norm), (c) reconstruction result by the method in [36], (d) zoomed portion of (a), (e) zoomed portion of (b), (f) zoomed portion of (c).

Figure 12

Results obtained by the recent conventional methods (Test video sequence “Susie”). (a) Reconstruction result by the method in [26] (L 1-norm), (b) reconstruction result by the method in [26] (L 2-norm), (c) reconstruction result by the method in [36], (d) zoomed portion of (a), (e) zoomed portion of (b), (f) zoomed portion of (c).

Figure 13

Results of quantitative evaluation (PSNR) obtained by the proposed method and the recent conventional methods [26, 36]. (a) Results of “Mobile & Calendar”, (b) results of “Susie”, (c) results of “Coast Guard”. Note that the results of our method are the same as those in Figure 8.

Table 4 Quantitative results (PSNR (dB)) of the conventional methods [26, 36]

In addition to the above results, we also show results obtained by applying the proposed method to real video sequences. Specifically, we used the video sequences shown in [41] and performed resolution enhancement using the proposed method. Note that since these sequences do not include motion blur, we focus on how successfully resolution enhancement can be achieved. Figures 14, 15, and 16 show the results of the proposed method. From the obtained results, we can see that the proposed method enables successful resolution enhancement in several areas of the obtained HR frames. On the other hand, the results in some areas are not satisfactory. We therefore discuss the remaining problems of the proposed method below.

Figure 14

Reconstruction example 1 by the proposed method. (a) LR frame, (b) HR frame estimated by the proposed method. In this experiment, the resolution is enhanced by a factor of two, and the obtained HR frame in (b) is 480×360 pixels. Note that the LR frame in (a) is enlarged to the same size as (b) by cubic interpolation.

Figure 15

Reconstruction example 2 by the proposed method. (a) LR frame, (b) HR frame estimated by the proposed method. In this experiment, the resolution is enhanced by a factor of two, and the obtained HR frame in (b) is 946×708 pixels. Note that the LR frame in (a) is enlarged to the same size as (b) by cubic interpolation.

Figure 16

Reconstruction example 3 by the proposed method. (a) LR frame, (b) HR frame estimated by the proposed method. In this experiment, the resolution is enhanced by a factor of two, and the obtained HR frame in (b) is 300×348 pixels. Note that the LR frame in (a) is enlarged to the same size as (b) by cubic interpolation.

Finally, we discuss the limitations of the proposed method and its future outlook. In the proposed method, we calculate the motion vectors for estimating F (i,j) by using the simple block matching method [37]. It is well known that SR results depend severely on the estimation accuracy of F (i,j). In this article, we focused only on the performance of the reconstruction algorithm, but adopting more accurate motion estimation algorithms is necessary for further improving SR performance.

Next, enhancing only the spatial resolution while removing motion blur can introduce temporal aliasing effects. Therefore, video-to-video SR becomes necessary for reducing this problem. Many space-time methods have been proposed, e.g., [36, 41, 42], and we also have to extend our method to a video-to-video version.

Furthermore, in this article, we considered only motion blur caused by the ego (camera) motion. However, in real applications, motion blur is caused by two different factors: the ego (global) motion and the objects’ (local) motions. For successful video-to-video SR, both global and local motion blurs must be considered.

These topics remain as future work in our study.

5 Conclusion

A resolution enhancement method of motion-blurred LR video sequences based on the SR technique has been presented in this article. In the proposed method, we introduce the following two approaches:

  (i) simultaneous estimation of the HR frame and the motion blur kernels, and (ii) a new prior probability for correctly representing the HR frame. We then simultaneously estimate the HR frame and the motion blur kernels based on the new prior probability. Consequently, successful reconstruction of HR video sequences is realized while preserving sharpness and suppressing artifacts.

Note that although the proposed method performed accurate SR in the experiments, some artifacts occur between the edge regions and smooth regions. This is because it is difficult to accurately estimate the parameters of the prior distribution of the original HR frame from only the motion-blurred LR frames. We will have to tackle this problem in future work. Furthermore, the parameters used in the proposed method were simply fixed to the values that yielded the highest performance; ideally, they should be determined adaptively from the target video sequences.

In addition, we also have to realize a video-to-video SR approach for reducing temporal aliasing effects. We will study this point in a subsequent report.

References

  1. Freeman WT, Pasztor EC, Carmichael OT: Learning low-level vision. Int. J. Comput. Vis. 2000, 40: 25-47. 10.1023/A:1026501619075

  2. Hertzmann A, Jacobs CE, Oliver N, Curless B, Salesin DH: Image analogies. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’01 2001, 327-340.

  3. Freeman WT, Jones TR, Pasztor EC: Example-based super-resolution. IEEE Comput. Graph. Appl. 2002, 22: 56-65.

  4. Sun J, Zheng NN, Tao H, Shum HY: Image hallucination with primal sketch priors, vol. 2. Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2003, 729-736.

  5. Wang Q, Tang X, Shum H: Patch based blind image super resolution, vol. 1. Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) 2005, 709-716.

  6. Stephenson TA, Chen T: Adaptive Markov random fields for example-based super-resolution of faces. EURASIP J. Appl. Signal Process. 2006, 2006: 225.

  7. Jiji CV, Chaudhuri S: Single-frame image super-resolution through contourlet learning. EURASIP J. Appl. Signal Process. 2006, 2006: 235.

  8. Jiji CV, Chaudhuri S, Chatterjee P: Single frame image super-resolution: should we process locally or globally? Multidimens. Syst. Signal Process. 2007, 18: 123-152. 10.1007/s11045-007-0024-1

  9. Li X, Lam KM, Qiu G, Shen L, Wang S: An efficient example-based approach for image super-resolution. International Conference on Neural Networks and Signal Processing 2008, 575-580.

  10. Tsai R, Huang T: Multiframe image restoration and registration. Adv. Comput. Vis. Image Process. 1984, 1: 317-339.

  11. Kim S, Bose N, Valenzuela H: Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans. Acoust. Speech Signal Process. 1990, 38(6):1013-1027. 10.1109/29.56062

  12. Kim S, Su WY: Recursive high-resolution reconstruction of blurred multiframe images. IEEE Trans. Image Process. 1993, 2(4):534-539. 10.1109/83.242363

  13. Schultz R, Stevenson R: Extraction of high resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5: 996-1011. 10.1109/83.503915

  14. Hardie R, Barnard K, Armstrong E: Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process. 1997, 6(12):1621-1633. 10.1109/83.650116

  15. Baker S, Kanade T: Limits on super-resolution and how to break them. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24(9):1167-1183. 10.1109/TPAMI.2002.1033210

  16. Farsiu S, Robinson D, Elad M, Milanfar P: Robust shift and add approach to super-resolution. Appl. Digit. Image Process. XXVI 2003, 5203: 121-130. 10.1117/12.507194

  17. Park S, Park M, Moon G: Super-resolution image reconstruction: a technical overview. IEEE Signal Process. Mag. 2003, 20(3):21-36. 10.1109/MSP.2003.1203207

  18. Farsiu S, Robinson M, Elad M, Milanfar P: Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004, 13(10):1327-1344. 10.1109/TIP.2004.834669

  19. Hu H, Kondi L: A regularization framework for joint blur estimation and super-resolution of video sequences. ICIP 2005, 3: 329-332.

  20. van Ouwerkerk J: Image super-resolution survey. Image Vis. Comput. 2006, 24(10):1039-1052. 10.1016/j.imavis.2006.02.026

  21. Shen H, Zhang L, Huang B, Li P: A MAP approach for joint motion estimation, segmentation, and super resolution. IEEE Trans. Image Process. 2007, 16(2):479-490.

  22. Takeda H, Farsiu S, Milanfar P: Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007, 16(2):349-366.

  23. Yuan-Ran L, Dao-Qing D: Color superresolution reconstruction and demosaicing using elastic net and tight frame. IEEE Trans. Circuits Syst. I: Regular Papers 2008, 55(11):3500-3512.

  24. Omer O, Tanaka T: Joint blur identification and high-resolution image estimation based on weighted mixed-norm with outlier rejection. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2008, 1305-1308.

  25. Omer O, Tanaka T: Extraction of high-resolution frame from low-resolution video sequence using region-based motion estimation. IEICE Trans. Fund. 2010, E93-A(4):742-751. 10.1587/transfun.E93.A.742

  26. Omer O, Tanaka T: Region-based weighted-norm with adaptive regularization for resolution enhancement. Digit. Signal Process. 2011, 21(4):508-516. 10.1016/j.dsp.2011.02.005

  27. Protter M, Elad M, Takeda H, Milanfar P: Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Image Process. 2009, 18: 36-51.

  28. Baboulaz L, Dragotti P: Exact feature extraction using finite rate of innovation principles with an application to image super-resolution. IEEE Trans. Image Process. 2009, 18(2):281-298.

  29. Takeda H, Milanfar P, Protter M, Elad M: Super-resolution without explicit subpixel motion estimation. IEEE Trans. Image Process. 2009, 18(9):1958-1975.

  30. Lee IH, Bose N, Lin CW: Locally adaptive regularized super-resolution on video with arbitrary motion. 17th IEEE International Conference on Image Processing (ICIP) 2010, 897-900.

  31. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259-268. 10.1016/0167-2789(92)90242-F

  32. Richardson WH: Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 1972, 62: 55-59. 10.1364/JOSA.62.000055

  33. Lucy L: An iterative technique for the rectification of observed distributions. Astron. J. 1974, 79(6):745-754.

  34. Fergus R, Singh B, Hertzmann A, Roweis S, Freeman W: Removing camera shake from a single photograph. ACM Trans. Graph. (SIGGRAPH) 2006, 25(3):787-794. 10.1145/1141911.1141956

  35. Yuan L, Sun J, Quan L, Shum H: Progressive inter-scale and intra-scale non-blind image deconvolution. ACM Trans. Graph. (SIGGRAPH) 2008, 27(3):1-10.

  36. Takeda H, Milanfar P: Removing motion blur with space-time processing. IEEE Trans. Image Process. 2011, 20(10):2990-3000.

  37. Bradski G: The OpenCV library. Dr. Dobb’s Journal of Software Tools 2000.

  38. Do M, Vetterli M: Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. Image Process. 2002, 11: 146-158. 10.1109/83.982822

  39. Vandewalle P, Süsstrunk S, Vetterli M: A frequency domain approach to registration of aliased images with application to super-resolution. EURASIP J. Appl. Signal Process. 2006, 2006: 1-14.

  40. Meyer F: Topographic distance and watershed lines. Signal Process. 1994, 38: 113-125. 10.1016/0165-1684(94)90060-4

  41. Faktor A, Irani M: Space-time super-resolution from a single video. Proceedings of CVPR 2011, 3353-3360.

  42. Shechtman E, Caspi Y, Irani M: Space-time super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27(4):531-545.

Acknowledgements

This research was partly supported by a Grant-in-Aid for Scientific Research (B) 21300030, from the Japan Society for the Promotion of Science (JSPS).

Corresponding author

Correspondence to Takahiro Ogawa.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Ogawa, T., Izumi, D., Yoshizaki, A. et al. Super-resolution for simultaneous realization of resolution enhancement and motion blur removal based on adaptive prior settings. EURASIP J. Adv. Signal Process. 2013, 30 (2013). https://doi.org/10.1186/1687-6180-2013-30
