
Texture classification using rotation invariant models on integrated local binary pattern and Zernike moments

Abstract

Invariant texture analysis has attracted increasing attention because, in many practical applications, training and testing samples rarely share identical or similar orientations or viewpoints, which often degrades texture analysis. The local binary pattern (LBP) has been widely applied to texture classification due to its simplicity, efficiency, and rotation invariance. In this paper, an integrated local binary pattern (ILBP) scheme is proposed that combines the original rotation invariant LBP, an improved contrast rotation invariant LBP, and a direction rotation invariant LBP, thereby overcoming the deficiency of the original LBP, which ignores contrast and direction information. In addition, to overcome another major drawback of LBP, its locality, which deprives the descriptor of shape and spatial information about the holistic texture image, Zernike moment features are fused into the improved LBP texture features; Zernike moments are orthogonal, rotation invariant, and can be easily and rapidly computed to an arbitrarily high order. Experimental results show that the proposed method is remarkably superior to other state-of-the-art methods for rotation invariant texture feature extraction and classification.

1 Introduction

Texture analysis is an attractive topic in image processing and pattern recognition. It plays a vital role in many important applications such as object tracking and recognition, remote sensing, similarity-based image retrieval, and so on [1-4]. Guo et al. [5] summarized four primary problems in texture analysis: image classification based on texture content, image segmentation into homogeneous texture regions, texture synthesis for graphics applications, and shape information acquisition from texture cues.

Analyzing real-world textures is difficult, mainly because of uncertain factors such as inhomogeneity, illumination changes, and variability of texture appearance. Early researchers focused on statistical features for classifying texture images. Haralick et al. [6] first proposed co-occurrence statistics to describe texture features. In the nineties, the Gabor filtering method of Manjunath and Ma [7] was credited as an excellent technique for texture analysis. Although these methods achieved good performance, they generally make an explicit or implicit assumption that the training and testing samples have identical or similar orientations or are acquired from the same viewpoint [8]. In many practical applications, however, this assumption cannot be guaranteed. Practical experience shows that no matter how texture images are rotated, humans can still classify them correctly. Therefore, invariant texture analysis is in high demand in both theoretical research and practical application.

Invariant texture analysis has therefore received more and more attention; an excellent review is given by Zhang and Tan [8]. Among these methods, Kashyap and Khotanzad [9] first studied rotation invariant texture classification using a circular autoregressive model whose parameters are invariant to image rotation. Choe and Kashyap [10] proposed an autoregressive fractional difference model with rotation invariant parameters. Hidden Markov models [11] have also been used to explore rotation invariant texture classification. In addition, wavelet analysis is an excellent tool for obtaining rotation invariant texture features; for example, Jafari-Khouzani and Soltanian-Zadeh [12] proposed extracting wavelet energy features containing the texture orientation information to classify texture images, and a polar, analytic form of a two-dimensional Gabor wavelet [13] was used to derive rotation invariant texture features. More recently, methods based on statistical learning were proposed by Varma and Zisserman [14, 15], in which a rotation invariant texton library is first built from a training set, and a testing texture image is then classified according to its texton distribution. Crosier and Griffin [16] used basic image features (BIF) for texture classification and obtained excellent results. Furthermore, pioneering work on scale and affine invariant texture classification has been done using fractal analysis [17] and affine adaptation [18].

The local binary pattern (LBP) has enjoyed a strong reputation for its effectiveness, speed, and rotation invariance since it was first mentioned by Harwood et al. [19] and later introduced to the public by Ojala et al. [20]. Many researchers have developed LBP variants based on Ojala's idea; for example, Zhao et al. [21], Maani et al. [22], and Ahonen et al. [23] each improved the LBP method using frequency domain analysis. Mäenpää [24] pointed out that texture can be regarded as a two-dimensional phenomenon characterized by two orthogonal properties, patterns and the strength of the patterns, and that these two measures supplement each other in a very useful way. However, it is precisely the strength of the pattern, along with direction information, that the original LBP ignores. Guo et al. [5] proposed an adaptive LBP method that includes the directional statistical information of texture for rotation invariant texture classification. Motivated by their work, this paper combines the original rotation invariant LBP, an improved contrast rotation invariant LBP, and a direction rotation invariant LBP into an integrated LBP (ILBP), shown with the dashed line and box in Figure 1, to represent the texture information of an image; the combination effectively overcomes the inherent deficiency of the original LBP, namely ignoring contrast and direction information.

Figure 1. The framework of the proposed method. (The integrated LBP is shown using the dashed line and box.)

Although the LBP descriptor achieves excellent performance, it only describes local gray-level differences and lacks shape and spatial expression of the holistic texture image. Furthermore, unlike homogeneous textures such as bricks or sand, which have uniform statistical features, inhomogeneous textures like clouds or flowers generally do not yield robust texture features under conventional algorithms designed for homogeneous textures [25]. To make up for the shape and spatial information that LBP misses, the Zernike moment is a desirable choice.

Moments and functions of moments, which capture the global information of an image, have been successfully utilized as pattern features in many applications such as image recognition [26] and image retrieval [25]. Zernike moments are derived from the theory of orthogonal polynomials. Khotanzad and Hong [26] showed that orthogonal moments such as Zernike moments are better than other types of moments in terms of information redundancy and image representation. Compared with other orthogonal moments, Zernike moments possess the rotation invariance property and can be easily and rapidly computed to an arbitrarily high order.

Therefore, a promising rotation invariant texture classification method is proposed that combines ILBP features with rotation invariant Zernike moment features. These two features respectively describe the local and holistic information of texture images. With an effective fusion strategy, excellent performance is obtained in elaborate experiments on comprehensive texture databases, including the Columbia-Utrecht Reflection and Texture (CUReT) database [27], the Outex database [28], and the KTH-TIPS database [29]. The framework of the proposed method is shown using the solid line and box in Figure 1.

The rest of the paper is organized as follows. Section 2 explains the original LBP. Section 3 presents the proposed method in which contrast and direction information of LBP are considered, and shape and space information of the holistic image obtained by Zernike moments are fused during the course of feature extraction. The experimental results of the proposed method and the other compared methods are shown in Section 4. Finally, a conclusion is drawn.

2 Original LBP texture model

2.1 Basic LBP model

Ojala et al. [20] used LBP as a texture descriptor of an image, as shown in Figure 2; the pattern is composed of a central pixel and its neighbors. Taking the gray value of the central pixel as the threshold of the texton, the LBP code can be described by the following equation:

Figure 2. An example of the pattern and LBP.

$$\mathrm{LBP}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p$$
(1)

where $s(x)$ is the sign function, $s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$; $(x_c, y_c)$ is an allowable position of the central pixel; $g_c$ is the gray value of the central pixel; $g_p$ is the gray value of the $p$th neighbor; and $P$ is the number of neighbors.

By counting the frequencies of the LBP codes occurring at all allowable positions in the image, the texture spectrum histogram $S[h]$ ($h = 0, 1, \ldots, 2^P - 1$) can be obtained using the following equation:

$$S[h] = \frac{1}{u \times v} \sum_{x_c=0}^{u-1} \sum_{y_c=0}^{v-1} f(x_c, y_c)$$
(2)

where $f(x_c, y_c) = \begin{cases} 1, & \mathrm{LBP}(x_c, y_c) = h \\ 0, & \text{otherwise} \end{cases}$ and $u \times v$ is the size of the image.
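As a concrete illustration, here is a minimal NumPy sketch of Equations (1) and (2) for the 3×3 (P = 8) case; the function name `basic_lbp_histogram` and the neighbor ordering are our own choices, not from the paper:

```python
import numpy as np

def basic_lbp_histogram(img):
    """Compute the 3x3 LBP code of every interior pixel (Eq. 1)
    and the normalized texture spectrum histogram (Eq. 2)."""
    img = img.astype(np.float64)
    center = img[1:-1, 1:-1]
    # Eight neighbors in a fixed circular order (dy, dx offsets).
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int64)
    for p, (dy, dx) in enumerate(shifts):
        neighbor = img[1 + dy: img.shape[0] - 1 + dy,
                       1 + dx: img.shape[1] - 1 + dx]
        # Accumulate s(g_p - g_c) * 2^p over the P = 8 neighbors.
        codes += (neighbor >= center).astype(np.int64) << p
    # Eq. 2: frequency of each code over all allowable positions.
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return codes, hist / codes.size
```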

Subsequently, Ojala et al. [1] generalized the square LBP to a circular form with an arbitrary radius $R$ and number of neighbors $P$. Supposing that the coordinate of the central pixel $g_c$ is $(x_c, y_c)$, the coordinate of neighbor $g_p$ is $(x_c + R\cos(2\pi p/P),\; y_c - R\sin(2\pi p/P))$. The values of neighbors that do not fall exactly on the image grid can be estimated by interpolation. The relative position of the central pixel and its neighbors is shown in Figure 3.

Figure 3. The relative position of central pixel and neighbors.
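A minimal sketch of the circular sampling under the definitions above, assuming bilinear interpolation (the paper only says "an interpolation method") and an interior pixel (xc, yc):

```python
import numpy as np

def circular_neighbors(img, xc, yc, P=8, R=1.0):
    """Sample P neighbors on a circle of radius R around (xc, yc),
    bilinearly interpolating values that fall off the pixel grid."""
    img = img.astype(np.float64)
    values = []
    for p in range(P):
        x = xc + R * np.cos(2 * np.pi * p / P)
        y = yc - R * np.sin(2 * np.pi * p / P)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        # Weighted average of the four surrounding grid pixels.
        values.append(img[y0, x0] * (1 - fx) * (1 - fy)
                      + img[y0, x0 + 1] * fx * (1 - fy)
                      + img[y0 + 1, x0] * (1 - fx) * fy
                      + img[y0 + 1, x0 + 1] * fx * fy)
    return np.array(values)
```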

2.2 Uniform and rotation invariant LBP

A hidden problem exists in the abovementioned LBP: as the number of neighbors increases, the dimension of the histogram grows exponentially. For example, if $P$ is 16, then the dimension of the histogram is $2^{16} = 65{,}536$. Such a long texture spectrum is inconvenient to apply in practice.

In the LBP code, the number of spatial transitions (bitwise 0/1 changes) can be described as:

$$U(\mathrm{LBP}_{P,R}) = \left| s(g_{P-1} - g_c) - s(g_0 - g_c) \right| + \sum_{p=1}^{P-1} \left| s(g_p - g_c) - s(g_{p-1} - g_c) \right|$$
(3)

When $U(\mathrm{LBP}_{P,R}) \le 2$, the LBP pattern is defined as a uniform pattern $\mathrm{LBP}_{P,R}^{u2}$, of which there are $P(P-1) + 2$ discriminative patterns [1]. Restricting the histogram spectrum feature to uniform patterns is feasible because, by experiment and observation, uniform LBPs are fundamental properties of texture and provide the vast majority of patterns, sometimes over 90%. Detailed experimental results are listed in Section 4.

Furthermore, it is not difficult to observe that however an LBP is rotated, its structure is unchanged: the original LBP and the rotated LBP have the same circular order and the same number of bitwise 0/1 changes, as shown in Figure 4. To obtain a rotation invariant texture description, Ojala et al. [1] gave the following definition:

Figure 4. Some LBPs belonging to the same family.

$$\mathrm{LBP}_{P,R}^{ri} = \min\left\{ \mathrm{ROR}(\mathrm{LBP}_{P,R},\, p) \mid p = 0, \ldots, P-1 \right\}$$
(4)

where $ri$ means rotation invariance and $\mathrm{ROR}(x, p)$ performs a circular bitwise right shift of the $P$-bit code $x$ by $p$ positions, i.e., rotates the pattern $p$ steps around the center pixel. That is, the LBP with the minimal decimal value stands for all the LBPs belonging to the same family; Figure 4 shows some LBPs pertaining to the same family. The rotation invariant uniform LBP $\mathrm{LBP}_{P,R}^{riu2}$ can be calculated using the following equation:

$$\mathrm{LBP}_{P,R}^{riu2} = \begin{cases} \displaystyle\sum_{p=0}^{P-1} s(g_p - g_c), & \text{if } U(\mathrm{LBP}_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases}$$
(5)

where $riu2$ means the rotation invariant uniform pattern, which has $P + 2$ discriminative patterns. Thus the dimension of the texture spectrum histogram is greatly reduced. By counting the frequencies of the occurring $\mathrm{LBP}_{P,R}^{riu2}$ values at all allowable pixel positions in the image, the texture spectrum histogram $S_{\mathrm{original}}$ is obtained.
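The following helpers sketch Equations (3) to (5) for a single P-bit code; the names (`uniformity`, `ror_min`, `riu2_label`) are ours, not from the paper:

```python
def uniformity(code, P=8):
    """Eq. 3: circular count of bitwise 0/1 transitions in a P-bit code."""
    bits = [(code >> p) & 1 for p in range(P)]
    return sum(bits[p] != bits[(p + 1) % P] for p in range(P))

def ror_min(code, P=8):
    """Eq. 4: minimum over all circular bitwise rotations of the code."""
    mask = (1 << P) - 1
    return min(((code >> p) | (code << (P - p))) & mask for p in range(P))

def riu2_label(code, P=8):
    """Eq. 5: number of set bits if the pattern is uniform, else P + 1."""
    return bin(code).count("1") if uniformity(code, P) <= 2 else P + 1
```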

3 Integrated LBP and Zernike moments model

As mentioned above, LBP focuses on detailed local information when texture features are extracted; its major drawback is precisely this locality. Zernike moment features are just the opposite: they emphasize the holistic and shape information of an image but lack local detail. Therefore, LBP and Zernike moments complement each other in describing image information. What is more, both measures can be described as a histogram spectrum, so it is very convenient to fuse them.

3.1 Integrated rotation invariant LBP model

Besides the original rotation invariant pattern $\mathrm{LBP}_{P,R}^{riu2}$, two further kinds of rotation invariant LBPs are proposed: the contrast rotation invariant LBP, denoted $C\_\mathrm{LBP}_{P,R}^{riu2}$, and the direction rotation invariant LBP, denoted $O\_\mathrm{LBP}_{P,R}^{ri}$. These three kinds of rotation invariant LBPs are collectively referred to as the ILBP model.

3.1.1 Contrast rotation invariant LBP

Although the rotation invariant pattern $\mathrm{LBP}_{P,R}^{riu2}$ obtains excellent performance, it only describes whether the neighbors differ from the central pixel, not by how much. For example, consider two local textons whose central pixels are both 50 and whose neighbors are respectively {82,90,30,75,124,69,39,104} and {79,68,24,82,136,73,45,233}. Although their LBP codes are both {1,1,0,1,1,1,0,1}, the absolute contrast differences between the central pixel and the neighbors are different, respectively {32,40,20,25,74,19,11,54} and {29,18,26,32,86,23,5,183}. To supplement this missing information, a contrast rotation invariant LBP is added alongside the original $\mathrm{LBP}_{P,R}^{riu2}$. Let $C_p$ denote the absolute contrast difference between the central pixel and neighbor $p$ in each texton, i.e., $C_p = |g_p - g_c|$; the LBP of $C_p$ is obtained by the following equation:

$$C\_\mathrm{LBP}_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s(C_p - \mu_C)\, 2^p$$
(6)

where $\mu_C$ is the mean of the absolute contrast differences $C_p$ within the texton, $\mu_C = \frac{1}{P}\sum_{p=0}^{P-1} C_p$. Applying the same processing as in (5) to $C\_\mathrm{LBP}_{P,R}$ yields the contrast rotation invariant $C\_\mathrm{LBP}_{P,R}^{riu2}$. By counting the frequencies of the occurring $C\_\mathrm{LBP}_{P,R}^{riu2}$ values at all allowable pixel positions in the image, the texture spectrum histogram $S_C$ is obtained.
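A minimal sketch of Equation (6) for one texton, reusing the hypothetical `riu2_label` helper from Section 2.2 to fold the contrast code into its riu2 form:

```python
import numpy as np

def contrast_lbp_code(neighbors, center, P=8):
    """Eq. 6: threshold each contrast |g_p - g_c| against the
    texton mean mu_C, then fold to riu2 exactly as in Eq. 5."""
    C = np.abs(np.asarray(neighbors, dtype=np.float64) - center)
    mu_C = C.mean()
    code = sum(int(C[p] >= mu_C) << p for p in range(P))
    return riu2_label(code, P)
```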

3.1.2 Direction rotation invariant LBP

For stochastic texture images, as shown in Figure 5a, the direction information is not apparent, but for periodic or partly periodic texture images, as shown in Figure 5b, it is obvious. In the real world, most texture images contain directional cues, so supplementing the discriminative features with direction information is worth trying.

Figure 5. Some examples of (a) stochastic texture images and (b) periodic or partly periodic texture images.

The mean $\mu_{Op}$ and variance $\sigma_{Op}$ of $C_p$ over the whole texture image are used to describe the direction information along the orientation $2\pi p/P$:

$$\mu_{Op} = \frac{1}{u \times v} \sum_{i=1}^{u} \sum_{j=1}^{v} C_p(i, j), \qquad p = 1, \ldots, P$$
(7)

$$\sigma_{Op} = \frac{1}{u \times v} \sum_{i=1}^{u} \sum_{j=1}^{v} \left( C_p(i, j) - \mu_{Op} \right)^2, \qquad p = 1, \ldots, P$$
(8)

Therefore, two vectors $\mu_O = [\mu_{O1}, \mu_{O2}, \ldots, \mu_{OP}]$ and $\sigma_O = [\sigma_{O1}, \sigma_{O2}, \ldots, \sigma_{OP}]$ representing direction information are obtained. Figure 6 shows an example of the directional information $\mu_O$ and $\sigma_O$ for one texture image and for the corresponding image rotated by 90°. It can be observed that $\mu_O$ and $\sigma_O$ carry strong directional information and can be used to revise the histogram spectrum feature of the images so that more similarity between an image and its rotated versions is mined. $\mu_O$ and $\sigma_O$ are each converted into a rotation invariant LBP by thresholding against their own means. The direction rotation invariant information $O_\mu\_\mathrm{LBP}_{P,R}^{ri}$ and $O_\sigma\_\mathrm{LBP}_{P,R}^{ri}$ of the holistic texture image is obtained using the following equations:

Figure 6. Texture image and directional information. An example of (a) 0° texture image, (b) 90° rotated image, and (c) corresponding mean $\mu_{Op}$ and (d) variance $\sigma_{Op}$ of $C_p$. (Solid and dashed lines respectively denote the 0° and 90° image; here, P = 8 and R = 1.)

$$O_\mu\_\mathrm{LBP}_{P,R}^{ri} = \min\left\{ \mathrm{ROR}\left( \sum_{p=1}^{P} s(\mu_{Op} - \bar{\mu}_O)\, 2^{p-1},\; q \right) \;\middle|\; q = 0, \ldots, P-1 \right\}$$
(9)

$$O_\sigma\_\mathrm{LBP}_{P,R}^{ri} = \min\left\{ \mathrm{ROR}\left( \sum_{p=1}^{P} s(\sigma_{Op} - \bar{\sigma}_O)\, 2^{p-1},\; q \right) \;\middle|\; q = 0, \ldots, P-1 \right\}$$
(10)

where $\bar{\mu}_O = \frac{1}{P}\sum_{p=1}^{P} \mu_{Op}$ and $\bar{\sigma}_O = \frac{1}{P}\sum_{p=1}^{P} \sigma_{Op}$. Together, $O_\mu\_\mathrm{LBP}_{P,R}^{ri}$ and $O_\sigma\_\mathrm{LBP}_{P,R}^{ri}$ represent the direction rotation invariant $O\_\mathrm{LBP}_{P,R}^{ri}$ of the whole texture image. How the histogram spectrum feature of an image is revised using $O\_\mathrm{LBP}_{P,R}^{ri}$ is introduced in detail in Section 3.3.
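A sketch of Equations (7) to (10), assuming the per-orientation contrast maps $C_p$ have been precomputed into an array `C` of shape (P, u, v) and reusing the hypothetical `ror_min` helper:

```python
import numpy as np

def direction_lbp(C):
    """Eqs. 7-10: threshold the image-wide mean and variance of C_p
    against their own means and make the codes rotation invariant.
    C has shape (P, u, v): one |g_p - g_c| map per orientation p."""
    P = C.shape[0]
    mu_O = C.reshape(P, -1).mean(axis=1)     # Eq. 7
    sigma_O = C.reshape(P, -1).var(axis=1)   # Eq. 8
    code_mu = sum(int(mu_O[p] >= mu_O.mean()) << p for p in range(P))
    code_sg = sum(int(sigma_O[p] >= sigma_O.mean()) << p for p in range(P))
    return ror_min(code_mu, P), ror_min(code_sg, P)   # Eqs. 9 and 10
```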

3.2 Rotation invariant Zernike moments model

Although LBP is excellent in both performance and efficiency, it ignores the shape and spatial information of the holistic texture image. To supplement this missing information, rotation invariant Zernike moment features are used and fused. Because the basis set of ordinary moments is not orthogonal, Zernike [30] introduced a set of complex polynomials $\{V_{nm}(x, y)\}$ that form a complete orthogonal set over the interior of the unit circle, i.e., $x^2 + y^2 \le 1$. The form of these polynomials is:

$$V_{nm}(x, y) = V_{nm}(\rho, \theta) = R_{nm}(\rho) \exp(jm\theta)$$
(11)

where $n$ is a nonnegative integer; $m$ is an integer subject to the constraints that $n - |m|$ is even and $|m| \le n$; $\rho$ is the length of the vector from the origin to the pixel $(x, y)$; $\theta$ is the angle between the vector $\rho$ and the $x$ axis in the counterclockwise direction; and $R_{nm}(\rho)$ is the radial polynomial:

$$R_{nm}(\rho) = \sum_{s=0}^{(n - |m|)/2} (-1)^s \frac{(n - s)!}{s!\left( \frac{n + |m|}{2} - s \right)!\left( \frac{n - |m|}{2} - s \right)!}\, \rho^{n - 2s}$$
(12)

and $R_{n,-m}(\rho) = R_{nm}(\rho)$. These polynomials are orthogonal and satisfy:

$$\iint_{x^2 + y^2 \le 1} V_{nm}^{*}(x, y)\, V_{pq}(x, y)\, dx\, dy = \frac{\pi}{n + 1}\, \delta_{np}\, \delta_{mq}$$
(13)

where $\delta_{ab} = \begin{cases} 1, & a = b \\ 0, & \text{otherwise} \end{cases}$. Zernike moments are the projections of the image function onto these orthogonal basis functions, so the Zernike moment of order $n$ with repetition $m$ for a texture image $f(x, y)$ is:

$$A_{nm} = \frac{n + 1}{\pi} \iint_{x^2 + y^2 \le 1} V_{nm}^{*}(\rho, \theta)\, f(x, y)\, dx\, dy$$
(14)

For a digital image, the above equation can be changed into the following form:

$$A_{nm} = \frac{n + 1}{\pi} \sum_{x} \sum_{y} V_{nm}^{*}(\rho, \theta)\, f(x, y), \qquad x^2 + y^2 \le 1$$
(15)

When calculating the Zernike moments of a given image, the center of the image is taken as the origin and pixel coordinates are mapped into the unit circle; pixels falling outside the circle are not used, and $A_{nm}^{*} = A_{n,-m}$. It can be shown theoretically that Zernike moments have the rotation invariance property: if the Zernike moments of an image and of the same image rotated by an angle $\theta$ are denoted by $A_{nm}$ and $A'_{nm}$ respectively, they satisfy the following relation:

$$A'_{nm} = A_{nm} \exp(-jm\theta)$$
(16)

If the image is preprocessed using some simple methods [26], Zernike moments are also invariant to translation and scale, in addition to rotation. Using (15), the Zernike moments of different orders, such as $A_{00}$, $A_{11}$, $A_{20}$, $A_{22}$, and so on, can be obtained. By (16), the magnitudes $|A_{nm}|$ are rotation invariant, and the vector $S_Z$ composed of Zernike moment magnitudes of different orders is used as a histogram spectrum feature to describe the image information:

$$S_Z = \left[\, |A_{00}|, |A_{11}|, |A_{20}|, |A_{22}|, \ldots, |A_{nm}| \,\right]$$
(17)
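A compact sketch of Equations (12), (15), and (17); the mapping of pixels to the unit disc and the function name are our own choices:

```python
import math
import numpy as np

def zernike_spectrum(img, max_order=10):
    """Eqs. 12, 15, 17: rotation invariant Zernike magnitudes |A_nm|
    for all n <= max_order, m >= 0 with n - m even."""
    h, w = img.shape
    # Map pixel coordinates into the unit disc centered on the image.
    ys, xs = np.mgrid[0:h, 0:w]
    x = (2 * xs - w + 1) / (w - 1)
    y = -(2 * ys - h + 1) / (h - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    f = img.astype(np.float64) * (rho <= 1.0)  # discard pixels outside
    feats = []
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:
                continue
            # Radial polynomial R_nm (Eq. 12).
            R = np.zeros_like(rho)
            for s in range((n - m) // 2 + 1):
                c = ((-1) ** s * math.factorial(n - s)
                     / (math.factorial(s)
                        * math.factorial((n + m) // 2 - s)
                        * math.factorial((n - m) // 2 - s)))
                R += c * rho ** (n - 2 * s)
            V_conj = R * np.exp(-1j * m * theta)       # V*_nm
            A = (n + 1) / np.pi * (V_conj * f).sum()   # Eq. 15
            feats.append(abs(A))                       # Eq. 17: |A_nm|
    return np.array(feats)
```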

3.3 Construction and revision of the fusion feature

After the ILBP and Zernike moment features of an image are obtained as described above, the fusion feature is constructed and revised, and then the final classification decision is made.

3.3.1 Construction of fusion feature

Because the LBP and Zernike moment features are both in histogram spectrum form, fusing them is very convenient. In fact, many experiments were run with serial, parallel, and joint fusion methods; the serial method obtained the most stable and excellent performance. The serial method is very simple:

$$F = \left[\, S_{\mathrm{original}},\; S_C,\; S_Z \,\right]$$
(18)

where $F$ denotes the fused histogram spectrum feature. The histogram spectrum $S_{\mathrm{original}}$ of the original rotation invariant $\mathrm{LBP}_{P,R}^{riu2}$ and the histogram spectrum $S_C$ of the contrast rotation invariant $C\_\mathrm{LBP}_{P,R}^{riu2}$ can also be serially fused on their own; the related experimental results are given in Section 4.

3.3.2 Revision of the fusion feature

The preceding section proposed a method for acquiring the directional information of an image. Here, the method for revising the fused histogram spectrum feature using the direction rotation invariant $O\_\mathrm{LBP}_{P,R}^{ri}$, comprising $O_\mu\_\mathrm{LBP}_{P,R}^{ri}$ and $O_\sigma\_\mathrm{LBP}_{P,R}^{ri}$, is elaborated:

$$F' = F \left( 1 + c_1 \exp\!\left( c_2 \frac{O_\mu\_\mathrm{LBP}_{P,R}^{ri} - \mu(O_\mu)}{\sigma(O_\mu)} \right) \right) \left( 1 + c_1 \exp\!\left( c_2 \frac{O_\sigma\_\mathrm{LBP}_{P,R}^{ri} - \mu(O_\sigma)}{\sigma(O_\sigma)} \right) \right)$$
(19)

where $F'$ is the revised fused histogram spectrum feature; $\mu(O_\mu)$ and $\sigma(O_\mu)$ are respectively the mean and standard deviation of the direction rotation invariant $O_\mu\_\mathrm{LBP}_{P,R}^{ri}$ over all training images; $\mu(O_\sigma)$ and $\sigma(O_\sigma)$ are respectively the mean and standard deviation of the direction rotation invariant $O_\sigma\_\mathrm{LBP}_{P,R}^{ri}$ over all training images; and $c_1$ and $c_2$ are positive parameters. Besides the fused histogram spectrum feature $F$, $O\_\mathrm{LBP}_{P,R}^{ri}$ can also revise other histogram spectrum features, such as $S_{\mathrm{original}}$ generated by the original rotation invariant $\mathrm{LBP}_{P,R}^{riu2}$, $S_C$ generated by the contrast rotation invariant $C\_\mathrm{LBP}_{P,R}^{riu2}$, and even $S_Z$ calculated from the rotation invariant Zernike moments.
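A sketch of Equation (19) under our reading of the formula; the exact sign conventions inside the exponentials and the defaults for $c_1$ and $c_2$ are assumptions:

```python
import numpy as np

def revise_feature(F, o_mu, o_sigma, stats, c1=0.1, c2=1.0):
    """Eq. 19 (as reconstructed): scale the fused histogram F by two
    direction terms; `stats` holds the training-set mean/std of the
    O_mu and O_sigma codes: (mu_Omu, sd_Omu, mu_Osig, sd_Osig)."""
    mu_Omu, sd_Omu, mu_Osig, sd_Osig = stats
    w_mu = 1 + c1 * np.exp(c2 * (o_mu - mu_Omu) / sd_Omu)
    w_sigma = 1 + c1 * np.exp(c2 * (o_sigma - mu_Osig) / sd_Osig)
    return F * w_mu * w_sigma
```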

3.4 Classifier and multiscale fusion idea

The nearest neighbor rule is an effective and simple classification criterion. Many good measures exist for estimating the difference or similarity between two histograms, such as the log-likelihood ratio and the chi-square statistic [1]. The chi-square distance is chosen in the experiments because of its excellent performance in terms of both speed and recognition rate:

$$d(F_{\mathrm{train}}, F_{\mathrm{test}}) = \sum_{i=1}^{N} \frac{\left( F_{\mathrm{train},i} - F_{\mathrm{test},i} \right)^2}{F_{\mathrm{train},i} + F_{\mathrm{test},i}}$$
(20)

where $d$ is the chi-square distance between the revised fusion histogram $F_{\mathrm{train}}$ of a training image and the revised fusion histogram $F_{\mathrm{test}}$ of a testing image, subscript $i$ indexes the bins, and $N$ is the number of bins.
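A minimal nearest neighbor sketch under the chi-square distance of Equation (20); the small `eps` guard against empty bins is our addition:

```python
import numpy as np

def chi_square(a, b, eps=1e-10):
    """Eq. 20: chi-square distance between two histograms."""
    return np.sum((a - b) ** 2 / (a + b + eps))

def classify(train_feats, train_labels, test_feat):
    """Nearest neighbor decision under the chi-square distance."""
    d = [chi_square(f, test_feat) for f in train_feats]
    return train_labels[int(np.argmin(d))]
```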

In fact, a multiscale fusion idea can further improve the classification accuracy of the proposed method, i.e., multiple descriptors with various $(P, R)$ are used simultaneously, as sketched below. Because operators at different scales capture different structural spaces of the image, multiscale descriptors capture richer and more complete texture information.
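Multiscale fusion then amounts to concatenating the per-scale histograms, e.g., for the (P, R) combinations used in Section 4 (a sketch; the dictionary layout is our own):

```python
import numpy as np

def multiscale_feature(features):
    """features: dict mapping (P, R) -> revised histogram for one image.
    Fuse scales by simple concatenation."""
    scales = [(8, 1), (16, 2), (24, 3)]
    return np.concatenate([features[s] for s in scales])
```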

4 Experimental results

Many experiments have been elaborately designed and executed with the aim of demonstrating the effectiveness of the proposed method.

4.1 The databases

Three large and comprehensive texture databases are chosen for the study: the CUReT database [27], the Outex database [28], and the KTH-TIPS database [29]. The CUReT database includes 61 classes of real-world textures, each imaged under different combinations of illumination and viewing angle. Following Guo et al. [5], 92 sufficiently large images with a viewing angle of less than 60° are selected per class. Among them, the first 23 images in each class are used as training images; therefore, there are 1,403 (61 × 23) training models and 4,209 (61 × 69) testing samples. This design simulates the situation of having a small number of less comprehensive training images.

In the Outex database, each texture is captured at six spatial resolutions (100, 120, 300, 360, 500, and 600 dpi), nine rotation angles (0°, 5°, 10°, 15°, 30°, 45°, 60°, 75°, and 90°), and under three different simulated illuminants ('horizon', 'inca', and 'TL84'). The experimental images include canvas (46 classes), cardboard (1 class), carpet (12 classes), chips (12 classes), and wallpaper (17 classes), i.e., 99 classes of texture images altogether. Each class contains 27 images (3 illuminants × 9 angles at a spatial resolution of 600 dpi). The first 9 images in each class ('horizon' illuminant, 9 angles, 600 dpi) are chosen as training images; therefore, there are 891 (99 × 9) training models and 1,782 (99 × 18) testing samples.

The KTH-TIPS database contains 10 texture classes such as crumpled aluminum foil, sponge, brown bread, etc. Each texture is captured under 9 scales, 3 different illumination directions, and 3 different poses. Therefore, there are 81 images per material. The first 21 images in each class are chosen as training images. Therefore, there are 210 (10 × 21 = 210) training models and 600 (10 × 60 = 600) testing samples.

The proposed method is compared with state-of-the-art LBP methods, including $\mathrm{LBP}_{P,R}^{riu2}$ [1], the variance method ($\mathrm{VAR}_{P,R}$) [1], $\mathrm{LBP}_{P,R}^{riu2}/\mathrm{VAR}_{P,R}$ [1], the adaptive LBP method $\mathrm{ALBPF}_{P,R}^{riu2}$ [5], and the LBP histogram Fourier (LBPHF) method [21] (concatenating the sign and magnitude LBP histogram Fourier features). Because the outputs of $\mathrm{VAR}_{P,R}$ and $\mathrm{LBP}_{P,R}^{riu2}/\mathrm{VAR}_{P,R}$ are continuous, they were quantized into 128 and 16 bins, respectively. All the images are converted to gray scale, and to remove the effect of global intensity and contrast, each texture image is normalized to have an average intensity of 128 and a standard deviation of 20 [1].
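The stated normalization can be sketched as follows (a straightforward reading of the preprocessing in [1]):

```python
import numpy as np

def normalize_image(img, mean=128.0, std=20.0):
    """Normalize a gray image to average intensity 128
    and standard deviation 20."""
    img = img.astype(np.float64)
    return (img - img.mean()) / img.std() * std + mean
```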

4.2 The feasibility of uniform LBP

To show the effectiveness of dimensionality reduction using $\mathrm{LBP}_{P,R}^{u2}$, the proportions of occurrence frequencies of $\mathrm{LBP}_{P,R}^{u2}$ are calculated. Some statistics are shown in Table 1; the images are selected from the Outex database.

Table 1. The proportions of frequencies of $\mathrm{LBP}_{P,R}^{u2}$ (P = 8, R = 1)

As can be seen from Table 1, uniform LBPs occupy the vast majority of all local binary patterns, sometimes over 90%. Therefore, it is feasible to use uniform LBPs to reduce the dimensionality of the histogram spectrum.

4.3 Experimental results on CUReT database

In the experiments, different combinations of the three kinds of rotation invariant LBP operators and the rotation invariant Zernike moments are compared. '/O' denotes revising the histogram spectrum with the direction rotation invariant LBP; 'C' represents the histogram spectrum features captured by the contrast rotation invariant LBP; 'Z' is the Zernike moments method; and '_' denotes connecting two or three kinds of histogram spectrum features in series. For example, $\mathrm{LBP}_{P,R}^{riu2}$_C_Z represents serially connecting the original rotation invariant $\mathrm{LBP}_{P,R}^{riu2}$, the contrast rotation invariant $C\_\mathrm{LBP}_{P,R}^{riu2}$, and the rotation invariant Zernike moments $A_{nm}$, while $\mathrm{LBP}_{P,R}^{riu2}$_C_Z/O represents revising the fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z by the direction rotation invariant $O\_\mathrm{LBP}_{P,R}^{ri}$. The numbers 5, 8, and 10 denote the order of the Zernike moments. VZ_MR4 and VZ_MR8 respectively denote the MR4 and MR8 variants of the MR filter bank method [14, 15]. Table 2 lists the experimental results of the different methods on the CUReT database.

Table 2. Recognition rates of different methods on the CUReT database

As can be seen from Table 2, first, the recognition rate of the contrast rotation invariant $C\_\mathrm{LBP}_{P,R}^{riu2}$ alone (denoted 'C' in Table 2) is worse than that of the original rotation invariant $\mathrm{LBP}_{P,R}^{riu2}$. For example, the recognition rates of $\mathrm{LBP}_{P,R}^{riu2}$ reach 62.25%, 64.93%, and 68.33% when $(P, R)$ is (8,1), (16,2), and (24,3), respectively, whereas the results of $C\_\mathrm{LBP}_{P,R}^{riu2}$ are 52.58%, 51.41%, and 50.18% in the same cases. This shows that $\mathrm{LBP}_{P,R}^{riu2}$ contains richer information than $C\_\mathrm{LBP}_{P,R}^{riu2}$.

Second, the role of contrast information, for both $\mathrm{VAR}_{P,R}$ and $C\_\mathrm{LBP}_{P,R}^{riu2}$, decreases as the number of neighbors and the size of the texton increase. This indicates that the reliability of the difference values between the central pixel and the neighbors declines as the texton grows. In contrast, the recognition rate of the original rotation invariant $\mathrm{LBP}_{P,R}^{riu2}$ grows as the number of neighbors and the size of the texton increase.

Third, among the compared LBP methods, LBPHF and the adaptive LBP method obtain better results. For the non-LBP methods, MR8 performs better than MR4 because of its richer feature representation.

Fourth, for Zernike moment features, the recognition rate grows as the order increases; the higher the order, the richer the detailed information contained in the Zernike moment histogram spectrum. Fifth, directional information improves the recognition results of different features, including LBP, Zernike moments, and the fused histogram spectrum.

Finally, fusion can effectively boost the recognition results. For example, when P = 8 and R = 1, recognition rates of 62.25%, 52.58%, and 36.07% are obtained by $\mathrm{LBP}_{P,R}^{riu2}$, $C\_\mathrm{LBP}_{P,R}^{riu2}$, and the Zernike moments (order 10) alone, respectively; the fusion features $\mathrm{LBP}_{P,R}^{riu2}$_C and $\mathrm{LBP}_{P,R}^{riu2}$_C_Z reach 67.31% and 76.36%, respectively.

By applying the multiscale idea of Section 3, better results are obtained. For example, recognition rates reach 77.33% and 81.94% with the different-radius, different-neighbor fusion features $\mathrm{LBP}_{P,R}^{riu2}$_C(8,1)+(16,2)+(24,3) and $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(8,1)+(16,2)+(24,3), and 81.84% and 78.33% with the different-radius, same-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C(16,1)+(16,2)+(16,3) and the same-radius, different-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(8,2)+(16,2)+(24,2), respectively. Here, the Zernike moment features use 10-order moments, and the multiscale fusion features are obtained by simply concatenating the histogram features of different scales; better performance can be expected with more ingenious fusion strategies [31]. Because the LBPHF results are the most stable among the compared methods, the recognition rate of its different-radius, different-neighbor fusion feature LBPHF(8,1)+(16,2)+(24,3) was also calculated; it reaches 71.77%.

4.4 Experimental results on Outex database

In this section, all the experiments use the same methods, and the results are listed in Table 3. Because the images in the Outex database are larger than those in the CUReT database, besides $\mathrm{VAR}_{P,R}$ and $\mathrm{LBP}_{P,R}^{riu2}/\mathrm{VAR}_{P,R}$, many methods report 'out of memory' when the number of neighbors P is 24 and the radius R is 3. Therefore, results for the scale P = 24 and R = 3 are not listed in Table 3.

Table 3. Recognition rates of different methods on the Outex database

As can be seen from Table 3, first, the results of the original rotation invariant $\mathrm{LBP}_{P,R}^{riu2}$ are better than those of the contrast rotation invariant $C\_\mathrm{LBP}_{P,R}^{riu2}$. Second, the results of $\mathrm{LBP}_{P,R}^{riu2}$ improve as the number of neighbors and the size of the texton increase, whereas the results of $C\_\mathrm{LBP}_{P,R}^{riu2}$ show the opposite trend.

Third, for the Zernike moment method, the recognition rate grows as the order increases, the same trend as on the CUReT database. In addition, the results of the Zernike moments are excellent, mainly for two reasons. On the one hand, angle changes are heavily emphasized in the Outex images. On the other hand, Zernike moment features possess the rotation invariance property and describe the shape and spatial information of the image well, so they are very suitable for recognizing images at different rotation angles. Because the direction information of the image has already been fully mined by the Zernike moments, the proposed direction rotation invariant LBP can hardly affect the original feature histogram further.

Finally, the fusion method remarkably improves the results. For example, when P and R are 16 and 2 respectively, $\mathrm{LBP}_{P,R}^{riu2}$ and $C\_\mathrm{LBP}_{P,R}^{riu2}$ obtain recognition rates of 31.03% and 15.38%, while the fusion features $\mathrm{LBP}_{P,R}^{riu2}$_C and $\mathrm{LBP}_{P,R}^{riu2}$_C_Z reach 32.72% and 71.16%, respectively (Zernike moments calculated with a 10-order parameter). However, the fusion results are worse than those of the Zernike moments alone. This phenomenon is easy to explain from a signal processing point of view: when the quality difference between two signal sources is too large, the fusion result suffers because the worse signal disturbs the better one like noise. Therefore, the recognition rates of the fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z are worse than those of the Zernike moments alone but greatly better than those of any single texture feature, such as $\mathrm{LBP}_{P,R}^{riu2}$ or the contrast LBP, and even than those of the fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C.

The multiscale method was also tried on the Outex database, and excellent performance was obtained. For example, the recognition rates reach 72.17%, 68.86%, and 74.41% with the different-radius, different-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(8,1)+(16,2), the different-radius, same-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(16,1)+(16,2), and the same-radius, different-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(8,2)+(16,2), respectively; Zernike moments are calculated with a 10-order parameter. Furthermore, the recognition rate of the LBPHF method with the different-radius, different-neighbor fusion feature LBPHF(8,1)+(16,2) was also calculated; it reaches 56.73%.

4.5 Experimental results on KTH-TIPS database

In this section, all the experiments use the same methods, and the results are listed in Table 4. Because the trends of most of the results are similar to those on the CUReT and Outex databases, only the differing phenomena are noted here. First, the recognition rates of the $\mathrm{ALBPF}_{P,R}^{riu2}$ and LBPHF methods decrease as the number of neighbors and the size of the texton increase. Second, compared with the results on the CUReT and Outex databases, the role of contrast information is very pronounced, sometimes even stronger than that of $\mathrm{LBP}_{P,R}^{riu2}$; the reason may be that the images in the KTH-TIPS database contain sharp scale changes.

Table 4. Recognition rates of different methods on the KTH-TIPS database

In addition, the multiscale method further improves the results. For example, the recognition rates reach 64.50%, 62.33%, and 63.83% with the different-radius, different-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(8,1)+(16,2)+(24,3), the different-radius, same-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(16,1)+(16,2)+(16,3), and the same-radius, different-neighbor fusion feature $\mathrm{LBP}_{P,R}^{riu2}$_C_Z(8,2)+(16,2)+(24,2), respectively; Zernike moments are calculated with a 10-order parameter. Furthermore, the recognition rate of the LBPHF method with the different-radius, different-neighbor fusion feature LBPHF(8,1)+(16,2)+(24,3) was also calculated; it reaches 55.83%.

In a word, the proposed method obtains more exact, stable, and robust results than the other methods, including $\mathrm{LBP}_{P,R}^{riu2}$, $\mathrm{VAR}_{P,R}$, $\mathrm{LBP}_{P,R}^{riu2}/\mathrm{VAR}_{P,R}$, $\mathrm{ALBPF}_{P,R}^{riu2}$, LBPHF, and the MR methods. Although the Zernike moment features alone are outstanding on the Outex database, they are not stable compared with the proposed method, since their results on the CUReT and KTH-TIPS databases are very poor. In addition, the multiscale idea further notably improves the recognition results.

5 Conclusions

LBP is an excellent tool for texture classification because of its simplicity, efficiency, and rotation invariance. However, two main adverse factors weaken its performance: it ignores contrast and direction information, and it lacks shape and spatial expression of the holistic texture image. To effectively make up for the missing information, rotation invariant contrast and direction information is added to the original rotation invariant LBP texture feature, forming the ILBP. In addition, Zernike moments are fused into the improved LBP texture features because they effectively describe the shape and spatial information of the holistic image, possess orthogonality and rotation invariance, and can be easily and rapidly computed to an arbitrarily high order. Experimental results on the large and comprehensive CUReT, Outex, and KTH-TIPS texture databases show that the proposed method outperforms other classic LBP and non-LBP methods, and that the multiscale idea further remarkably improves the recognition results.

References

1. Ojala T, Pietikäinen M, Mäenpää T: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Machine Intell. 2002, 24: 971-987. 10.1109/TPAMI.2002.1017623
2. Zhang L, Zou B, Zhang J, Zhang Y: Classification of polarimetric SAR image based on support vector machine using multiple-component scattering model and texture features. EURASIP J. Adv. Signal Process. 2010, 3: 1-10.
3. Sajn L, Kononenko I: Multiresolution image parameterization for improving texture classification. EURASIP J. Adv. Signal Process. 2008, 2: 1-13.
4. Wang Y, He DJ, Yu CC, Jiang TQ, Liu ZW: Multimodal biometrics approach using face and ear recognition to overcome adverse effects of pose changes. J. Electron. Imaging 2012, 21: 043026-1-043026-11.
5. Guo ZH, Zhang L, Zhang D, Zhang S: Rotation invariant texture classification using adaptive LBP with directional statistical features. In Proceedings of the 17th IEEE International Conference on Image Processing. IEEE, Hong Kong, China; 2010:285-288.
6. Haralick RM, Shanmugam K, Dinstein I: Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3: 610-621.
7. Manjunath B, Ma W: Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Machine Intell. 1996, 18: 837-842. 10.1109/34.531803
8. Zhang JG, Tan TN: Brief review of invariant texture analysis methods. Pattern Recogn. 2002, 35: 735-747. 10.1016/S0031-3203(01)00074-7
9. Kashyap RL, Khotanzad A: A model-based method for rotation invariant texture classification. IEEE Trans. Pattern Anal. Machine Intell. 1986, 8: 472-481.
10. Choe Y, Kashyap RL: 3-D shape from a shaded and textural surface image. IEEE Trans. Pattern Anal. Machine Intell. 1991, 13: 907-918. 10.1109/34.93809
11. Wu WR, Wei SC: Rotation and gray-scale transform invariant texture classification using spiral resampling, subband decomposition, and hidden Markov model. IEEE Trans. Image Process. 1996, 5: 1423-1434. 10.1109/83.536891
12. Jafari-Khouzani K, Soltanian-Zadeh H: Radon transform orientation estimation for rotation invariant texture analysis. IEEE Trans. Pattern Anal. Machine Intell. 2005, 27: 1004-1008.
13. Haley GM, Manjunath BS: Rotation-invariant texture classification using a complete space-frequency model. IEEE Trans. Image Process. 1999, 8: 255-269. 10.1109/83.743859
14. Varma M, Zisserman A: Texture classification: are filter banks necessary? In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, Madison, USA; 2003:691-698.
15. Varma M, Zisserman A: A statistical approach to texture classification from single images. Int. J. Comput. Vision 2005, 62: 61-81. 10.1007/s11263-005-4635-4
16. Crosier M, Griffin LD: Using basic image features for texture classification. Int. J. Comput. Vision 2010, 88: 447-460. 10.1007/s11263-009-0315-0
17. Xu Y, Ji H, Fermüller C: Viewpoint invariant texture description using fractal analysis. Int. J. Comput. Vision 2005, 38: 85-100.
18. Lazebnik S, Schmid C, Ponce J: A sparse texture representation using local affine regions. IEEE Trans. Pattern Anal. Machine Intell. 2005, 27: 1265-1278.
19. Harwood D, Ojala T, Pietikäinen M, Kelman S, Davis L: Texture classification by center-symmetric auto-correlation, using Kullback discrimination of distributions. Pattern Recogn. Lett. 1995, 16: 1-10. 10.1016/0167-8655(94)00061-7
20. Ojala T, Pietikäinen M, Harwood D: A comparative study of texture measures with classification based on featured distributions. Pattern Recogn. 1996, 29: 51-59. 10.1016/0031-3203(95)00067-4
21. Zhao GY, Ahonen T, Matas J, Pietikäinen M: Rotation-invariant image and video description with local binary pattern features. IEEE Trans. Image Process. 2012, 21: 1465-1477.
22. Maani R, Kalra S, Yang YH: Rotation invariant local frequency descriptors for texture classification. IEEE Trans. Image Process. 2013, 22: 2409-2419.
23. Ahonen T, Matas J, He C, Pietikäinen M: Rotation invariant image description with local binary pattern histogram Fourier features. In Image Analysis. Springer, Berlin Heidelberg; 2009:61-70.
24. Mäenpää T: The local binary pattern approach to texture analysis: extensions and applications. Ph.D. dissertation, Dept. Elect. Inf. Eng., University of Oulu, Oulu, Finland; 2003.
25. Kim CY, Kwon OJ, Choi S: A practical system for detecting obscene videos. IEEE Trans. Consum. Electron. 2011, 57: 646-650.
26. Khotanzad A, Hong YH: Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Machine Intell. 1990, 12: 489-497. 10.1109/34.55109
27. Dana KJ, van Ginneken B, Nayar SK, Koenderink JJ: Reflectance and texture of real-world surfaces. ACM Trans. Graphic. 1999, 18: 1-34. 10.1145/300776.300778
28. Ojala T, Mäenpää T, Pietikäinen M, Viertola J, Kyllönen J, Huovinen S: Outex - new framework for empirical evaluation of texture analysis algorithms. In Proceedings of the International Conference on Pattern Recognition. IEEE, Quebec, Canada; 2002:701-706.
29. Caputo B, Hayman E, Fritz M, Eklundh JO: Classifying materials in the real world. Image Vis. Comput. 2010, 28(1): 150-163. 10.1016/j.imavis.2009.05.005
30. Zernike F: Diffraction theory of the cut procedure and its improved form, the phase contrast method. Physica 1934, 1: 689-704. 10.1016/S0031-8914(34)80259-5
31. Woods K, Kegelmeyer WP Jr, Bowyer K: Combination of multiple classifiers using local accuracy estimates. IEEE Trans. Pattern Anal. Machine Intell. 1997, 19: 405-410. 10.1109/34.588027


Acknowledgements

The authors sincerely thank postdoctoral researcher Zhenhua Guo from Tsinghua University and Professor Guoying Zhao from the University of Oulu for sharing the source codes of the adaptive LBP method and the LBP histogram Fourier method. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61171068.

Author information


Correspondence to Yu Wang.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, Y., Zhao, Y. & Chen, Y. Texture classification using rotation invariant models on integrated local binary pattern and Zernike moments. EURASIP J. Adv. Signal Process. 2014, 182 (2014). https://doi.org/10.1186/1687-6180-2014-182
