Open Access (CC BY-NC-ND 3.0). Published by De Gruyter, June 30, 2015

Content-Based Image Retrieval Using Edge and Gradient Orientation Features of an Object in an Image From Database

  • H. Kavitha and M. V. Sudhamani

Abstract

In this work, we present a technique for content-based image retrieval (CBIR) that combines the edge feature and the distribution of the gradient orientation of an object. First, the bidimensional empirical mode decomposition (BEMD) technique is employed to obtain the edge features of an image. Then, information about the gradient orientation is obtained using the histogram of oriented gradient (HOG) descriptor. These two features are extracted from the images and stored in the database for later use. When the user submits a query image, its features are extracted in the same way and compared with the features of the data set images. Based on similarity, the relevant images are selected as the result set, ranked from higher to lower similarity, and displayed on the user interface. The experiments are carried out using the Columbia Object Image Library (COIL-100) data set, a collection of 7200 color images of 100 different objects, each captured in 72 different orientations. Our proposed method achieves precision and recall values of 93.00 and 77.70, respectively. Taken individually, the precision and recall values are 82.25 and 68.54 for BEMD and 85.00 and 71.10 for HOG. The experimental results show that the combined method performs better than the individual methods. Experiments are also conducted in the presence of noise, and the robustness of the method is verified.

1 Introduction

The massive growth in the number of Internet users, along with the dropping price of digital cameras, has contributed to huge collections of digital images. These collections have created the need for novel and effective retrieval and storage systems. In the 1970s, the retrieval systems in use were text based. The difficulty of maintaining consistent naming conventions paved the way for content-based image retrieval (CBIR) systems in the early 1980s. CBIR combines various techniques to search for relevant images in a database based on their contents. CBIR has been widely used in fashion and graphic design, historical research, publishing and advertising, medical diagnosis, etc. Many researchers have explored this area in detail over the past few decades [4, 13, 15, 18, 20]. Many of the systems built in the early years of CBIR are based on global features (color, texture, or shape). Images are indexed in the query by image content (QBIC) system [4] based on the global texture and color features. Global features fall short in the perceptual modeling of shape, and when the objects in the images are partially occluded, they result in poor retrieval. In Ref. [13], images are indexed on local features in the FourEyes system: the images are first divided into equally sized squares; then shape, texture, and other similar local features are extracted from the individual parts of the image; finally, the images are indexed based on these features.

CBIR systems based on local features have been proposed to overcome the major drawbacks of global features [7]. The cornerstone of local features is the interest point, which enriches the local information of the image. In computer vision and image processing, corners and blobs are the most commonly used interest points. According to the literature, the most familiar work on interest point detectors includes Moravec's corner detector, the Harris detector, the histogram of oriented gradient (HOG), the scale invariant feature transform (SIFT), and speeded up robust features (SURF) [7, 12]. The two most promising approaches for robust region detection in an image are SIFT and SURF. Lowe proposed the SIFT features, which are invariant to scale, rotation, and translation, in 2004 [3]. SIFT does well in cluttered environments with objects in varying poses and with images containing partially occluded objects. Ahmed's work in the field of object recognition has demonstrated the utility of the SIFT features in this area. Luo Juan and Oubong Gwun have shown experimentally that PCA-SIFT does better for deformed images. SIFT has found many applications in image processing, such as image retrieval, image mosaicing, and image recognition. SURF was introduced by Bay in 2006. The cornerstones of SURF are the integral images used for image convolutions and the Fast–Hessian detector. Bay published SURF as a means to tackle point and line segment correspondences among images belonging to the same category; SURF has since been applied in many areas of computer vision. Both SIFT and SURF first detect interest points and then create an invariant descriptor based on the detected points. This invariant descriptor makes it possible to compare two images even under geometric, illumination, and viewpoint changes. SIFT produces 128-dimensional descriptors and SURF 64-dimensional ones; owing to its lower-dimensional descriptors, SURF is faster than SIFT.

Harris and Stephens introduced the Harris corner detector in 1988 [5], and it was later proposed as an interest point detector by Schmid and Mohr [16]. The basis of this detector is the autocorrelation function, which is used to identify pixels where the signal changes in two dimensions. A matrix is constructed from the autocorrelation function, and its eigenvalues correspond to the principal curvatures of the autocorrelation function. When both eigenvalues are large and distinct, a corner is found. The scale invariant interest point detector used to detect blobs in an image, known as the Hessian–Laplace detector, was introduced by Mikolajczyk and Schmid [12]. HOG was proposed in 2005 by Dalal and Triggs [2] and was originally used for human detection. The core idea behind the HOG descriptor is to describe the local appearance of an object and its shape in the image through the distribution of intensity gradients or edge directions. A combination of global and local features has been employed for object recognition in Ref. [14]: the global feature used is Hu's moment, which is invariant to scaling, rotation, and translation, and the local features adopted are PCA-SIFT and the Hessian–Laplace detector.

Empirical mode decomposition (EMD) is the most important part of the Hilbert–Huang transform. EMD decomposes complex data into smaller units known as intrinsic mode functions (IMFs). EMD has found wide application in signal and speech processing. Edge detection was carried out using BEMD in Refs. [1, 22]. In these papers, experiments were conducted using the Canny, Sobel, and BEMD techniques, and it was observed that BEMD performs better than the other two. This motivated us to develop a CBIR system that uses the BEMD technique for edge detection along with other features [8, 9, 19]. In Ref. [19], edge detection is based on the first IMF, and image retrieval uses the BEMD and median filtering techniques. In Ref. [9], a combination of feature techniques, BEMD and the Harris corner detector, is employed for object recognition: the edge information is obtained from BEMD, and the corners of the objects are detected with the Harris corner technique. A weight of 0.85 is given to the edge feature, as edges are more prominent than corners, especially when rotation and scaling are considered. The HSV color feature is combined with the BEMD and Harris corner features in Ref. [8], resulting in a substantial improvement in precision and recall. Furthermore, HOG features have found numerous applications in image processing and computer vision. These facts motivated us to choose BEMD and HOG as the feature combination.

A survey of CBIR systems shows that many image retrieval systems are based on retrieving single query objects. The work in Refs. [6, 10, 17] focuses on multi-object retrieval. Multi-object retrieval systems are based on the spatial relationships among the objects in an image, and these systems require complex query designs. The multi-object retrieval system in Ref. [6] focuses on shape-based image segmentation, applying gradient vector flow along with automatic initialization of active contours.

The rest of the paper is organized as follows: the feature extraction techniques adopted are discussed in Section 2; the proposed methodology is presented in Section 3; Section 4 deals with the similarity measures employed; Section 5 presents the experimental results; and Section 6 provides the conclusions and future work.

2 Feature Extraction

This section focuses on the features used in our work. We use two features, namely, the HOG descriptors and the BEMD edge features. These two techniques are discussed in the following subsections.

2.1 Bidimensional Empirical Mode Decomposition

Many CBIR systems have been built based on edge features. The Canny, Roberts, Prewitt, and Sobel edge detection techniques are the most commonly used; however, their results are poor on noisy images. In this work, the BEMD technique is used for detecting the edge feature [22]. EMD is a powerful tool, proposed by Huang primarily for the analysis of nonlinear and nonstationary data. The main advantage of EMD is that the basis functions are extracted directly from the original signal itself: the signal is decomposed by EMD into a set of basis functions known as IMFs. The two conditions that an IMF must satisfy are:

  1. Within the data set, the total number of zero crossings and extrema must be equal or may differ by at most one.

  2. The mean value of the upper and lower envelopes must be zero at any point.

Signals that satisfy the above conditions are considered IMFs. EMD handles one-dimensional data, so BEMD was proposed to deal with two-dimensional data. Sifting is the recursive process of obtaining IMFs. In our work, an image is taken as two-dimensional data f(x, y), where x = 1, …, M and y = 1, …, N, and BEMD is applied to it. The flowchart of BEMD is shown in Figure 1.
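For intuition, the two admissibility conditions can be checked numerically. The sketch below, assuming NumPy and a 1-D signal for simplicity, counts extrema and zero crossings for condition 1 and approximates condition 2 with the overall signal mean rather than true spline envelopes; the tolerance is an illustrative choice, not a value from the paper.

```python
# A sketch of the two IMF admissibility checks on a 1-D signal, assuming NumPy.
# Condition 2 is approximated with the overall signal mean instead of the
# spline-envelope mean, and the tolerance is an illustrative choice.
import numpy as np

def is_imf(signal, tol=0.1):
    s = np.asarray(signal, dtype=float)
    d = np.diff(s)
    # Condition 1: counts of extrema and zero crossings equal or differ by one.
    maxima = np.sum((d[:-1] > 0) & (d[1:] < 0))
    minima = np.sum((d[:-1] < 0) & (d[1:] > 0))
    zero_crossings = np.sum(s[:-1] * s[1:] < 0)
    cond1 = abs((maxima + minima) - zero_crossings) <= 1
    # Condition 2: mean of upper and lower envelopes ~ 0 (approximated here).
    cond2 = abs(s.mean()) < tol * np.abs(s).max()
    return cond1 and cond2
```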

Figure 1: Flow Chart of BEMD.

2.1.1 Sifting Process

Sifting is a recursive process for obtaining the IMFs. The core idea of sifting is to subtract the large-scale features from the signal repeatedly until only the fine-scale features remain. Let f(x, y) be the original signal. Extract the local minima and local maxima of the signal. Obtain the upper envelope u(x, y) by applying a cubic spline to the local maxima, and similarly obtain the lower envelope v(x, y) by applying a cubic spline to the local minima. Compute the mean envelope as m(x, y) = [u(x, y) + v(x, y)]/2. The new component is given by:

(1) h_k(x, y) = f(x, y) − m(x, y)

Check whether the new component satisfies the IMF conditions. If not, treat the new component h_k(x, y) as the original data and repeat the sifting process. Find the mean envelope m_{k1}(x, y) as explained above and compute the new component:

(2) h_{k1}(x, y) = h_k(x, y) − m_{k1}(x, y)

Repeat the above procedure until the first IMF, h_{kk}(x, y), is found:

(3) h_{kk}(x, y) = h_{k(k−1)}(x, y) − m_{kk}(x, y)

The first IMF, c1, is found after k iterations:

(4) c1 = h_{kk}(x, y)

Finally, the highest frequency components are found in c1. The stopping criterion of the sifting process is obtained by calculating the standard deviation (SD) given in Eq. (5); 0.3 is considered a typical value of SD. The first IMF extracts the highest frequency bands and thus provides the best information about object edges. Applying thresholding to the first IMF gives a cleaner edge map, and morphological operations are then applied to the first IMF to reduce the thickness of the detected edges.

(5) SD = Σ_{(x,y)∈Δ} |h_{k−1}(x, y) − h_k(x, y)|² / h²_{k−1}(x, y)

where Δ is the bidimensional domain of f(x, y).
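A compact sketch of this sifting loop, assuming NumPy/SciPy, is given below. Local extrema are detected with morphological max/min filters and the envelopes are fitted with scattered-data interpolation, a common simplification of the cubic-spline surface fitting described above; the iteration stops when the SD of Eq. (5) falls below 0.3.

```python
# A sketch of the sifting loop of Section 2.1.1, assuming NumPy/SciPy.
# Extrema detection via morphological filters and envelope fitting via
# griddata are simplifications of the cubic-spline step in the paper.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def envelope(f, extrema_mask):
    """Interpolate an envelope surface through the marked extrema."""
    ys, xs = np.nonzero(extrema_mask)
    gy, gx = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return griddata((ys, xs), f[ys, xs], (gy, gx),
                    method='cubic', fill_value=float(f.mean()))

def sift_once(f, size=3):
    """One sifting step: h(x, y) = f(x, y) - m(x, y), as in Eq. (1)."""
    u = envelope(f, f == maximum_filter(f, size))   # upper envelope u(x, y)
    v = envelope(f, f == minimum_filter(f, size))   # lower envelope v(x, y)
    return f - (u + v) / 2.0                        # subtract mean envelope m(x, y)

def first_imf(f, sd_thresh=0.3, max_iter=20):
    """Repeat sifting until the SD of Eq. (5) drops below 0.3; returns c1, Eq. (4)."""
    h_prev = f.astype(float)
    h = h_prev
    for _ in range(max_iter):
        h = sift_once(h_prev)
        sd = np.sum(np.abs(h_prev - h) ** 2 / (h_prev ** 2 + 1e-8))  # Eq. (5)
        if sd < sd_thresh:
            break
        h_prev = h
    return h
```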

2.1.2 Extraction of the Edge Feature With Median Filtering

The overall edge extraction process is summarized as follows:

  1. Convert the input image to gray scale.

  2. Apply histogram equalization to the gray scale image to enhance the contrast.

  3. Smooth the output image of step 2 using a median filter of size 3 × 3.

  4. Detect the edges using BEMD.

  5. Replace the edge values of the step 3 image with the BEMD edge values.

  6. Apply a dimension reduction technique, reducing the 128 × 128 features obtained to a 64-bin feature vector (FE).

Figure 2 illustrates the steps discussed above. Figure 2A presents the initial image. Figure 2B shows the image obtained after median filtering is applied to the histogram-equalized image. Figure 2C shows the edge information obtained by BEMD, and Figure 2D presents the image obtained after replacing the edge values of step 3 with the edge information obtained by BEMD.
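The preprocessing in steps 1–3 maps directly onto standard OpenCV calls; the sketch below also assumes the first_imf() routine sketched in Section 2.1.1. The threshold level and the 8 × 8 block-sum used to reduce the 128 × 128 edge map to 64 bins are our assumptions, since the paper does not spell out the reduction technique.

```python
# A sketch of the edge-feature pipeline (steps 1-6), assuming OpenCV and the
# first_imf() routine sketched above; the threshold level and the block-sum
# reduction to 64 bins are assumptions.
import cv2
import numpy as np

def bemd_edge_feature(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # step 1: gray scale
    gray = cv2.resize(gray, (128, 128))
    eq = cv2.equalizeHist(gray)                         # step 2: histogram equalization
    smooth = cv2.medianBlur(eq, 3)                      # step 3: 3 x 3 median filter
    imf1 = first_imf(smooth.astype(float))              # step 4: BEMD edge (first IMF)
    edges = (imf1 > np.percentile(imf1, 90)).astype(float)  # thresholding (assumed level)
    # step 5 (replacing the step 3 edge values with BEMD edge values) is omitted here;
    # step 6: reduce the 128 x 128 edge map to a 64-bin feature vector FE.
    return edges.reshape(8, 16, 8, 16).sum(axis=(1, 3)).ravel()
```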

Figure 2: Edge Detection Process. (A) Initial image. (B) Image after histogram equalization and median filtering. (C) Edge detected by BEMD. (D) Image after replacing edge value of (B) by edge value of (C).

2.2 Histogram of Oriented Gradient (HOG) Descriptors

Dalal and Triggs [2] proposed the HOG descriptor for human detection; HOG features have since found enormous application in image processing and computer vision. The core idea behind the HOG descriptor is to describe the local appearance of an object and its shape in the image through the distribution of intensity gradients or edge directions. First, the gradient values are calculated by applying the one-dimensional centered derivative masks in both the horizontal and vertical directions. This is done by filtering the gray scale image with the following kernels:

(6) D_X = [−1 0 1] and D_Y = [−1 0 1]^T

The x and y derivatives are obtained by convolving the input image I with these kernels:

(7) I_X = I ∗ D_X;  I_Y = I ∗ D_Y

The magnitude of the gradient is calculated as follows:

(8) |G| = √(I_X² + I_Y²)

The orientation of the gradient is computed as given below:

(9) Θ = arctan(I_Y / I_X)

Orientation binning is the second part of the implementation, in which the cell histograms are computed. Based on the computed gradient values, each pixel within a cell casts a weighted vote into an orientation histogram. Depending on whether the gradient is unsigned or signed, the histogram channels span 0–180° or 0–360°, respectively, and the cells are rectangular. According to Dalal and Triggs's experimental results, nine histogram channels perform best for unsigned gradients. The HOG features are extracted as described above and stored in the feature vector FH. Figure 3 describes the process of obtaining the HOG descriptors: Figure 3A shows the initial image; Figure 3B and C give the X and Y derivative details; and the gradient magnitude and the histogram of oriented gradients are shown in Figure 3D and E, respectively.
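The gradient and binning steps transcribe directly into NumPy. The sketch below, assuming an 8 × 8-pixel cell and the nine unsigned bins reported as best by Dalal and Triggs, implements Eqs. (6)–(9) and the magnitude-weighted voting; block normalization is omitted for brevity.

```python
# A NumPy transcription of Eqs. (6)-(9) plus orientation binning; the 8x8 cell
# size is an assumption, and block normalization is omitted for brevity.
import numpy as np

def hog_cells(gray, cell=8, bins=9):
    g = gray.astype(float)
    ix = np.zeros_like(g)
    iy = np.zeros_like(g)
    ix[:, 1:-1] = g[:, 2:] - g[:, :-2]             # Eq. (7) with D_X = [-1 0 1]
    iy[1:-1, :] = g[2:, :] - g[:-2, :]             # Eq. (7) with D_Y = [-1 0 1]^T
    mag = np.hypot(ix, iy)                         # Eq. (8): gradient magnitude
    ang = np.rad2deg(np.arctan2(iy, ix)) % 180.0   # Eq. (9), unsigned (0-180 degrees)
    ch, cw = g.shape[0] // cell, g.shape[1] // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):                            # magnitude-weighted votes per cell
        for j in range(cw):
            cell_bins = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            cell_mag = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for b in range(bins):
                hist[i, j, b] = cell_mag[cell_bins == b].sum()
    return hist.ravel()                            # feature vector FH
```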

Figure 3: Process of Obtaining HOG Descriptors. (A) Initial image. (B) X derivative of the image. (C) Y derivative of the image. (D) Magnitude of gradient. (E) Histogram of oriented gradient.

3 Proposed Method

Figure 4 depicts the architecture of our proposed system. In the offline phase, the BEMD edge features and the HOG features are extracted for the images in the database, appended together, and stored for future use. In the online phase, when a query image is given by the user, the HOG and BEMD edge features are extracted and appended in the same order. This combined feature vector is compared with the features of the database images, and the top-ranked images by similarity are retrieved and displayed. The HOG features capture the appearance of the local object along with its shape in the form of the intensity gradient distribution, while the BEMD features provide the edge information of the objects in the image. As shown in Figure 2D, the edge information is obtained from BEMD, and Figure 3D gives a very clear shape of the object; hence, the combination of the edge and shape features together gives better results.
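The two phases can be sketched end to end, assuming the bemd_edge_feature() and hog_cells() sketches from Section 2; the summed absolute difference used for ranking follows the similarity measure detailed in Section 4.

```python
# A sketch of the offline and online phases of Figure 4, assuming the
# bemd_edge_feature() and hog_cells() sketches above; the scalar ranking
# rule follows the Section 4 similarity measure.
import cv2
import numpy as np

def extract(path):
    fe = bemd_edge_feature(path)                       # 64-bin BEMD edge feature FE
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    fh = hog_cells(cv2.resize(gray, (128, 128)))       # HOG feature FH
    return np.concatenate([fe, fh])                    # features appended together

def build_index(image_paths):
    """Offline phase: extract and store features for every database image."""
    return {p: extract(p) for p in image_paths}

def retrieve(query_path, index, top_k=10):
    """Online phase: rank database images by distance to the query features."""
    q = extract(query_path)
    scored = sorted((np.sum(np.abs(feat - q)), p) for p, feat in index.items())
    return [p for _, p in scored[:top_k]]
```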

Figure 4: Architecture of Proposed Method.

4 The Similarity Measures

The HOG descriptors are based on counting the occurrences of gradient orientations in localized portions of an image; thus, the appearance of the local object and its shape are described by the distribution of intensity gradients or edge directions. The key advantage of the HOG descriptor stems from the fact that it operates on localized cells, which allows it to cope with geometric and photometric transformations. However, it does not handle object orientation well; to deal with object orientation, BEMD is employed, so the combination of the two techniques gives a better result. The edge features are extracted by applying the BEMD technique, and the feature dimension is reduced to 64 bins, which are stored in the feature vector. To compare feature vectors, let I be an image in the database and Q the query image. Then, I(n) and Q(n) represent the average pixel values for the bins, with n ranging from 1 to 64. The difference between the bin values of the database image and the query image is calculated using the formula below.

(10) θ(n) = |I(n) − Q(n)|

where n = 1, 2, …, 64.

This value is recorded in the feature vector FE. The HOG features are extracted and stored in their own feature vector, and the same formula is used to compute the difference between the HOG features of the query image and the database images; this difference is stored in the feature vector FH. The final distance between the query and the database image is computed by appending the feature vectors FE and FH, giving the similarity measure below.

(11) SM = [FE, FH]
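A direct transcription of these two equations is shown below, assuming NumPy arrays for the 64-bin edge features and the HOG features. The paper does not state how the appended vector SM is reduced to a single ranking score, so the final sum is an assumption.

```python
# A transcription of Eqs. (10) and (11); the final scalar reduction used for
# ranking is an assumption, as the paper does not state it explicitly.
import numpy as np

def similarity_measure(i_fe, q_fe, i_fh, q_fh):
    fe = np.abs(i_fe - q_fe)          # Eq. (10): theta(n) = |I(n) - Q(n)|, n = 1..64
    fh = np.abs(i_fh - q_fh)          # the same rule applied to the HOG features
    sm = np.concatenate([fe, fh])     # Eq. (11): SM = [FE, FH]
    return sm.sum()                   # scalar distance used for ranking (assumed)
```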

5 Experimental Results

The experiments are carried out using the Columbia Object Image Library (COIL-100) data set, a collection of 7200 color images of 100 different objects, each in 72 different orientations. The data set used for the experiments is shown in Figure 5, and 60 different orientations of the category 2 and category 9 images are shown in Figure 6. First, the HOG features alone were used for the retrieval of images, followed by the BEMD features alone; later, the combination of the two features was tested, both by assigning a weight to each feature and by appending the features. The latter method gave better results. The experimental analysis shows that for the images of categories 1, 3, 5, 6, 7, and 10, the shape of the object remains the same in all 72 orientations, but the inner details vary among the images; hence, the HOG features give better results than the edges for these categories. For category 2, the shape of the object differs across orientations, yet the detected edges are almost rectangular in all cases, so BEMD gives good results for this category. For the images in category 8, the detected edge is affected by the visibility of the handle's position in the image; therefore, the HOG feature gives good results in this category. For the category 9 images, the edges differ while the shape is the same in the front, side, and back poses of the frog, and here too HOG gives good results. Thus, rotation invariance is obtained by the combination of these two features.

Figure 5: Ten Categories of Images Used for the Experiment.

Figure 6: Sixty Different Orientations of Objects. (A) Objects of Category 2. (B) Objects of Category 9.

The proposed approach is evaluated based on precision, recall, and F-measure values [11]. Retrieval accuracy is measured by precision, the ratio of the number of relevant images retrieved to the total number of images retrieved.

(12) P = Number of relevant images retrieved / Total number of images retrieved

The retrieval robustness of the system is measured by recall, the ratio of the number of relevant images retrieved to the total number of relevant images present in the database.

(13) R = Number of relevant images retrieved / Total number of relevant images in DB

The F-measure is calculated as a unified performance measure that weights the importance of precision relative to recall.

(14) F_α = ((1 + α) × Precision × Recall) / ((α × Precision) + Recall)

where α = 0.25 weights precision over recall. The resulting precision and recall values are tabulated in Table 1. Graphs of the precision, recall, and F-measure values for the techniques taken individually and together are shown in Figure 7; the graph in Figure 7C shows that the combination of the two techniques yields better results.
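The three measures of Eqs. (12)–(14) are simple enough to state as code, with α = 0.25 as in the paper.

```python
# The evaluation measures of Eqs. (12)-(14), with alpha = 0.25 weighting
# precision over recall as in the paper.
def precision(relevant_retrieved, total_retrieved):
    return relevant_retrieved / total_retrieved            # Eq. (12)

def recall(relevant_retrieved, relevant_in_db):
    return relevant_retrieved / relevant_in_db             # Eq. (13)

def f_measure(p, r, alpha=0.25):
    return ((1 + alpha) * p * r) / (alpha * p + r)         # Eq. (14)
```

For example, applying f_measure to the average precision and recall of the combined method (0.93 and 0.777) gives roughly 0.89.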

Table 1

Precision and Recall Values for the Existing and the Proposed System.

| Category of image | HOG (Precision / Recall) | BEMD (Precision / Recall) | HOG + BEMD (Precision / Recall) |
|---|---|---|---|
| Category 1 | 100.00 / 83.33 | 99.17 / 82.64 | 100.00 / 83.33 |
| Category 2 | 57.50 / 47.92 | 69.50 / 57.92 | 83.67 / 69.72 |
| Category 3 | 100.00 / 83.33 | 98.83 / 82.36 | 100.00 / 83.33 |
| Category 4 | 54.00 / 45.00 | 43.00 / 35.83 | 63.83 / 53.19 |
| Category 5 | 100.00 / 83.33 | 97.17 / 80.97 | 100.00 / 83.33 |
| Category 6 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Category 7 | 93.50 / 77.92 | 100.00 / 83.33 | 100.00 / 83.33 |
| Category 8 | 74.00 / 61.67 | 67.50 / 56.25 | 92.33 / 76.94 |
| Category 9 | 73.80 / 61.53 | 47.33 / 39.44 | 92.17 / 76.81 |
| Category 10 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Average | 85.00 / 71.10 | 82.25 / 68.54 | 93.00 / 77.70 |
Figure 7: Precision, Recall, and F-measure Graphs. (A) Graph showing the precision values of the various techniques employed. (B) Graph showing the recall values of the various techniques employed. (C) Graph showing the F-measure values of the various techniques employed.

Figure 8 shows all the relevant images retrieved when the first image of categories 1, 4, 9, and 2 (left to right, top to bottom) is used as the query image. For category 1, all 72 relevant images are retrieved; for category 4, 46 are retrieved; 61 relevant images are found for category 9 and 64 for category 2.

Figure 8: The Relevant Objects Retrieved from the Database for the First Image as the Query Image. (A) Relevant objects retrieved for category 1. (B) Relevant objects retrieved for category 4. (C) Relevant objects retrieved for category 9. (D) Relevant objects retrieved for category 2.

Ref. [21] reports work on the COIL-100 database with experiments conducted using SURF features. Six of the image categories are common to our work and theirs, so we compared our results with their results on these categories. The HOG + BEMD technique yields better results than SURF feature-based retrieval, as shown in Table 2: the average precision is 58.16 in Ref. [21] and 74.97 for our proposed method.

Table 2

Comparison of SURF-Based [21] and Proposed HOG + BEMD-Based Precision Values.

| Image category | SURF-based precision | HOG + BEMD-based precision |
|---|---|---|
| 1 | 76.00 | 83.33 |
| 2 | 25.00 | 69.72 |
| 4 | 16.00 | 53.19 |
| 6 | 98.00 | 83.33 |
| 8 | 36.00 | 76.94 |
| 10 | 98.00 | 83.33 |
| Average | 58.16 | 74.97 |

Experiments were also conducted by adding Gaussian noise and salt and pepper noise. Figure 9 shows the noisy query images, and Figure 10 shows the images retrieved for these query images. The precision and recall values for the noisy query images are tabulated in Table 3. The results show that the system is robust in the presence of noise; performance is better with salt and pepper noise than with Gaussian noise.
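The noisy queries can be generated along the following lines, assuming NumPy; the Gaussian sigma and the salt-and-pepper density below are illustrative values, since the paper does not report the noise parameters used.

```python
# A sketch of the noisy-query generation for the robustness test; sigma and
# density are illustrative values, not the paper's noise parameters.
import numpy as np

def add_gaussian_noise(img, sigma=15.0):
    noisy = img.astype(float) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(img, density=0.02):
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < density / 2] = 0            # pepper pixels
    noisy[mask > 1 - density / 2] = 255      # salt pixels
    return noisy
```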

Figure 9: Image with Additive Gaussian Noise and Salt and Pepper Noise.

Figure 10: Relevant Objects Retrieved for Categories 1 and 4 Images with Additive Gaussian Noise and Salt and Pepper Noise, Respectively.

Table 3

Precision and Recall Values with Addition of Noise.

| Category of image | HOG + BEMD (Precision / Recall) | With Gaussian noise (Precision / Recall) | With salt and pepper noise (Precision / Recall) |
|---|---|---|---|
| Category 1 | 100.00 / 83.33 | 98.00 / 81.67 | 100.00 / 83.33 |
| Category 2 | 83.67 / 69.72 | 71.17 / 59.31 | 83.00 / 69.17 |
| Category 3 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Category 4 | 63.83 / 53.19 | 63.33 / 52.78 | 63.33 / 52.78 |
| Category 5 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Category 6 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Category 7 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Category 8 | 92.33 / 76.94 | 81.33 / 67.78 | 81.33 / 67.78 |
| Category 9 | 92.17 / 76.81 | 64.83 / 54.03 | 64.83 / 54.03 |
| Category 10 | 100.00 / 83.33 | 100.00 / 83.33 | 100.00 / 83.33 |
| Average | 93.00 / 77.70 | 87.90 / 73.20 | 89.30 / 74.00 |

6 Conclusions

In this work, we have proposed a combination of the HOG descriptors and the BEMD edge features for image retrieval. For some categories, the HOG feature performs better, and for categories where the edge information is almost identical in all orientations, BEMD does better; thus, the combination of the two methods yields good results, as discussed above. We have also carried out experiments in the presence of noise, and the results show that the proposed system is robust to noise. A relative performance comparison of the proposed technique has also been presented. Future work includes populating the database with more categories, and other feature combinations may be considered to improve the retrieval performance.


Corresponding author: H. Kavitha, Asst. Professor, Siddaganga Institute of Technology, Department of Information Science and Engineering, Tumkur 572103, India, e-mail:

Bibliography

[1] S. M. A. Bhuiyan, R. R. Adhami and J. F. Khan, Edge detection via a fast and adaptive bidimensional empirical mode decomposition, in: Proceedings of the IEEE Workshop on Machine Learning for Signal Processing, pp. 199–204, Oct. 2008. doi:10.1109/MLSP.2008.4685479.

[2] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, in: Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 886–893, 2005.

[3] D. G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2004), 91–110. doi:10.1023/B:VISI.0000029664.99615.94.

[4] M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafner, D. Lee, D. Petkovic, D. Steele and P. Yanker, Query by image and video content: the QBIC system, IEEE Comput. 28 (1995), 23–32.

[5] C. Harris and M. J. Stephens, A combined corner and edge detector, in: Proceedings of the Alvey Vision Conference, pp. 147–152, 1988. doi:10.5244/C.2.23.

[6] S. K. Katare, K. Suman Mitra and A. Banerjee, Content based image retrieval system for multi object image using combined features, in: Proceedings of the International Conference on Computing: Theory and Applications (ICCTA), pp. 595–599, 2007. doi:10.1109/ICCTA.2007.44.

[7] H. Kavitha and M. V. Sudhamani, Content-based image retrieval – a survey, Int. J. Adv. Res. Comput. Sci. 4 (2013), 14–20.

[8] H. Kavitha and M. V. Sudhamani, Object based image retrieval from database using combined features, Int. J. Comput. Appl. 76 (2013), 38–42.

[9] H. Kavitha and M. V. Sudhamani, Image retrieval based on object recognition using the Harris corner and edge detection technique, in: International Conference on Communication, VLSI & Signal Processing, pp. 181–184, 2013.

[10] C. S. Li, J. R. Smith, L. D. Bergman and V. Castelli, Sequential processing for content-based retrieval of composite objects, in: Proceedings of the SPIE/IS&T Symposium on Electronic Imaging: Science and Technology – Storage and Retrieval for Image and Video Databases VI, 1998.

[11] P. Manipoonchelvi and K. Muneeswaran, Multi region based image retrieval system, Sadhana Indian Acad. Sci. 39 (2014), 333–344. doi:10.1007/s12046-013-0203-8.

[12] K. Mikolajczyk and C. Schmid, Scale and affine invariant interest point detectors, Int. J. Comput. Vis. 60 (2004), 63–86. doi:10.1023/B:VISI.0000027790.02288.f2.

[13] T. P. Minka and R. W. Picard, Interactive learning using a society of models, Pattern Recogn. 30 (1997), 565–581. doi:10.1016/S0031-3203(96)00113-6.

[14] R. Muralidharan and C. Chandrasekar, Combining local and global feature for object recognition using SVM-KNN, in: 2012 International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME), pp. 1–7, 2012. doi:10.1109/ICPRIME.2012.6208278.

[15] Y. Rui, T. S. Huang and S. F. Chang, Image retrieval: current techniques, promising directions and open issues, J. Vis. Commun. Image Represent. 10 (1999), 39–62. doi:10.1006/jvci.1999.0413.

[16] C. Schmid and R. Mohr, Local grayvalue invariants for image retrieval, IEEE Trans. Pattern Anal. Machine Intell. 19 (1997), 530–534. doi:10.1109/34.589215.

[17] G. Scott, M. Klaric and C. Shyu, Modeling multi-object spatial relationships for satellite image database indexing and retrieval, CIVR, LNCS 3558 (2005), 247–256. doi:10.1007/11526346_28.

[18] N. Sebe, M. S. Lew, X. Zhou, T. S. Huang and E. M. Bakker, The state of the art in image and video retrieval, in: Proceedings of the International Conference on Image and Video Retrieval, Lecture Notes in Computer Science, vol. 2728, pp. 1–8, 2003. doi:10.1007/3-540-45113-7_1.

[19] P. Shrinivasacharya, H. Kavitha and M. V. Sudhamani, Content based image retrieval by combining median filtering and BEMD technique, in: Proceedings of the International Conference on Data Engineering and Communication Systems (ICDECS 2011), vol. 2, pp. 231–236, 2011.

[20] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta and R. Jain, Content-based image retrieval at the end of the early years, IEEE Trans. Pattern Anal. Machine Intell. 22 (2000), 1349–1380. doi:10.1109/34.895972.

[21] K. Velmurugan and S. Santhosh Baboo, Content-based image retrieval using SURF and colour moments, Glob. J. Comput. Sci. Technol. 11 (2011), 25–30.

[22] J. Z. Zhang and Z. Qin, Edge detection using fast bidimensional empirical mode decomposition and mathematical morphology, in: Proceedings of IEEE SoutheastCon, pp. 139–142, Mar. 2010. doi:10.1109/SECON.2010.5453903.

Received: 2014-4-23
Published Online: 2015-6-30
Published in Print: 2016-7-1

©2016 by De Gruyter

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
