Superpixels for large dataset subspace clustering

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Due to its numerous applications in computer vision, subspace clustering has been extensively studied over the past two decades. In recent years, most research has emphasized spectral clustering-based methods, which usually extract the affinity matrix from the self-representation of the data points. Although they achieve state-of-the-art results, their computation time becomes unbearable when the number of data points is large. In addition, the self-representation considers only the information provided by each single data point. In this paper, inspired by the idea of superpixels in image segmentation, we first propose superpixels for subspace clustering on large datasets. Then, we provide a strategy for applying the popular spectral clustering-based methods to these superpixels. Experimental results confirm that our superpixel-based subspace clustering methods improve computation speed dramatically. Moreover, since a superpixel takes into account the information provided by a group of data points, these methods can also improve clustering performance to some extent.

References

  1. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell 34(11):2274–2282

  2. Basri R, Jacobs DW (2003) Lambertian reflectance and linear subspaces. IEEE Trans Pattern Anal Mach Intell 25(2):218–233

  3. Candès EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 58(3):1–37

  4. Costeira JP, Kanade T (1998) A multibody factorization method for independently moving objects. Int J Comput Vis 29(3):159–179

  5. Elhamifar E, Vidal R (2009) Sparse subspace clustering. In: CVPR, pp 2790–2797

  6. Elhamifar E, Vidal R (2013) Sparse subspace clustering: algorithm, theory, and applications. IEEE Trans Pattern Anal Mach Intell 35(11):2765–2781

  7. Felzenszwalb PF, Huttenlocher DP (2004) Efficient graph-based image segmentation. Int J Comput Vis 59(2):167–181

  8. Golub GH, Loan CFV (1996) Matrix computations, 3rd edn. Johns Hopkins University Press, Baltimore

  9. Ham J, Lee DD (2008) Grassmann discriminant analysis: a unifying view on subspace-based learning. In: ICML, pp 376–383

  10. Harris J (1992) Algebraic geometry. Springer, Berlin

  11. LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324

  12. Ho J, Yang MH, Lim J, Lee KC, Kriegman DJ (2003) Clustering appearances of objects under varying illumination conditions. In: CVPR, pp 11–18

  13. Lee K, Ho J, Kriegman DJ (2005) Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans Pattern Anal Mach Intell 27(5):684–698

  14. Li Z, Chen J (2015) Superpixel segmentation using linear spectral clustering. In: CVPR, pp 1356–1363

  15. Lin Z, Chen M, Wu L, Ma Y (2009) The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. UIUC Technical Report, UILU-ENG-09-2215

  16. Lin Z, Liu R, Su Z (2011) Linearized alternating direction method with adaptive penalty for low-rank representation. In: NIPS, pp 612–620

  17. Liu G, Lin Z, Yan S, Sun J, Yu Y, Ma Y (2013) Robust recovery of subspace structures by low-rank representation. IEEE Trans Pattern Anal Mach Intell 35(1):171–184

  18. Liu G, Lin Z, Yu Y (2010) Robust subspace segmentation by low-rank representation. In: ICML, pp 663–670

  19. Liu R, Lin Z, la Torre FD, Su Z (2012) Fixed-rank representation for unsupervised visual learning. In: CVPR, pp 598–605

  20. Lowe DG (1999) Object recognition from local scale-invariant features. In: ICCV, pp 1150–1157

  21. Nene SA, Nayar SK, Murase H (1996) Columbia object image library (coil-20). Technical Report, CUCS-005-96

  22. Ojala T, Pietikäinen M, Mäenpää T (2000) Gray scale and rotation invariant texture classification with local binary patterns. In: ECCV, pp 404–420

  23. Peng X, Zhang L, Yi Z (2013) Scalable sparse subspace clustering. In: CVPR, pp 430–437

  24. Ren X, Malik J (2003) Learning a classification model for segmentation. In: ICCV, pp 10–17

  25. Shi J, Malik J (2000) Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 22(8):888–905

  26. Soltanolkotabi M, Candès EJ (2012) A geometric analysis of subspace clustering with outliers. Ann Stat 40(4):2195–2238

  27. Tang K, Dunson DB, Su Z, Liu R, Zhang J, Dong J (2016) Subspace segmentation by dense block and sparse representation. Neural Netw 75:66–76

  28. Tang K, Liu R, Su Z, Zhang J (2014) Structure-constrained low-rank representation. IEEE Trans Neural Netw Learn Syst 25(12):2167–2179

  29. Tang K, Liu X, Su Z, Jiang W, Dong J (2016) Subspace learning based low-rank representation. In: ACCV, pp 416–431

  30. Tang K, Su Z, Jiang W, Zhang J, Sun X, Luo X (2018) Robust subspace learning-based low-rank representation for manifold clustering. Neural Comput Appl. https://doi.org/10.1007/s00521-018-3617-8

  31. Tang K, Zhang J, Su Z, Dong J (2016) Bayesian low-rank and sparse nonlinear representation for manifold clustering. Neural Process Lett 44(3):719–733

  32. Vidal R (2011) Subspace clustering. IEEE Signal Process Mag 28(2):52–68

  33. Wang Y, Xu H, Leng C (2013) Provable subspace clustering: When LRR meets SSC. In: NIPS, pp 64–72

  34. Yan J, Pollefeys M (2006) A general framework for motion segmentation: independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In: ECCV, pp 94–106

  35. You C, Robinson DP, Vidal R (2016) Scalable sparse subspace clustering by orthogonal matching pursuit. In: CVPR, pp 3918–3927

  36. Zhang H, Wang S, Xu X, Chow T, Wu Q (2018) Tree2vector: learning a vectorial representation for tree-structured data. IEEE Trans Neural Netw Learn Syst 29(11):5304–5318

  37. Zhang H, Wang S, Zhao M, Xu X, Ye Y (2018) Locality reconstruction models for book representation. IEEE Trans Knowl Data Eng. https://doi.org/10.1109/TKDE.2018.2808953

Acknowledgements

The work of K. Tang was supported by the National Natural Science Foundation of China (No. 61702243) and the Educational Commission of Liaoning Province, China (No. L201683662). The work of Z. Su was supported by the High-tech Ship Research Program Support Project and the National Natural Science Foundation of China (No. 61572099). The work of W. Jiang was supported by the National Natural Science Foundation of China (No. 61771229). The work of J. Zhang was supported by the National Natural Science Foundation of China (No. 61702245) and the Educational Commission of Liaoning Province, China (No. L201683663).

Author information

Corresponding authors

Correspondence to Kewei Tang or Jie Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this work.

Appendix

An auxiliary variable \(\mathbf {J}\) must be introduced to make the objective function separable. Thus, optimization problem (13) can be reformulated as follows:

$$\begin{aligned} & \min \limits _{\mathbf {J},{\mathbf {Z}},{\mathbf {E}}}\Vert \mathbf {J}\Vert _{1,1}+\lambda \Vert {\mathbf {E}}\Vert _{2,1} \\ & \text {s.t.} \quad {\mathbf {X}}={\mathbf {X}}{\mathbf {Z}}+{\mathbf {E}}, \quad \mathrm{diag}(\mathbf {J})=0, \quad {\mathbf {Z}}=\mathbf {J}. \end{aligned}$$
(14)

The augmented Lagrange function is

$$\begin{aligned} \begin{aligned}&\Vert \mathbf {J}\Vert _{1,1}+\lambda \Vert {\mathbf {E}}\Vert _{2,1}+tr(\mathbf {\Lambda }_{1}^{T}({\mathbf {X}}-{\mathbf {X}}{\mathbf {Z}}-{\mathbf {E}}))+ tr(\mathbf {\Lambda }_{2}^{T}({\mathbf {Z}}-\mathbf {J})) \\&\quad +\frac{\mu }{2} (\Vert {\mathbf {X}}-{\mathbf {X}}{\mathbf {Z}}-{\mathbf {E}}\Vert _{F}^{2}+\Vert {\mathbf {Z}}-\mathbf {J}\Vert _{F}^{2}) \end{aligned} \end{aligned}$$
(15)

where \(\mathbf {\Lambda }_{1}\) and \(\mathbf {\Lambda }_{2}\) are Lagrange multipliers and \(\mu\) is a penalty parameter that gradually tends toward infinity. In this way, minimizing the augmented Lagrange function enforces the corresponding constraints as well as possible. The ADM algorithm for model (13), obtained by alternately minimizing over one variable while the others are fixed, is shown in Algorithm 2. Its convergence properties are the same as those discussed for LRR [17].

[Algorithm 2]

According to [6], the closed-form solution of the optimization model in step 1 can be obtained by first applying the shrinkage-thresholding operator [15] to \({\mathbf {Z}}+\frac{\mathbf {\Lambda }_{2}}{\mu }\) and then setting the diagonal elements to 0. For the closed-form solution of the optimization model in step 3, please refer to Lemma 3.2 in [18].
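The alternating updates described above can be sketched in NumPy as follows. This is a minimal illustration under our own parameter choices (initialization, \(\rho\), \(\mu_{\max }\), stopping rule), not the authors' reference implementation; only the three proximal/closed-form steps and the multiplier updates follow the derivation in the appendix.

```python
import numpy as np

def soft_threshold(M, tau):
    # Entrywise shrinkage-thresholding: prox of tau * ||.||_{1,1} (step 1).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def l21_shrink(M, tau):
    # Column-wise shrinkage: prox of tau * ||.||_{2,1} (step 3, Lemma 3.2 in LRR).
    out = np.zeros_like(M)
    norms = np.linalg.norm(M, axis=0)
    keep = norms > tau
    out[:, keep] = M[:, keep] * (1.0 - tau / norms[keep])
    return out

def admm_ssc(X, lam=0.5, mu=1.0, rho=1.1, mu_max=1e6, tol=1e-6, max_iter=500):
    """ADM sketch for min ||J||_{1,1} + lam*||E||_{2,1}
       s.t. X = XZ + E, diag(J) = 0, Z = J."""
    d, n = X.shape
    Z = np.zeros((n, n)); E = np.zeros((d, n))
    L1 = np.zeros((d, n)); L2 = np.zeros((n, n))
    XtX = X.T @ X                       # mu cancels in the Z-step system
    for _ in range(max_iter):
        # Step 1: shrink Z + L2/mu at 1/mu, then zero the diagonal.
        J = soft_threshold(Z + L2 / mu, 1.0 / mu)
        np.fill_diagonal(J, 0.0)
        # Step 2: quadratic in Z -> (X^T X + I) Z = X^T(X - E + L1/mu) + J - L2/mu.
        rhs = X.T @ (X - E + L1 / mu) + J - L2 / mu
        Z = np.linalg.solve(XtX + np.eye(n), rhs)
        # Step 3: column-wise shrinkage at lam/mu.
        E = l21_shrink(X - X @ Z + L1 / mu, lam / mu)
        # Multiplier updates and penalty increase.
        R1 = X - X @ Z - E
        R2 = Z - J
        L1 += mu * R1
        L2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E
```

As \(\mu\) grows, the constraint residuals \(\mathbf {X}-{\mathbf {X}}{\mathbf {Z}}-{\mathbf {E}}\) and \({\mathbf {Z}}-\mathbf {J}\) are driven toward zero, which is the sense in which minimizing the augmented Lagrange function enforces the constraints.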

Cite this article

Tang, K., Su, Z., Jiang, W. et al. Superpixels for large dataset subspace clustering. Neural Comput & Applic 31, 8727–8736 (2019). https://doi.org/10.1007/s00521-018-3914-2
