
3D Visual Content Datasets


Part of the book series: Signals and Communication Technology (SCT)

Abstract

Development and performance evaluation of efficient methods for coding, transmission, and quality assessment of 3D visual content require rich datasets of suitable test material. The use of such databases allows a fair comparison of systems under test. Moreover, publicly available and widely used datasets are crucial for experimentation leading to reproducible research. This chapter presents an overview of 3D visual content datasets relevant to research on coding, transmission, and quality assessment. Conventional stereoscopic and multiview image and video datasets are described first. Databases created with emerging technologies, including light-field imaging, are also addressed. Finally, the chapter covers databases of multimedia content annotated with ratings from subjective experiments, a necessary resource for understanding the complex problem of quality of experience in the consumption of 3D visual content.
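Many of the stereoscopic and multiview datasets covered in the chapter pair rectified camera views with ground-truth disparity or depth, so that estimation algorithms can be benchmarked against them. As a minimal sketch of that workflow, assuming OpenCV is installed and using hypothetical placeholder file names (im0.png/im1.png, in the style of Middlebury releases; the chapter itself prescribes no toolchain), the following Python snippet estimates a disparity map from a stereo pair:

```python
# Minimal sketch: disparity estimation on a rectified stereo pair, the kind
# of data provided by datasets such as the Middlebury stereo database.
# File names are hypothetical placeholders, not dataset-mandated paths.
import cv2

left = cv2.imread("im0.png", cv2.IMREAD_GRAYSCALE)   # left view
right = cv2.imread("im1.png", cv2.IMREAD_GRAYSCALE)  # right view

# Block matching: numDisparities must be a multiple of 16, blockSize odd.
# Both are tuning choices for this sketch, not values fixed by any dataset.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)  # int16, disparities scaled by 16

# Normalize to 8-bit for visualization and save.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", disp_vis.astype("uint8"))
```

The estimated map would then be scored against the dataset's ground-truth disparity, for instance by the fraction of pixels whose disparity error exceeds a fixed threshold.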


Notes

  1. Image and Video Quality Resources (http://stefan.winklerbros.net/resources.html).

  2. Qualinet Databases (http://dbq.multimediatech.cz/).

  3. COST Action IC1003 QUALINET (http://www.qualinet.eu/).

  4. COST Action IC1105 3D-ConTourNet (http://www.3d-contournet.eu/).

  5. MPI-Sintel dataset (http://sintel.is.tue.mpg.de/).

  6. Blender (http://www.blender.org).

  7. MPI-Sintel stereo videos with ground truth disparity (http://sintel.is.tue.mpg.de/stereo).

  8. MPI-Sintel ground truth depth maps (http://sintel.is.tue.mpg.de/depth).

  9. ETH3D dataset (www.eth3d.net).

  10. KITTI Vision Benchmark Suite (http://www.cvlibs.net/datasets/kitti/).

  11. Middlebury dataset (http://vision.middlebury.edu/mview).

  12. Stanford Spherical Gantry (https://graphics.stanford.edu/projects/gantry/).

  13. Stanford 3D Scanning Repository (http://graphics.stanford.edu/data/3Dscanrep/).

  14. CVLAB multiview evaluation dataset (https://cvlab.epfl.ch/).

  15. CVLAB stereo face database (http://cvlab.epfl.ch/data/stereoface).

  16. CVLAB stereo dataset of buildings (http://cvlab.epfl.ch/data/strechamvs).

  17. CVLAB multiview car dataset (http://cvlab.epfl.ch/data/pose).

  18. TUM Computer Vision Group (http://vision.in.tum.de/data/datasets/3dreconstruction).

  19. Cornell 3D Location Recognition Datasets (http://www.cs.cornell.edu/projects/p2f).

  20. Washington University Photo Tourism Dataset (http://phototour.cs.washington.edu/datasets/).

  21. Harvard Photometric Stereo Dataset (http://vision.seas.harvard.edu/qsfs/Data.html).

  22. 'DiLiGenT' Photometric Stereo Dataset (https://sites.google.com/site/photometricstereodata/).

  23. Stanford Computer Vision and Geometry Lab (http://cvgl.stanford.edu/resources.html).

  24. Stanford 2D-3D-Semantics Dataset (2D-3D-S) (http://buildingparser.stanford.edu/dataset.html).

  25. Joint Photographic Experts Group (https://jpeg.org/index.html).

  26. Moving Picture Experts Group (https://mpeg.chiariglione.org/).

  27. JPEG Pleno (https://jpeg.org/jpegpleno/index.html).

  28. JPEG Pleno Database (https://jpeg.org/plenodb/).

  29. ERC-funded Interfere project (http://www.erc-interfere.eu/).

  30. b-com hologram repository (https://hologram-repository.labs.b-com.com).

  31. Oakland Dataset (http://www.cs.cmu.edu/~vmr/datasets/oakland_3d/cvpr09/doc/).

  32. IRCCyN/IVC 3D Images dataset (http://ivc.univ-nantes.fr/en/databases/3D_Images/).

  33. LIVE 3D Image Quality Database Phase I (http://live.ece.utexas.edu/research/quality/live_3dimage_phase1.html).

  34. LIVE 3D Image Quality Database Phase II (http://live.ece.utexas.edu/research/quality/live_3dimage_phase2.html).

  35. MMSPG 3D Image Quality Assessment Database (http://mmspg.epfl.ch/3diqa).

  36. IRCCyN/IVC DIBR Images (http://ivc.univ-nantes.fr/en/databases/DIBR_Images/).

  37. MCL-3D Database (http://mcl.usc.edu/mcl-3d-database/).

  38. IRCCyN/IVC NAMA3DS1 (http://ivc.univ-nantes.fr/en/databases/NAMA3DS1_COSPAD1/).

  39. MMSPG 3D Video Quality Assessment Database (http://mmspg.epfl.ch/cms/page-58395.html).

  40. IRCCyN/IVC DIBR Videos (http://ivc.univ-nantes.fr/en/databases/DIBR_Videos/).

  41. LIRIS 3D Model Masking Database (http://liris.cnrs.fr/guillaume.lavoue/data/datasets.html).

  42. LIRIS/EPFL 3D Model General-Purpose Database (http://liris.cnrs.fr/guillaume.lavoue/data/datasets.html).

  43. IRCCyN/IVC 3D Gaze (http://ivc.univ-nantes.fr/en/databases/3D_Gaze/).

  44. Middlebury stereo database (http://vision.middlebury.edu/stereo/data/).

  45. EyeC3D: 3D Video Eye-tracking Dataset (http://mmspg.epfl.ch/eyec3d).

  46. IRCCyN/IVC Eye-tracking Database for Stereoscopic Videos (http://ivc.univ-nantes.fr/en/databases/Eyetracking_For_Stereoscopic_Videos/).
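A typical use of the subjectively annotated databases listed above (e.g., the LIVE 3D or IRCCyN/IVC sets) is to validate an objective quality metric by correlating its predictions with the mean opinion scores (MOS) gathered in the subjective experiment. The sketch below computes the two figures customarily reported, PLCC (prediction accuracy) and SROCC (prediction monotonicity); the score arrays are purely illustrative placeholders, and NumPy and SciPy are assumed available:

```python
# Minimal sketch: correlating objective metric scores with subjective MOS,
# the standard validation step for the quality databases listed above.
# All numbers below are illustrative placeholders, not real dataset values.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([4.2, 3.1, 2.5, 1.8, 4.8, 3.6])           # subjective ratings
metric = np.array([38.1, 31.4, 27.9, 22.3, 41.0, 33.2])  # e.g., PSNR in dB

plcc, _ = pearsonr(metric, mos)    # Pearson linear correlation coefficient
srocc, _ = spearmanr(metric, mos)  # Spearman rank-order correlation
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```

In practice, a monotonic (e.g., logistic) fitting is usually applied to the metric scores before computing PLCC, as in VQEG-style evaluations; the rank-based SROCC is unaffected by any such monotonic mapping.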


Author information


Correspondence to Karel Fliegel.



Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Fliegel, K. et al. (2019). 3D Visual Content Datasets. In: Assunção, P., Gotchev, A. (eds) 3D Visual Content Creation, Coding and Delivery. Signals and Communication Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-77842-6_11


  • DOI: https://doi.org/10.1007/978-3-319-77842-6_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-77841-9

  • Online ISBN: 978-3-319-77842-6

  • eBook Packages: Engineering, Engineering (R0)
