
Motion-Compensated Spatio-Temporal Filtering for Multi-Image and Multimodal Super-Resolution

International Journal of Computer Vision

Abstract

The classical multi-image super-resolution model assumes that the super-resolved image is related to the low-resolution frames by warping, convolution and downsampling. State-of-the-art algorithms either use explicit registration to fuse the information along each pixel's trajectory or exploit spatial and temporal similarities. We propose to combine both ideas, making use of inter-frame motion while exploiting spatio-temporal redundancy with patch-based techniques. We introduce a non-linear filtering approach that combines patches from several frames, not necessarily belonging to the same pixel trajectory. Candidate patches are selected according to a motion-compensated 3D distance, which is robust to noise and aliasing. The selected 3D volumes are then sliced per frame, providing a collection of 2D patches that are finally averaged with weights given by their similarity to the reference patch. This makes the upsampling strategy robust to flow inaccuracies and occlusions. Total variation and nonlocal regularization are used in the deconvolution stage. Experimental results demonstrate the state-of-the-art performance of the proposed method for the super-resolution of videos and light-field images. We also adapt the approach to multimodal sequences in which additional data at the target resolution is available.
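To make the filtering stage described above more concrete, the sketch below shows, in plain NumPy, one way a motion-compensated spatio-temporal patch fusion of this kind could be organized. It is an illustration under simplifying assumptions, not the authors' implementation: the paper's 3D volume distance is reduced to per-frame 2D patch comparisons, the deconvolution stage with total variation and nonlocal regularization is omitted, and all names and parameters (patch, search, h, the integer-valued flow field) are hypothetical choices made for the example.

```python
import numpy as np

def fuse_reference_frame(frames, flows, ref_idx, patch=5, search=2, h=10.0):
    """Toy motion-compensated spatio-temporal patch fusion (illustration only).

    frames : (T, H, W) float array, a grayscale video stack
    flows  : (T, H, W, 2) displacements (dx, dy) mapping each pixel of the
             reference frame into frame t; flows[ref_idx] should be zero.
             Assumed to be precomputed with any optical-flow method.
    Returns a filtered version of frames[ref_idx], in which every pixel is a
    weighted average of pixels gathered along its motion-compensated
    trajectory and inside a small spatial search window, with weights
    driven by patch similarity to the reference patch.
    """
    T, H, W = frames.shape
    r = patch // 2
    padded = np.pad(frames, ((0, 0), (r, r), (r, r)), mode="reflect")

    def get_patch(t, y, x):
        # (y, x) are coordinates in the original image; padding shifts by r,
        # so the patch centered at (y, x) is padded[t, y:y+patch, x:x+patch]
        return padded[t, y:y + patch, x:x + patch]

    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ref_patch = get_patch(ref_idx, y, x)
            acc, wacc = 0.0, 0.0
            for t in range(T):
                # follow the estimated motion of (y, x) into frame t
                cy = int(round(y + flows[t, y, x, 1]))
                cx = int(round(x + flows[t, y, x, 0]))
                # a small search window around the compensated position makes
                # the fusion robust to flow inaccuracies and occlusions
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy = min(max(cy + dy, 0), H - 1)
                        xx = min(max(cx + dx, 0), W - 1)
                        cand = get_patch(t, yy, xx)
                        d2 = np.mean((cand - ref_patch) ** 2)
                        w = np.exp(-d2 / (h * h))
                        acc += w * frames[t, yy, xx]
                        wacc += w
            out[y, x] = acc / max(wacc, 1e-12)
    return out

# Example: fuse 5 synthetic frames around the middle one (index 2)
frames = np.random.rand(5, 32, 32)
flows = np.zeros((5, 32, 32, 2))
result = fuse_reference_frame(frames, flows, ref_idx=2)
```

In this sketch a precomputed optical flow plays the role of the inter-frame motion, and the spatial search window around each motion-compensated position is what provides the robustness to flow inaccuracies and occlusions mentioned in the abstract.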


Notes

  1. The code for all depth super-resolution methods is available at http://github.com/qinhongwei/depth-enhancement.


Author information

Corresponding author

Correspondence to J. Duran.

Additional information

Communicated by Chen Change Loy.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors were supported by Grants TIN2014-53772-R and TIN2017-85572-P (MINECO/AEI/FEDER, UE).


About this article


Cite this article

Buades, A., Duran, J. & Navarro, J. Motion-Compensated Spatio-Temporal Filtering for Multi-Image and Multimodal Super-Resolution. Int J Comput Vis 127, 1474–1500 (2019). https://doi.org/10.1007/s11263-019-01200-5

