Research Article | Open Access

Neural Field Convolutions by Repeated Differentiation

Published: 05 December 2023

Abstract

Neural fields are evolving towards a general-purpose continuous representation for visual computing. Yet, despite their numerous appealing properties, they are hardly amenable to signal processing. As a remedy, we present a method to perform general continuous convolutions with general continuous signals such as neural fields. Observing that piecewise polynomial kernels reduce to a sparse set of Dirac deltas after repeated differentiation, we leverage convolution identities and train a repeated integral field to efficiently execute large-scale convolutions. We demonstrate our approach on a variety of data modalities and spatially-varying kernels.
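To make the mechanism described in the abstract concrete, below is a minimal 1D sketch. It is not the paper's implementation: the signal is discretized and its antiderivative is a plain NumPy cumulative sum, standing in for the repeated integral field the authors train, and the kernel is a simple box (a degree-0 piecewise polynomial), whose derivative is just two Dirac deltas.

```python
# Minimal 1D sketch of the convolution identity the abstract relies on:
#     f * g = (I f) * (D g),
# where I denotes integration and D differentiation. For a box kernel
# (a piecewise polynomial of degree 0), D g is a pair of Dirac deltas,
# so the full convolution collapses to two lookups into the first
# antiderivative F of the signal:
#     (f * g)(x) = [F(x + r) - F(x - r)] / (2 r).
# This sketch discretizes the signal and uses a cumulative sum; the
# paper instead queries a neural field trained to represent F.
import numpy as np

dx = 1e-4
x = np.arange(0.0, 1.0, dx)
f = np.sin(8 * np.pi * x)               # stand-in for a continuous signal

# First antiderivative F(x) ~= integral of f from 0 to x.
F = np.cumsum(f) * dx

def box_filter_sparse(x0, r):
    """Box filter of half-width r at x0, using only two sparse taps of F."""
    F_at = lambda t: np.interp(np.clip(t, x[0], x[-1]), x, F)
    return (F_at(x0 + r) - F_at(x0 - r)) / (2 * r)

# Reference: brute-force Riemann sum of the convolution integral.
x0, r = 0.4, 0.05
mask = np.abs(x - x0) <= r
brute = f[mask].sum() * dx / (2 * r)

# The two results agree up to grid discretization error.
print(f"sparse taps: {box_filter_sparse(x0, r):.4f}   brute force: {brute:.4f}")
```

Higher-degree piecewise polynomial kernels (e.g., B-splines) follow the same pattern: after enough differentiations they reduce to a sparse set of Dirac deltas, and the matching number of repeated integrals of the signal is what the authors represent with a neural field.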




Published in

ACM Transactions on Graphics, Volume 42, Issue 6 (December 2023), 1565 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3632123

        Copyright © 2023 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


