Neural network complexity of chaos and turbulence

  • Regular Article - Flowing Matter
  • The European Physical Journal E

Abstract

Chaos and turbulence are complex physical phenomena, yet a precise definition of the complexity measure that quantifies them is still lacking. In this work, we consider the relative complexity of chaos and turbulence from the perspective of deep neural networks. We analyze a set of classification problems, where the network has to distinguish images of fluid profiles in the turbulent regime from other classes of images, such as fluid profiles in the chaotic regime, various constructions of noise and real-world images. We analyze incompressible as well as weakly compressible fluid flows. We quantify the complexity of the computation performed by the network via the intrinsic dimensionality of the internal feature representations and calculate the effective number of independent features which the network uses in order to distinguish between classes. In addition to providing a numerical estimate of the complexity of the computation, the measure also characterizes the neural network processing at intermediate and final stages. We construct adversarial examples and use them to identify the two-point correlation spectra of the chaotic and turbulent vorticity as the feature used by the network for classification.
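
As an illustration of the two-point statistics referred to above, the sketch below computes a shell-averaged power spectrum of a single 2D snapshot (e.g. a vorticity field). This is not the authors' code; the square periodic grid, the normalization and the unit-width shell binning are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): shell-averaged power spectrum of a
# 2D scalar field such as a vorticity snapshot.
import numpy as np

def isotropic_spectrum(field: np.ndarray):
    """Return integer wavenumber shells k and the power summed over each shell."""
    n = field.shape[0]                      # assume a square n x n periodic grid
    fk = np.fft.fft2(field) / n**2
    power = np.abs(fk) ** 2
    kx = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers
    ky = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    shells = np.arange(1, n // 2)
    spectrum = np.array([power[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum()
                         for k in shells])
    return shells, spectrum

# Usage: k, E = isotropic_spectrum(vorticity_snapshot)
```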

Data availability

The data that support the findings of this study are available from the corresponding author, T.W., upon reasonable request.

Notes

  1. \(\xi _3=1\) is an exact result which, together with only a few other cases, can be derived analytically [3].

  2. This friction term would be responsible for removing energy at large scales, making the inverse energy cascade stationary [38].

  3. \(\text {ReLU}(x)=x\) if \(x\ge 0\) and \(\text {ReLU}(x)=0\) if \(x<0\).

  4. Here we omit mentioning batch normalization layers and residual/skip connections, which are not essential for our discussion.

  5. For simplicity, we consider here a classification task with one positive and one negative class.

  6. The final representation should be such that the two classes are linearly separable.

  7. It is standard to separate the dataset into a disjoint training set and a test set, and to use the latter for determining accuracy and performing the various analyses.

  8. More precisely, after each residual block of the ResNet-56 network, each block comprising two convolutional layers.

  9. Or a cut-off of 20 epochs.

  10. Downloaded from https://www.microsoft.com/en-us/download/details.aspx?id=54765.

  11. The target accuracy of \(99\%\) used for the remaining experiments in this paper could not be reached in this case, hence we chose a fixed number of epochs here.

  12. Turbulent images with two large vortices are an exception. This feature occurs, however, only in a subset of turbulent images and hence cannot be used as a key reason for classifying an image as turbulence.

  13. These features are evaluated in the same way at each point of the appropriately coarse-grained image [see (5)].

  14. See [56, 57] for some concrete analogies.

References

  1. U. Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, Cambridge, 1995). https://doi.org/10.1017/CBO9781139170666

  2. A.N. Kolmogorov, The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Comptes rendus Acad. Sci. URSS 30, 301 (1941)

  3. G. Falkovich, I. Fouxon, Y. Oz, New relations for correlation functions in Navier–Stokes turbulence. J. Fluid Mech. 644, 465 (2010). https://doi.org/10.1017/S0022112009993429. arXiv:0909.3404

  4. R. Benzi, S. Ciliberto, C. Baudet, G.R. Chavarria, On the scaling of three-dimensional homogeneous and isotropic turbulence. Phys. D Nonlinear Phenom. 80, 385 (1995)

  5. S.Y. Chen, B. Dhruva, S. Kurien, K.R. Sreenivasan, M.A. Taylor, Anomalous scaling of low-order structure functions of turbulent velocity. J. Fluid Mech. 533, 183–192 (2005). https://doi.org/10.1017/S002211200500443X

  6. L. Biferale, F. Bonaccorso, M. Buzzicotti, K.P. Iyer, Self-similar subgrid-scale models for inertial range turbulence and accurate measurements of intermittency. Phys. Rev. Lett. (2019). https://doi.org/10.1103/physrevlett.123.014503

  7. Z.-S. She, E. Leveque, Universal scaling laws in fully developed turbulence. Phys. Rev. Lett. 72, 336 (1994). https://doi.org/10.1103/PhysRevLett.72.336

  8. V. Yakhot, Mean-field approximation and a small parameter in turbulence theory. Phys. Rev. E 63, 026307 (2001). https://doi.org/10.1103/PhysRevE.63.026307

  9. C. Eling, Y. Oz, The anomalous scaling exponents of turbulence in general dimension from random geometry. J. High Energy Phys. 2015, 1 (2015)

  10. Y. Oz, Spontaneous symmetry breaking, conformal anomaly and incompressible fluid turbulence. JHEP 11, 040 (2017). https://doi.org/10.1007/JHEP11(2017)040. arXiv:1707.07855

  11. Y. Jiang, B. Neyshabur, H. Mobahi, D. Krishnan, S. Bengio, Fantastic generalization measures and where to find them (2019). https://doi.org/10.48550/ARXIV.1912.02178

  12. M. Shavit, G. Falkovich, Singular measures and information capacity of turbulent cascades. Phys. Rev. Lett. (2020). https://doi.org/10.1103/physrevlett.125.104501

  13. S. Pandey, J. Schumacher, K.R. Sreenivasan, A perspective on machine learning in turbulent flows. J. Turbul. 21, 567 (2020). https://doi.org/10.1080/14685248.2020.1757685

  14. A.A. Moghaddam, A. Sadaghiyani, A deep learning framework for turbulence modeling using data assimilation and feature extraction (2018)

  15. B. Li, Z. Yang, X. Zhang, G. He, B.-Q. Deng, L. Shen, Using machine learning to detect the turbulent region in flow past a circular cylinder. J. Fluid Mech. 905, A10 (2020). https://doi.org/10.1017/jfm.2020.725

  16. M. Buzzicotti, F. Bonaccorso, Inferring turbulent environments via machine learning. Eur. Phys. J. E 45, 102 (2022). https://doi.org/10.1140/epje/s10189-022-00258-3

  17. P. Clark Di Leoni, A. Mazzino, L. Biferale, Inferring flow parameters and turbulent configuration with physics-informed data assimilation and spectral nudging. Phys. Rev. Fluids 3, 104604 (2018). https://doi.org/10.1103/PhysRevFluids.3.104604

  18. M. Lellep, J. Prexl, B. Eckhardt, M. Linkmann, Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows. J. Fluid Mech. 942, A2 (2022). https://doi.org/10.1017/jfm.2022.307

  19. C. Drygala, B. Winhart, F. di Mare, H. Gottschalk, Generative modeling of turbulence. Phys. Fluids 34, 035114 (2022). https://doi.org/10.1063/5.0082562

  20. D. Tretiak, A.T. Mohan, D. Livescu, Physics-constrained generative adversarial networks for 3D turbulence (2022)

  21. G. Yang, S. Sommer, A denoising diffusion model for fluid field prediction (2023)

  22. D. Shu, Z. Li, A.B. Farimani, A physics-informed diffusion model for high-fidelity flow field reconstruction. J. Comput. Phys. 478, 111972 (2023). https://doi.org/10.1016/j.jcp.2023.111972

  23. A. Mohan, D. Daniel, M. Chertkov, D. Livescu, Compressed convolutional LSTM: an efficient deep learning framework to model high fidelity 3D turbulence (2019)

  24. R. King, O. Hennigh, A. Mohan, M. Chertkov, From deep to physics-informed learning of turbulence: diagnostics (2018)

  25. K. Fukami, K. Fukagata, K. Taira, Super-resolution reconstruction of turbulent flows with machine learning. J. Fluid Mech. 870, 106–120 (2019). https://doi.org/10.1017/jfm.2019.238

  26. H. Kim, J. Kim, S. Won, C. Lee, Unsupervised deep learning for super-resolution reconstruction of turbulence. J. Fluid Mech. 910, A29 (2021). https://doi.org/10.1017/jfm.2020.1028

  27. P. Clark Di Leoni, K. Agarwal, T.A. Zaki, C. Meneveau, J. Katz, Reconstructing turbulent velocity and pressure fields from under-resolved noisy particle tracks using physics-informed neural networks. Exp. Fluids 64, 95 (2023). https://doi.org/10.1007/s00348-023-03629-4

  28. M. Buzzicotti, F. Bonaccorso, P.C. Di Leoni, L. Biferale, Reconstruction of turbulent data with deep generative models for semantic inpainting from turb-rot database. Phys. Rev. Fluids 6, 050503 (2021). https://doi.org/10.1103/PhysRevFluids.6.050503

  29. Z. Li, N.B. Kovachki, K. Azizzadenesheli, K. Bhattacharya, A. Stuart, A. Anandkumar et al., Fourier neural operator for parametric partial differential equations, in International Conference on Learning Representations (2020)

  30. M. Rotman, A. Dekel, R.I. Ber, L. Wolf, Y. Oz, Semi-supervised learning of partial differential operators and dynamical flows. Uncertain. Artif. Intell. (2023). https://doi.org/10.48550/ARXIV.2207.14366

  31. A. Beck, M. Kurz, A perspective on machine learning methods in turbulence modeling. GAMM-Mitteilungen 44, e202100002 (2021). https://doi.org/10.1002/gamm.202100002

  32. K. Duraisamy, G. Iaccarino, H. Xiao, Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 51, 357 (2019). https://doi.org/10.1146/annurev-fluid-010518-040547

  33. F. Sofos, C. Stavrogiannis, K.K. Exarchou-Kouveli, D. Akabua, G. Charilas, T.E. Karakasidis, Current trends in fluid research in the era of artificial intelligence: a review. Fluids (2022). https://doi.org/10.3390/fluids7030116

  34. S.L. Brunton, B.R. Noack, P. Koumoutsakos, Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, 477 (2020). https://doi.org/10.1146/annurev-fluid-010719-060214

  35. D. Panchigar, K. Kar, S. Shukla, R.M. Mathew, U. Chadha, S.K. Selvaraj, Machine learning-based CFD simulations: a review, models, open threats, and future tactics. Neural Comput. Appl. 34, 21677 (2022). https://doi.org/10.1007/s00521-022-07838-6

  36. R.A. Janik, P. Witaszczyk, Complexity for deep neural networks and other characteristics of deep feature representations. https://doi.org/10.48550/ARXIV.2006.04791

  37. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition. CoRR arXiv:1512.03385 (2015)

  38. G. Boffetta, R.E. Ecke, Two-dimensional turbulence. Annu. Rev. Fluid Mech. 44, 427 (2012). https://doi.org/10.1146/annurev-fluid-120710-101240

  39. C. Canuto, M. Hussaini, A. Quarteroni, T. Zang, Spectral Methods: Evolution to Complex Geometries and Applications to Fluid Dynamics

  40. L. Puggioni, A.G. Kritsuk, S. Musacchio, G. Boffetta, Conformal invariance of weakly compressible two-dimensional turbulence. Phys. Rev. E 102, 023107 (2020). https://doi.org/10.1103/PhysRevE.102.023107

  41. J.C. McWilliams, The emergence of isolated coherent vortices in turbulent flow. J. Fluid Mech. 146, 21–43 (1984). https://doi.org/10.1017/S0022112084001750

  42. M. Chertkov, C. Connaughton, I. Kolokolov, V. Lebedev, Dynamics of energy condensation in two-dimensional turbulence. Phys. Rev. Lett. 99, 084501 (2007). https://doi.org/10.1103/PhysRevLett.99.084501

  43. R.H. Kraichnan, Inertial ranges in two-dimensional turbulence. Phys. Fluids 10, 1417 (1967). https://doi.org/10.1063/1.1762301

  44. G.K. Batchelor, Computation of the energy spectrum in homogeneous two-dimensional turbulence. Phys. Fluids 12, II (1969). https://doi.org/10.1063/1.1692443

  45. M.A. Rutgers, Forced 2D turbulence: experimental evidence of simultaneous inverse energy and forward enstrophy cascades. Phys. Rev. Lett. 81, 2244 (1998). https://doi.org/10.1103/PhysRevLett.81.2244

  46. G. Boffetta, S. Musacchio, Evidence for the double cascade scenario in two-dimensional turbulence. Phys. Rev. E 82, 016307 (2010). https://doi.org/10.1103/PhysRevE.82.016307

  47. J.R. Westernacher-Schneider, L. Lehner, Y. Oz, Scaling relations in two-dimensional relativistic hydrodynamic turbulence. J. High Energy Phys. 2015, 1 (2015). https://doi.org/10.1007/JHEP12(2015)067

  48. J.R. Westernacher-Schneider, L. Lehner, Numerical measurements of scaling relations in two-dimensional conformal fluid turbulence. J. High Energy Phys. 2017, 27 (2017). https://doi.org/10.1007/JHEP08(2017)027

  49. J.D. Hunter, Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9, 90 (2007). https://doi.org/10.1109/MCSE.2007.55

  50. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma et al., ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. IJCV 115, 211 (2015). https://doi.org/10.1007/s11263-015-0816-y

  51. F. Doshi-Velez, B. Kim, Towards a rigorous science of interpretable machine learning (2017). https://doi.org/10.48550/ARXIV.1702.08608

  52. M.T. Ribeiro, S. Singh, C. Guestrin, “Why should I trust you?”: explaining the predictions of any classifier, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)

  53. S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, W. Samek, On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140 (2015). https://doi.org/10.1371/journal.pone.0130140

  54. I.J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples (2014). https://doi.org/10.48550/ARXIV.1412.6572

  55. O. Aharony, S.S. Gubser, J.M. Maldacena, H. Ooguri, Y. Oz, Large N field theories, string theory and gravity. Phys. Rep. 323, 183 (2000). https://doi.org/10.1016/S0370-1573(99)00083-6. arXiv:hep-th/9905111

  56. K. Hashimoto, S. Sugishita, A. Tanaka, A. Tomiya, Deep learning and holographic QCD. Phys. Rev. D (2018). https://doi.org/10.1103/physrevd.98.106014

  57. K. Hashimoto, AdS/CFT correspondence as a deep Boltzmann machine. Phys. Rev. D (2019). https://doi.org/10.1103/physrevd.99.106017

Acknowledgements

We would like to thank J.R. Westernacher-Schneider for discussions about numerically evolving weakly compressible flows. RJ was supported by the research project Bio-inspired artificial neural networks (Grant No. POIR.04.04.00-00-14DE/18-00) within the Team-Net program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund and by a grant from the Priority Research Area DigiWorld under the Strategic Programme Excellence Initiative at Jagiellonian University. The work of Y.O. is supported in part by Israel Science Foundation Center of Excellence.

Author information

Contributions

TW and RJ carried out the numerical experiments and analyzed the results. All authors worked on developing the main idea, discussed the results and contributed to the final manuscript.

Corresponding author

Correspondence to Tim Whittaker.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Appendix A: Velocity images

In the previous sections, we calculated the effective dimensions using images of the vorticity pseudoscalar. It is natural to ask whether the same conclusions can be drawn if we use the velocity vector field instead. Here we present results for the effective dimension and adversarial examples using images of the \(v_x\) and \(v_y\) components of the velocity instead of the vorticity. In the incompressible case, the vorticity contains all the information of the velocity field, but in the compressible case, the vorticity misses the irrotational part. In addition, the incompressible case possesses statistical isotropy, which is broken in the compressible case. We generate new Fourier noise using the statistics of the velocity images.
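
A minimal sketch of one way such spectrum-matched Fourier noise could be produced is given below: the amplitude spectrum of a reference velocity image is retained while the Fourier phases are randomized, so that two-point statistics are approximately preserved and higher-order structure is destroyed. The independent uniform phases, the real-part projection and the rescaling to the original mean and standard deviation are assumptions of this illustration, not necessarily the exact recipe used for the datasets in this paper.

```python
# Hypothetical construction of "Fourier noise" matched to a velocity snapshot.
import numpy as np

def fourier_noise_like(image, rng=None):
    """image: 2D array, e.g. a v_x snapshot on the simulation grid."""
    rng = np.random.default_rng() if rng is None else rng
    amplitude = np.abs(np.fft.fft2(image))            # keep the amplitude spectrum
    phases = rng.uniform(0.0, 2.0 * np.pi, size=image.shape)
    # Taking the real part is a common simplification; a fully faithful surrogate
    # would impose Hermitian symmetry on the randomized phases.
    noise = np.fft.ifft2(amplitude * np.exp(1j * phases)).real
    noise = (noise - noise.mean()) / noise.std()      # rescale to the original statistics
    return noise * image.std() + image.mean()

# Usage: surrogate = fourier_noise_like(vx_snapshot)
```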

Table 4 The best case of the compressible velocity classifier
Table 5 The best case of the incompressible velocity classifier

A.1 Effective dimensions

In Fig. 11, we show the pattern of effective dimensions obtained when distinguishing turbulent velocity images from chaos, as well as from the two types of noise discussed in Sect. 2.3, for both the incompressible and the weakly compressible case. The effective dimensions are similar to the ones obtained from the vorticity images, except that the weakly compressible chaotic flow has an overall larger effective dimension. We see that while in the incompressible case the effective dimensions obtained using \(v_x\) or \(v_y\) are similar, this is no longer true in the compressible case, which can be attributed to the broken statistical isotropy of the compressible fluid motion (Fig. 12).
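
For reference, the sketch below shows one standard way to turn a collection of internal feature representations into an effective dimension: the number of principal components needed to explain a fixed fraction of the activation variance. The variance threshold and the layer at which activations are probed are illustrative assumptions; for the precise definition used in this work, see the main text and [36].

```python
# Illustrative PCA-based effective dimension of a layer's feature representation.
import numpy as np

def effective_dimension(features, variance_fraction=0.9):
    """features: array of shape (n_samples, n_features), e.g. flattened
    activations collected after a residual block of the network."""
    centered = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)      # PCA spectrum via SVD
    explained = np.cumsum(s**2) / np.sum(s**2)         # cumulative explained variance
    # Smallest number of components reaching the chosen variance fraction.
    return int(np.searchsorted(explained, variance_fraction) + 1)

# Usage: d_eff = effective_dimension(layer_activations)  # shape (n_images, n_features)
```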

A.2 Adversarial examples

As for the adversarial examples constructed from the velocity data, we recover the same results as with the vorticity data. In Table 4, we show the results of the adversarial examples for the weakly compressible case; Table 5 contains the results for the incompressible case.
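
Adversarial examples of this type are commonly generated with gradient-based attacks; the sketch below implements the fast gradient sign method of [54] as one such construction. The perturbation size eps, the model interface and the preprocessing are assumptions for the illustration and need not coincide with the settings used to produce Tables 4 and 5.

```python
# FGSM-style adversarial perturbation (sketch); `model` is assumed to return class logits.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.01):
    """x: batch of images (N, C, H, W); label: class indices (N,)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step each pixel by +/- eps in the direction that increases the loss,
    # nudging the classifier towards a wrong decision while the image
    # remains visually almost unchanged.
    return (x + eps * x.grad.sign()).detach()
```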

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Whittaker, T., Janik, R.A. & Oz, Y. Neural network complexity of chaos and turbulence. Eur. Phys. J. E 46, 57 (2023). https://doi.org/10.1140/epje/s10189-023-00321-7
