
Triangle Attack: A Query-Efficient Decision-Based Adversarial Attack

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13665))

Abstract

Decision-based attacks pose a severe threat to real-world applications since they treat the target model as a black box and access only the hard prediction label. Great efforts have been made recently to decrease the number of queries; however, existing decision-based attacks still require thousands of queries to generate adversarial examples of good quality. In this work, we observe that for any iterative attack, a benign sample, the current adversarial example, and the next adversarial example naturally form a triangle in a subspace. Based on the law of sines, we propose a novel Triangle Attack (TA) that optimizes the perturbation by exploiting the geometric fact that in any triangle the longer side is opposite the larger angle. However, directly applying this insight in the input space is ineffective, because it cannot thoroughly explore the neighborhood of the input sample in the high-dimensional space. To address this issue, TA optimizes the perturbation in the low-frequency space, achieving effective dimensionality reduction owing to the generality of this geometric property. Extensive evaluations on the ImageNet dataset show that TA achieves a much higher attack success rate within 1,000 queries and requires far fewer queries to reach the same attack success rate under various perturbation budgets than existing decision-based attacks. Given this high efficiency, we further validate the applicability of TA on a real-world API, i.e., the Tencent Cloud API.
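The law-of-sines step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: the function name `ta_candidate` and its parameters are our own, and for brevity the random search plane is sampled in the raw input space, whereas the actual TA samples it in the low-frequency (DCT) subspace.

```python
import numpy as np

def ta_candidate(x, x_adv, alpha, beta, rng):
    """One Triangle-Attack-style geometric step (simplified sketch).

    The benign sample x, the current adversarial example x_adv, and the
    returned candidate form a triangle with angle alpha at x and angle
    beta at x_adv.  By the law of sines, the candidate's distance to x is
        d * sin(beta) / sin(alpha + beta),  where d = ||x_adv - x||,
    so choosing angles with sin(beta) < sin(alpha + beta) yields a
    candidate closer to x, i.e., a smaller perturbation.
    """
    diff = x_adv - x
    d = np.linalg.norm(diff)
    u = diff / d                        # unit vector from x toward x_adv
    # A random direction orthogonal to u spans the 2-D search plane.
    v = rng.standard_normal(x.shape)
    v -= v.dot(u) * u                   # Gram-Schmidt orthogonalization
    v /= np.linalg.norm(v)
    m = d * np.sin(beta) / np.sin(alpha + beta)   # law of sines
    # Candidate lies in the plane spanned by u and v, at angle alpha from u.
    return x + m * (np.cos(alpha) * u + np.sin(alpha) * v)
```

In an actual attack loop, one would query the model's hard label at the candidate, keep it if it remains adversarial (since it is strictly closer to the benign sample), and otherwise adjust the angles before retrying.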


Notes

  1. https://cloud.tencent.com/.


Acknowledgement

This work is supported by the National Natural Science Foundation of China (62076105), the International Cooperation Foundation of Hubei Province, China (2021EHB011), and the Tencent Rhino-Bird Elite Talent Training Program.

Author information


Correspondence to Kun He, Zhifeng Li or Wei Liu.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 175 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, X. et al. (2022). Triangle Attack: A Query-Efficient Decision-Based Adversarial Attack. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13665. Springer, Cham. https://doi.org/10.1007/978-3-031-20065-6_10


  • DOI: https://doi.org/10.1007/978-3-031-20065-6_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20064-9

  • Online ISBN: 978-3-031-20065-6

  • eBook Packages: Computer Science, Computer Science (R0)
