Research article · DOI: 10.1145/3375627.3375815

The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?

Published: 7 February 2020

ABSTRACT

There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether published research will be more useful for attackers or defenders, such as the possibility of adequate defensive measures or the independent discovery of the knowledge outside the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.


Published in

AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
February 2020, 439 pages
ISBN: 9781450371100
DOI: 10.1145/3375627

Copyright © 2020 ACM

                    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 61 of 162 submissions, 38%
