
Programmable access-controlled and generic erasable PUF design and its applications

Regular Paper, published in the Journal of Cryptographic Engineering

Abstract

Physical unclonable functions (PUFs) have been suggested not only as a new key storage mechanism, but also, in the form of so-called strong PUFs, as cryptographic primitives in advanced schemes, including key exchange, oblivious transfer, and secure multi-party computation. This notably extends their application spectrum, and has led to a sequence of publications at leading venues such as IEEE S&P, CRYPTO, and EUROCRYPT in the past. However, one important unresolved problem is that adversaries can break the security of all these advanced protocols if they gain physical access to the employed strong PUFs after protocol completion. It has been formally proven that this issue cannot be overcome by techniques on the protocol side alone, but requires resolution on the hardware level; the only fully effective known countermeasure is the so-called erasable PUF. Building on this work, this paper is the first to describe a generic method for turning any given silicon strong PUF with a digital CRP interface into an erasable PUF. We describe how the strong PUF can be surrounded with a trusted control logic that allows the blocking (or “erasure”) of single CRPs. We implement our approach, which we call “GeniePUF,” on FPGA, reporting detailed performance data and practicality figures. Furthermore, we develop the first comprehensive definitional framework for erasable PUFs. Our work thus re-establishes the effective usability of strong PUFs in advanced cryptographic applications, even in the realistic case that adversaries get access to the strong PUF after protocol completion. As an extension of earlier versions of this work, we also introduce a generalization of erasable PUFs, which we call programmable access-controlled PUFs (PAC PUFs), detail their definition, and discuss various exemplary applications.


Notes

  1. Weak PUFs [4] are not suited for application as cryptographic primitives in advanced protocols in the above sense: this scenario inevitably requires a large, inexhaustible CRP space with many possible challenges, numerically unpredictable responses, and a publicly accessible CRP interface of the PUF, through which every protocol participant, and also adversaries, can apply challenges and read out responses freely [14, 15, 19]; in one term, a strong PUF [4].

  2. We would like to mention that this article is a journal version of an earlier publication at the ASHES workshop [30]. Together with several smaller adaptations, the concept of a programmable access-controlled PUF has been added to this work; Sects. 6 and 7 are completely new.

  3. We assume that the physical handover procedures in Step 1 and Step 3, as well as the choice and presentation of \(c^j\) in Step 4, are carried out in negligible time compared to the rest of the security game, i.e., we model them as taking zero time and causing no additional delays.

  4. Note that \(\mathcal {A}\) may have potentially physically altered or even destroyed P.

  5. As a self-balancing binary search tree, an RBT adjusts (rotates) its tree structure to restore balance whenever it becomes unbalanced. A detailed description of the rotations can be found in [46], and examples in Appendix B.

  6. At the time of our implementation work, this iPUF size was considered secure; we remark that this no longer holds, due to recent advances in iPUF modeling attacks [50, 54]. However, this does not affect our evaluation results, as we mainly evaluate the interface design, not the underlying PUF. Since our GeniePUF technique is generic, it could of course also be implemented with larger iPUF sizes that remain secure, with PUFs whose security can be reduced to computational hardness assumptions [55, 56], or with alternative future secure implementations of strong PUFs.

  7. Count-limited access PUFs alone do not solve the reliability-based attacks on XOR PUFs, due to the existence of correlated CRPs in XOR PUFs.

References

  1. Lofstrom, K., Daasch, W.R., Taylor, D.: IC identification circuit using device mismatch. In: 2000 IEEE International Solid-State Circuits Conference. Digest of Technical Papers (Cat. No. 00CH37056) (IEEE), pp. 372–373 (2000)

  2. Gassend, B., Clarke, D., van Dijk, M., Devadas, S.: Silicon physical random functions. In: Proceedings of the 9th ACM Conference on Computer and Communications Security (ACM), pp. 148–160 (2002)

  3. Pappu, R., Recht, B., Taylor, J., Gershenfeld, N.: Physical one-way functions. Science 297(5589), 2026–2030 (2002)

  4. Rührmair, U., Holcomb, D.E.: PUFs at a glance. In: Proceedings of the Conference on Design, Automation and Test in Europe (European Design and Automation Association), p. 347 (2014)

  5. Holcomb, D.E., Burleson, W.P., Fu, K.: Initial SRAM state as a fingerprint and source of true random numbers for RFID tags. In: Proceedings of the Conference on RFID Security, vol. 7 (2007)

  6. Jaeger, C., Algasinger, M., Rührmair, U., Csaba, G., Stutzmann, M.: Random pn-junctions for physical cryptography. Appl. Phys. Lett. 96(17), 172103 (2010)


  7. Xiong, W., Schaller, A., Anagnostopoulos, N.A., Saleem, M.U., Gabmeyer, S., Katzenbeisser, S., Szefer, J.: Run-time accessible DRAM PUFs in commodity devices. In: International Conference on Cryptographic Hardware and Embedded Systems (Springer), pp. 432–453 (2016)

  8. Kumar, S.S., Guajardo, J., Maes, R., Schrijen, G.J., Tuyls, P.: The butterfly PUF protecting IP on every FPGA. In: 2008 IEEE International Workshop on Hardware-Oriented Security and Trust (IEEE), pp. 67–70 (2008)

  9. Holcomb, D.E., Burleson, W.P., Fu, K.: Power-up SRAM state as an identifying fingerprint and source of true random numbers. IEEE Trans. Comput. 58(9), 1198–1210 (2009)


  10. Simons, P., van der Sluis, E., van der Leest, V.: Buskeeper PUFs, a promising alternative to D flip-flop PUFs. In: 2012 IEEE International Symposium on Hardware-Oriented Security and Trust (IEEE), pp. 7–12 (2012)

  11. Maes, R., Van Herrewege, A., Verbauwhede, I.: PUFKY: a fully functional PUF-based cryptographic key generator. In: International Workshop on Cryptographic Hardware and Embedded Systems (Springer), pp. 302–319 (2012)

  12. Maes, R., van der Leest, V., van der Sluis, E., Willems, F.: Secure key generation from biased PUFs. In: International Workshop on Cryptographic Hardware and Embedded Systems (Springer), pp. 517–534 (2015)

  13. Suh, G.E., Devadas, S.: Physical unclonable functions for device authentication and secret key generation. In: Proceedings of the 44th Annual Design Automation Conference (ACM), pp. 9–14 (2007)

  14. Brzuska, C., Fischlin, M., Schröder, H., Katzenbeisser, S.: Physically uncloneable functions in the universal composition framework. In: Advances in Cryptology CRYPTO 2011 (Springer), pp. 51–70 (2011)

  15. Ostrovsky, R., Scafuro, A., Visconti, I., Wadia, A.: Universally composable secure computation with (malicious) physically uncloneable functions. In: Advances in Cryptology–EUROCRYPT 2013 (Springer), pp. 702–718 (2013)

  16. Damgård, I., Scafuro, A.: Unconditionally secure and universally composable commitments from physical assumptions. In: International Conference on the Theory and Application of Cryptology and Information Security (Springer), pp. 100–119 (2013)

  17. Dachman-Soled, D., Fleischhacker, N., Katz, J., Lysyanskaya, A., Schröder, D.: Feasibility and infeasibility of secure computation with malicious PUFs. In: Advances in Cryptology CRYPTO 2014 (Springer), pp. 405–420 (2014)

  18. Badrinarayanan, S., Khurana, D., Ostrovsky, R., Visconti, I.: Unconditional UC-secure computation with (stronger-malicious) PUFs. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques (Springer), pp. 382–411 (2017)

  19. Rührmair, U.: Oblivious transfer based on physical unclonable functions. In: Trust and Trustworthy Computing (Springer), pp. 430–440 (2010)

  20. Fischlin, M., Mazaheri, S.: Self-guarding cryptographic protocols against algorithm substitution attacks. In: 2018 IEEE 31st Computer Security Foundations Symposium (CSF) (IEEE), pp. 76–90 (2018)

  21. Chen, L., Chen, L., Jordan, S., Liu, Y.K., Moody, D., Peralta, R., Perlner, R., Smith-Tone, D.: Report on post-quantum cryptography. US Department of Commerce, National Institute of Standards and Technology (2016)


  22. Perlman, R.J., Hanna, S.R.: Methods and systems for establishing a shared secret using an authentication token. US Patent 6,173,400 (2001)

  23. Rührmair, U., van Dijk, M.: PUFs in security protocols: attack models and security evaluations. In: 2013 IEEE Symposium on Security and Privacy (IEEE), pp. 286–300 (2013)

  24. van Dijk, M., Rührmair, U.: Physical unclonable functions in cryptographic protocols: security proofs and impossibility results. IACR Cryptol. ePrint Archive 2012, 228 (2012)


  25. Rührmair, U., Jaeger, C., Algasinger, M.: An attack on PUF-based session key exchange and a hardware-based countermeasure: Erasable PUFs. In: Financial Cryptography and Data Security (Springer), pp. 190–204 (2011)

  26. Katzenbeisser, S., Kocabaş, Ü., van der Leest, V., Sadeghi, A.R., Schrijen, G.J., Wachsmann, C.: Recyclable PUFs: logically reconfigurable PUFs. J. Cryptographic Eng. 1(3), 177–186 (2011)

  27. Zhang, L., Kong, Z.H., Chang, C.H., Cabrini, A., Torelli, G.: Exploiting process variations and programming sensitivity of phase change memory for reconfigurable physical unclonable functions. IEEE Trans. Inf. Forensics Secur. 9(6), 921–932 (2014)


  28. Kursawe, K., Sadeghi, A.R., Schellekens, D., Skoric, B., Tuyls, P.: Reconfigurable physical unclonable functions-enabling technology for tamper-resistant storage, In: Hardware-Oriented Security and Trust, 2009. HOST’09. IEEE International Workshop on (IEEE), pp. 22–29 (2009)

  29. Eichhorn, I., Koeberl, P., van der Leest, V.: Logically reconfigurable PUFs: memory-based secure key storage. In: Proceedings of the Sixth ACM Workshop on Scalable Trusted Computing (ACM), pp. 59–64 (2011)

  30. Jin, C., Burleson, W., van Dijk, M., Rührmair, U.: Erasable PUFs: formal treatment and generic design. In: Proceedings of the 4th ACM Workshop on Attacks and Solutions in Hardware Security, pp. 21–33 (2020)

  31. Rührmair, U., Jaeger, C., Bator, M., Stutzmann, M., Lugli, P., Csaba, G.: Applications of high-capacity crossbar memories in cryptography. IEEE Trans. Nanotechnol. 10(3), 489–498 (2011)


  32. Gassend, B., Clarke, D., van Dijk, M., Devadas, S.: Controlled physical random functions. In: Computer Security Applications Conference, 2002. Proceedings. 18th Annual (IEEE), pp. 149–160 (2002)

  33. Gassend, B., Dijk, M.V., Clarke, D., Torlak, E., Devadas, S., Tuyls, P.: Controlled physical random functions and applications. ACM Trans. Inf. Syst. Secur. 10(4), 3 (2008)


  34. Rostami, M., Majzoobi, M., Koushanfar, F., Wallach, D.S., Devadas, S.: Robust and reverse-engineering resilient PUF authentication and key-exchange by substring matching. IEEE Trans. Emerg. Top. Comput. 2(1), 37–49 (2014)

  35. Yu, M.D., Hiller, M., Delvaux, J., Sowell, R., Devadas, S., Verbauwhede, I.: A lockdown technique to prevent machine learning on PUFs for lightweight authentication. IEEE Trans. Multi-Scale Comput. Syst. 2(3), 146–159 (2016)

  36. Majzoobi, M., Koushanfar, F., Potkonjak, M.: Techniques for design and implementation of secure reconfigurable PUFs. ACM Trans. Reconfigurable Technol. Syst. 2(1), 1–33 (2009)

  37. Rührmair, U., van Dijk, M.: On the practical use of physical unclonable functions in oblivious transfer and bit commitment protocols. J. Crypt. Eng. 3(1), 17–28 (2013)


  38. Rührmair, U., Sölter, J., Sehnke, F.: On the foundations of physical unclonable functions. IACR Cryptol. ePrint Arch. 2009, 277 (2009)


  39. Rührmair, U., Busch, H., Katzenbeisser, S.: Strong PUFs: models, constructions, and security proofs. In: Towards Hardware-intrinsic Security (Springer), pp. 79–96 (2010)

  40. Armknecht, F., Moriyama, D., Sadeghi, A.R., Yung, M.: Towards a unified security model for physically unclonable functions. In: Cryptographers’ Track at the RSA Conference (Springer), pp. 271–287 (2016)

  41. Rührmair, U.: Physical turing machines and the formalization of physical cryptography. IACR Cryptol. ePrint Arch. 2011, 188 (2011)


  42. Rührmair, U., Sehnke, F., Sölter, J., Dror, G., Devadas, S., Schmidhuber, J.: Modeling attacks on physical unclonable functions. In: Proceedings of the 17th ACM Conference on Computer and Communications Security (ACM), pp. 237–249 (2010)

  43. Rührmair, U., Sölter, J.: PUF modeling attacks: an introduction and overview. In: 2014 Design, Automation and Test in Europe Conference and Exhibition (DATE) (IEEE), pp. 1–6 (2014)

  44. Herder, C., Yu, M.D., Koushanfar, F., Devadas, S.: Physical unclonable functions and applications: a tutorial. Proc. IEEE 102(8), 1126–1141 (2014)


  45. Buldas, A., Laud, P., Lipmaa, H.: Accountable certificate management using undeniable attestations. In: Proceedings of the 7th ACM Conference on Computer and Communications Security (ACM), pp. 9–17 (2000)

  46. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C., et al.: Introduction to Algorithms, vol. 2. MIT Press, Cambridge (2001)


  47. Rührmair, U.: SoK: Towards secret-free security. In: 2020 Workshop on Attacks and Solutions in Hardware Security (ASHES@CCS 2020) (2020)

  48. Standaert, F.X.: Introduction to side-channel attacks. In: Secure Integrated Circuits and Systems (Springer), pp. 27–42 (2010)

  49. Wang, H., Forte, D., Tehranipoor, M.M., Shi, Q.: Probing attacks on integrated circuits: challenges and research opportunities. IEEE Des. Test 34(5), 63–71 (2017)


  50. Wisiol, N., Mühl, C., Pirnay, N., Nguyen, P.H., Margraf, M., Seifert, J.P., van Dijk, M., Rührmair, U.: Splitting the interpose PUF: a novel modeling attack strategy. IACR Trans. Cryptographic Hardware Embedded Syst., pp. 97–120 (2020)

  51. Tajik, S., Dietz, E., Frohmann, S., Seifert, J.P., Nedospasov, D., Helfmeier, C., Boit, C., Dittrich, H.: Physical characterization of arbiter PUFs. In: Cryptographic Hardware and Embedded Systems—CHES 2014 (Springer), pp. 493–509 (2014)

  52. Barenghi, A., Breveglieri, L., Koren, I., Naccache, D.: Fault injection attacks on cryptographic devices: theory, practice, and countermeasures. Proc. IEEE 100(11), 3056–3076 (2012)


  53. Nguyen, P.H., Sahoo, D.P., Jin, C., Mahmood, K., Rührmair, U., van Dijk, M.: The interpose PUF: secure PUF design against state-of-the-art machine learning attacks. IACR Trans. Cryptographic Hardware Embedded Syst. (2019)

  54. Tobisch, J., Aghaie, A., Becker, G.T.: Combining optimization objectives: new machine-learning attacks on strong pufs. IACR Cryptol. ePrint Arch. 2020, 957 (2020)

  55. Herder, C., Ren, L., van Dijk, M., Yu, M.D., Devadas, S.: Trapdoor computational fuzzy extractors and stateless cryptographically-secure physical unclonable functions. IEEE Trans. Depend. Secure Comput. 14(1), 65–82 (2016)

  56. Jin, C., Herder, C., Ren, L., Nguyen, P.H., Fuller, B., Devadas, S., van Dijk, M.: FPGA implementation of a cryptographically-secure PUF based on learning parity with noise. Cryptography 1(3), 23 (2017)

  57. Menezes, A.J., van Oorschot, P.C., Vanstone, S.A.: Handbook of Applied Cryptography. CRC Press (1996)

  58. NIST: Advanced Encryption Standard (AES). Federal Information Processing Standards Publication FIPS-197 (2001)

  59. Tuyls, P., Škorić, B.: Strong authentication with physical unclonable functions. In: Security, Privacy, and Trust in Modern Data Management (Springer), pp. 133–148 (2007)

  60. Kilian, J.: Founding cryptography on oblivious transfer. In: Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing (ACM), pp. 20–31 (1988)

  61. Becker, G.T.: The gap between promise and reality: On the insecurity of XOR arbiter PUFs. In: Cryptographic Hardware and Embedded Systems–CHES 2015 (Springer), pp. 535–555 (2015)

  62. Liu, Q., Safavi-Naini, R., Sheppard, N.P.: Digital rights management for content distribution. In: Proceedings of the Australasian Information Security Workshop Conference on ACSW Frontiers 2003, Vol. 21, pp. 49–58 (2003)

  63. Sarmenta, L.F., van Dijk, M., O’Donnell, C.W., Rhodes, J., Devadas, S.: Virtual monotonic counters and count-limited objects using a TPM without a trusted OS. In: Proceedings of the First ACM Workshop on Scalable Trusted Computing (ACM), pp. 27–42 (2006)

  64. Bayer, R.: Symmetric binary b-trees: data structure and maintenance algorithms. Acta Inform. 1(4), 290–306 (1972)



Acknowledgements

Chenglu Jin was supported by NSF award CNS 1617774, NYU CCS, and NYU CUSP. Wayne Burleson was supported by NSF/SRC grant CNS-1619558. Marten van Dijk was supported by NSF award CNS 1617774. Ulrich Rührmair acknowledges support by BMBF-project QUBE and by BMBF-project PICOLA and by the AFOSR Project on Highly Secure Nonlinear Optical PUFs.

Author information

Correspondence to Chenglu Jin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Background on authenticated search trees and red-black trees

Fig. 6

Proof construction in an authenticated search tree. Suppose that one needs to prove that \(c_{new}\) does not exist in the authenticated search tree (containing \(c_0\) to \(c_5\)). For example, the dashed node shows the location where \(c_{new}\) is supposed to be. The green information is included in the proof of non-existence for \(c_{new}\). Note that the hash value stored in the left child of \(c_4\) is also needed in the proof, but it is omitted in the diagram, because it is a nil node in the tree (color figure online)

Fig. 7

Insertion of a new node 4

An authenticated search tree was introduced in [45] as an undeniable attester. In the context of our GeniePUF, an untrusted Red-Black Tree (RBT) interface is used, which manages the LIST of size n. It takes a challenge as input and generates a proof of existence or non-existence of this challenge in the LIST. Note that the proof is only \(\mathcal {O}(\log (n))\) long. Upon receiving the proof, the TCB around the PUF verifies it against a constant-sized (\(\mathcal {O}(1)\)) root hash stored in the TCB. This root hash need not be kept secret, i.e., it may be known to adversaries; it must merely be protected against alteration or overwriting by adversaries.

To further improve the performance of an authenticated search tree in the worst case, in which a standard search tree becomes extremely unbalanced, we merge a red-black tree (RB tree) [46, 64] with the authenticated search tree in the untrusted memory. In short, a red-black tree is a self-balancing binary search tree [46, 64] that checks and re-balances the depth of the tree after every node insertion and deletion. Hence, a red-black tree guarantees searching in \(\mathcal {O}(\log (n))\) time in both the average and the worst case, where n is the total number of nodes in the tree [46].

In the following, we describe only the procedures necessary for our authenticated red-black trees (for instance, we describe node insertion but not deletion, because in the GeniePUF application, the LIST can only grow). In particular, we present the high-level idea of the following basic schemes of our combined tree structure to prepare the reader for the rest of this paper.

An authenticated search tree is sorted according to the challenges stored in each node, and it is constructed in such a way that each node consists of a unique challenge \(c_i\) in the LIST and a hash value \(h_i = F_\mathsf{Hash}(c_i, left(c_i).hash, right(c_i).hash)\), where \(left(c_i).hash\) and \(right(c_i).hash\) are the hash values stored in the left or right child of node \(c_i\), respectively. The hash values of the children of the bottom leaves are considered to be 0 by default. An example tree structure is shown in Fig. 6.
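For concreteness, this hash construction can be sketched in a few lines. The following is a minimal illustration that assumes SHA-256 as \(F_\mathsf{Hash}\) and 64-bit integer challenges; these choices, and all names, are ours, not the paper's implementation:

```python
import hashlib

NIL = bytes(32)  # default hash (0) for the children of bottom leaves

def f_hash(challenge: int, left_hash: bytes, right_hash: bytes) -> bytes:
    """h_i = F_Hash(c_i, left(c_i).hash, right(c_i).hash)."""
    data = challenge.to_bytes(8, "big") + left_hash + right_hash
    return hashlib.sha256(data).digest()

# A three-node tree: challenge 20 at the root, leaves 10 and 30.
h_left = f_hash(10, NIL, NIL)
h_right = f_hash(30, NIL, NIL)
root_hash = f_hash(20, h_left, h_right)  # the value the TCB would store
```

Changing any challenge or any child hash changes root_hash, which is what the constant-sized root hash in the TCB ultimately protects.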

Scheme 3

(Searching for a Challenge \(c_i\) in a RBT)

  1. 1

    The RBT interface receives a challenge \(c_i\).

  2. 2

    The RBT interface searches for \(c_i\), using the RBT as an ordinary binary search tree.

  3. 3

    In the end, it results in two cases:

    • If \(c_i\) is found, then a pointer to the node associated with \(c_i\) is returned.

    • If the binary search for \(c_i\) within the RBT reaches a leaf position where no challenge is stored, then the interface returns a pointer to the parent node of that empty leaf. (This parent node is the lowest node in the tree whose child \(c_i\) would have been, if \(c_i\) were part of the RBT.) In the example in Fig. 6, the returned pointer points to the node containing \(c_4\).
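Scheme 3 is an ordinary binary search that, on failure, returns the last internal node visited. A minimal sketch with an illustrative Node type (the red/black colors are omitted, as they do not affect the search):

```python
class Node:
    def __init__(self, challenge, left=None, right=None):
        self.challenge = challenge
        self.left = left
        self.right = right

def search(root, c):
    """Return (node, True) if c is found, otherwise (parent, False),
    where parent is the lowest node whose missing child c would have been."""
    node = root
    while True:
        if c == node.challenge:
            return node, True
        child = node.left if c < node.challenge else node.right
        if child is None:
            return node, False
        node = child

# Example tree: 20 at the root with children 10 and 30.
root = Node(20, Node(10), Node(30))
```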

Scheme 4

(Generating a PROOF of Existence/Non-Existence of a Challenge \(c_i\) in a RBT)

  1. 1

    After a search for \(c_i\) in the LIST, as described in Scheme 3, is completed (whether \(c_i\) was found or not), the RBT interface obtains a node of the tree from the search procedure and sets this node as the starting node of the PROOF.

  2. 2

    The interface adds the challenge of the starting node and the hash values stored in the children nodes of the starting node into the PROOF. Again, taking the example of Fig. 6, the information added is \(c_4\) and the hash values of its two children (two nil nodes).

  3. 3

    Then, the RBT interface walks along the path from the starting node to the root; for each node on this path, it fetches that node's challenge together with the hash value stored in the sibling of the node it came from, and adds them to the PROOF. This completes the PROOF of existence/non-existence of \(c_i\); in the example of Fig. 6, \((c_1, h_3)\) and \((c_0, h_2)\) are added to the PROOF.

  4. 4

    It returns the completed PROOF.

The proof construction process is also illustrated in Fig. 6.
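The steps of Scheme 4 can be sketched as follows. The list-of-tuples PROOF format and the hash_of helper (returning the stored hash of a node, or a fixed nil hash for an empty child) are our own illustration, not the paper's implementation:

```python
class Node:
    def __init__(self, challenge, left=None, right=None):
        self.challenge, self.left, self.right = challenge, left, right

def make_proof(root, c, hash_of):
    """Build the PROOF for c: first the starting node's challenge and its
    children's hashes, then (challenge, sibling hash) for every node on
    the path from the starting node up to the root."""
    path, node = [], root
    while node is not None and c != node.challenge:
        path.append(node)
        node = node.left if c < node.challenge else node.right
    # Found: start at c's node. Not found: start at the parent of the
    # empty leaf, i.e., the last node visited during the search.
    start = node if node is not None else path.pop()
    proof = [(start.challenge, hash_of(start.left), hash_of(start.right))]
    child = start
    for parent in reversed(path):
        sibling = parent.right if parent.left is child else parent.left
        proof.append((parent.challenge, hash_of(sibling)))
        child = parent
    return proof

# Toy tree with placeholder stored hashes (byte strings, not real hashes).
hashes = {10: b"h10", 20: b"h20", 30: b"h30"}
hash_of = lambda n: hashes[n.challenge] if n is not None else b"nil"
root = Node(20, Node(10), Node(30))
proof = make_proof(root, 25, hash_of)  # non-existence proof for 25
```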

Scheme 5

(Verifying a PROOF of Existence/Non-Existence)

  1. 1

    All the proofs generated by the RBT interface have to be verified by the trusted control logic CL. After a proof is received, the CL checks the starting node first. If it is a proof of non-existence, the CL checks whether the left/right child of the starting node is a leaf node, based on whether c is smaller/greater than the challenge in the starting node. In the case of an existence proof, the CL verifies the order of the two children and the starting node. If any of these checks fails, the CL returns “\(\bot \)”.

  2. 2

    Then, the CL hashes every node from the starting node of the proof all the way up to the root, using the challenge value of each node and the sibling hash values provided in the proof. The order of the left and right child at each step is determined by comparing two consecutive challenges in the PROOF. The final result is RootHash’.

  3. 3

    Check if RootHash’ = RootHash stored in the TCB:

    • If yes, we conclude that the PROOF is valid. Based on whether it is an existence or a non-existence proof, we conclude whether \(c_i\) is in the LIST or not.

    • If no, the PROOF is considered invalid, and we conclude that either the LIST or the RBT interface has been tampered with by an attacker.
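Steps 2 and 3 of Scheme 5 can be sketched as below; the structural order checks of step 1 are omitted for brevity, and SHA-256 stands in for the hash function (as elsewhere, the names are illustrative):

```python
import hashlib

NIL = bytes(32)  # nil-child hash

def f_hash(c, left_h, right_h):
    return hashlib.sha256(c.to_bytes(8, "big") + left_h + right_h).digest()

def verify_proof(proof, trusted_root_hash):
    """Rebuild RootHash' bottom-up from the PROOF and compare it with the
    RootHash stored in the TCB. The left/right order at each level is
    determined by comparing two consecutive challenges in the PROOF."""
    c, left_h, right_h = proof[0]        # starting node of the PROOF
    h = f_hash(c, left_h, right_h)
    prev = c
    for parent_c, sibling_h in proof[1:]:
        if prev < parent_c:              # we came up from the left child
            h = f_hash(parent_c, h, sibling_h)
        else:                            # we came up from the right child
            h = f_hash(parent_c, sibling_h, h)
        prev = parent_c
    return h == trusted_root_hash

# TCB-side state for a tree with root 20 and leaves 10 and 30:
h10, h30 = f_hash(10, NIL, NIL), f_hash(30, NIL, NIL)
root_hash = f_hash(20, h10, h30)

# Non-existence proof for c = 25: starting node 30, then (20, sibling h10).
proof = [(30, NIL, NIL), (20, h10)]
```

A tampered PROOF (e.g., a wrong sibling hash) yields a different RootHash’ and is rejected, which reflects the collision-resistance argument the security rests on.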

Scheme 6

(Adding a New Challenge \(c_i\) to the RBT)

  1. 1

    In the case that a new challenge \(c_i\) needs to be added to the LIST, the RBT interface first proves that \(c_i\) is not in the LIST using the above schemes.

  2. 2

    If the non-existence of \(c_i\) gets accepted by the verifier, then \(c_i\) is added as a child of the node returned by the search procedure.

  3. 3

    After insertion, a red-black tree fixup is triggered. It may rotate the structure of the tree to re-balance it. More details about the red-black tree fixup can be found in the example in Appendix B and [46].

  4. 4

    After the fixup, a new RootHash is generated by the trusted control logic CL, according to the fixup information of the tree and the proof of non-existence from Step 1.

Note that, by the way the authenticated search tree is constructed and verified, its security relies solely on the collision resistance of the underlying hash function.
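Steps 1 and 2 of Scheme 6 reduce to a verified binary-search-tree insertion. The sketch below shows only the insertion itself; the PROOF exchange of step 1 and the red-black fixup and RootHash update of steps 3 and 4 are omitted, and the names are illustrative:

```python
class Node:
    def __init__(self, challenge, left=None, right=None):
        self.challenge, self.left, self.right = challenge, left, right

def insert_challenge(root, c):
    """Attach c as a child of the lowest node whose child it must become,
    i.e., the node returned by the search of Scheme 3. In GeniePUF this
    happens only after the non-existence PROOF for c has been accepted."""
    node = root
    while True:
        if c == node.challenge:
            raise ValueError("challenge already in LIST")
        if c < node.challenge:
            if node.left is None:
                node.left = Node(c)
                return node.left
            node = node.left
        else:
            if node.right is None:
                node.right = Node(c)
                return node.right
            node = node.right

root = Node(20, Node(10), Node(30))
new_node = insert_challenge(root, 25)  # becomes the left child of 30
```

Since the LIST can only grow, deletion never needs to be supported, which is why Scheme 6 is the only mutating operation.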

Appendix B: Example rotation of an authenticated RB tree

Figure 7 depicts an example of consecutive operations in the Red-Black Tree Insert-Fixup, see [46]. (a) A new node 4 is inserted. The dashed path in (a) is the PROOF. All the information in nodes 5, 7, 2, and 11 is included in the PROOF, together with the hash values of nodes 8, 1, and 14, called the siblings’ hash values. To verify non-existence, we reconstruct the root hash using the PROOF and compare it with the trusted root hash stored in the TCB. In addition, we check whether the new node 4 is added at the correct location, which means \(2< 4 <5\) and node 5 has no left child. Here, case 1 in [46] applies, so nodes 5 and 7 are recolored but the structure remains the same.

There are six possible cases in an RB tree fixup, of which only cases 2, 3, 5, and 6 in [46] rotate the structure of the tree; this example shows three cases (the remaining three are their mirrored versions). In (b), (c), and (d), the nodes in dashed blocks are the nodes whose hash values need to be updated; the transitions from (b) to (c) and from (c) to (d) are rotations. Note that the PROOF already provides all the information needed for updating these hash values. In this example, in order to compute the hashes of nodes 2, 7, and 11 in (d), we need the hash value of node 5, which was updated in case 1 during the transition from (a) to (b), and the hash values of nodes 1, 8, and 14, which are exactly the siblings’ hash values contained in the PROOF.

Appendix C: Proofs of theorems

In the following proofs, we assume that ignoring operations or communication does not increase the original execution time \(t_\mathsf{att}\) of an adversary.

1.1 C.1 Proof of Theorem 1

Proof

We will show the contraposition of the above statement, assuming that P is not a \((k, t_\mathsf{att}, \epsilon )\)-secure strong PUF with respect to some adversary \(\mathcal {A}\). By Definition 2, this implies that there exists an adversary \(\mathcal {A}\) who is capable of winning the security game \(\mathbf {SecGameStrong} \, (P, \mathcal {A}, k, t_\mathsf{att})\) of Definition 2 with probability greater than \(\epsilon \). This, in turn, means that \(\mathcal {A}\) can predict the correct response to one out of k uniformly randomly chosen challenges \(c^j \in C_P\) with probability greater than \(\epsilon \), whereby the time that \(\mathcal {A}\) requires for his physical actions and numeric computations does not exceed \(t_\mathsf{att}\).

We notice that the very same adversary \(\mathcal {A}\) will also win the security game \(\mathbf {SecGameErasable} \, (P, \mathcal {A}, k, t_\mathsf{att})\) with probability greater than \(\epsilon \). The reason for this is that the execution of the security game \(\mathbf {SecGameErasable} \, (P, \mathcal {A}, k, t_\mathsf{att})\) with \(c^j=c^j_\mathsf{erase}\) is identical to the execution of the security game \(\mathbf {SecGameStrong} \, (P, \mathcal {A}, k, t_\mathsf{att})\) because adversary \(\mathcal {A}\) in \(\mathbf {SecGameErasable} \, (P, \mathcal {A}, k, t_\mathsf{att})\) never attempts to query an erased challenge \(c^j=c^j_\mathsf{erase}\). This implies that P is not a \((k, t_\mathsf{att}, \epsilon )\)-secure erasable PUF, completing our contraposition argument. \(\square \)

1.2 C.2 Proof Sketch of Theorem 2

Proof Sketch. Let \(\mathcal {A}\) be any adversary that is modeled by Definition 5. We define a series of games that reduce

$$\begin{aligned} \mathbf {SecGameErasable} \, (P, \mathcal {A}, k, t_\mathsf{att}), \end{aligned}$$

with probability of winning denoted by \(\epsilon _\mathsf{erase}\), to

$$\begin{aligned} \mathbf {SecGameStrong} \, (P, \mathcal {A}', k, t_\mathsf{att}), \end{aligned}$$

where \(\epsilon \) is the probability of winning as stated in the theorem.

We first modify \(\mathbf {SecGameErasable}\) by assuming an adversary \(\mathcal {A}^0\) who is like \(\mathcal {A}\) but who cannot produce a valid PROOF for an invalid claim that a challenge was not erased in its interactions with GeniePUF(P). We call this new game \(\mathbf {SecGameErasable}^0\) and denote the probability of winning this game by \(\epsilon _0\). By the implicit assumptions on the capabilities of the adversary in Definition 5, we know that the control logic CL and PUF P cannot be modified. Therefore, the only way to produce a valid PROOF for an (erased) challenge c in RBT is to find a collision for the hash function. By Theorem 1 in Section 6.2 of [45], the probability of finding a valid PROOF is at most \(\rho \). This shows that

$$\begin{aligned} \epsilon _\mathsf{erase} \le \epsilon _0 + \rho . \end{aligned}$$

Not being able to provide a valid PROOF for an invalid claim in \(\mathbf {SecGameErasable}^0\) means that the GeniePUF(P) does not produce responses for erased challenges. This is similar to the same game \(\mathbf {SecGameErasable}^0\) where in Step 4a only a challenge \(c^j_\mathsf{erase}\) is chosen at random but not erased, and with the restriction that the adversary is not allowed to query \(c_\mathsf{erase}^{j}\) after \(c_\mathsf{erase}^{j}\) is given to the adversary in Step 4b. We call this game \(\mathbf {SecGameErasable}^1\). We now define \(\mathcal {A}^1\) as adversary \(\mathcal {A}^0\) by discarding any erasure operations which \(\mathcal {A}^0\) asks for in Step 2 or Step 4c (these operations cannot lead to feedback from GeniePUF(P) which contains predictive information that can be used in Step 5). For \(\mathcal {A}^1\), we can now conclude that game \(\mathbf {SecGameErasable}^1\) has winning probability \(\epsilon _1\) for which

$$\begin{aligned} \epsilon _0= \epsilon _1. \end{aligned}$$

Notice that \(\mathbf {SecGameErasable}^1\) does not implement any erasure operations. Because \(\mathbf {SecGameErasable}^1\) disallows querying any of the \(c_\mathsf{erase}^j\) after being selected in Step 4a and communicated to \(\mathcal {A}^1\) in Step 4b, the control logic CL of GeniePUF(P) simply provides direct access to P for the queries by \(\mathcal {A}^1\), and provides no other functionality. This means \(\mathbf {SecGameErasable}^1\) results directly in a game for PUF P where we have conceptually stripped away the control logic of GeniePUF(P).

Unrolling all the steps in \(\mathbf {SecGameErasable}^1\) for P shows its equivalence with \(\mathbf {SecGameStrong}\). We now define \(\mathcal {A}'\) as \(\mathcal {A}^1\) where any attempt by \(\mathcal {A}^1\) to read state in RBT or control logic CL is replaced by dummy observations. For \(\mathcal {A}'\), we may now conclude that \(\mathbf {SecGameStrong}\) has winning probability

$$\begin{aligned} \epsilon _1=\epsilon . \end{aligned}$$

By combining all inequalities and equations we have

$$\begin{aligned} \epsilon _\mathsf{erase}\le \epsilon +\rho . \end{aligned}$$


About this article


Cite this article

Jin, C., Burleson, W., van Dijk, M. et al. Programmable access-controlled and generic erasable PUF design and its applications. J Cryptogr Eng 12, 413–432 (2022). https://doi.org/10.1007/s13389-022-00284-z

