Delay-based PUFs Attack Analysis with Minimal Adversary Model
Core Concepts
Delay-based PUFs are vulnerable to machine learning attacks, and a generic framework can efficiently model various PUF types with minimal adversarial knowledge.
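As an illustration of why such modelling is tractable, the sketch below simulates a single arbiter PUF under the standard additive delay model, where the response is a linear threshold function of "parity" features of the challenge. This is a generic, hedged illustration rather than the paper's framework; the stage count, random weights, and function names are assumptions made for the example.

```python
# Minimal sketch (illustrative only, not the paper's framework) of the
# standard additive delay model for an arbiter PUF. The response is the
# sign of a linear function of "parity" features of the challenge, which
# is what makes delay-based PUFs learnable from challenge-response pairs.
import numpy as np

rng = np.random.default_rng(0)

def parity_features(challenges):
    # Map {0,1} challenges of shape (N, n) to {-1,+1} parity features (N, n+1):
    # feature i is the product of the transformed bits from stage i onward,
    # plus a constant term for the final arbiter bias.
    phi = 1 - 2 * challenges
    phi = np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def arbiter_puf(weights, challenges):
    # Response = sign of the accumulated delay difference.
    return (parity_features(challenges) @ weights > 0).astype(int)

n_stages = 64
weights = rng.normal(size=n_stages + 1)            # hypothetical per-stage delay differences
challenges = rng.integers(0, 2, size=(5, n_stages))
print(arbiter_puf(weights, challenges))            # five simulated responses
```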
Attacking Delay-based PUFs with Minimal Adversary Model
Stats
The method of Rührmair et al. [4] outperforms the other methods on XOR-APUFs with small stage counts, winning on both training time (lower is better) and accuracy.
Logistic regression is the strongest of the classical machine learning attacks, but as the number of XORs grows, LR consumes the most training data and time (see the sketch after this list).
The attacks by Asseri et al. [10] and Mursi et al. [11] consumed more training data and time than the proposed method.
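For context on the logistic-regression baseline mentioned above, the following sketch attacks a simulated 2-XOR arbiter PUF with plain scikit-learn logistic regression over explicit product features. It is a stand-in for the specialized LR attack of Rührmair et al. [4], not their implementation; the stage count, CRP budget, and resulting accuracy are illustrative assumptions, and for larger XOR counts the data and time requirements grow far faster than this toy setup suggests.

```python
# Hedged sketch: plain logistic regression against a simulated 2-XOR arbiter PUF.
# Stand-in for the classical LR attack; all parameters below are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_stages, k, n_crps = 32, 2, 20_000        # small, illustrative sizes

def parity_features(challenges):
    # Same parity-feature transform as in the arbiter PUF sketch above.
    phi = 1 - 2 * challenges
    phi = np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# Simulate a k-XOR APUF: the XOR (product of signs) of k independent arbiter chains.
weights = rng.normal(size=(k, n_stages + 1))
challenges = rng.integers(0, 2, size=(n_crps, n_stages))
phi = parity_features(challenges)
responses = (np.prod(np.sign(phi @ weights.T), axis=1) > 0).astype(int)

# The 2-XOR response is linear in the pairwise products of parity features,
# so plain LR over that expanded feature space can separate it.
features = np.einsum('ni,nj->nij', phi, phi).reshape(n_crps, -1)

split = int(0.8 * n_crps)
model = LogisticRegression(max_iter=1000)
model.fit(features[:split], responses[:split])
print("test accuracy:", model.score(features[split:], responses[split:]))
```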
Quotes
"Physically Unclonable Functions (PUFs) offer a streamlined approach to security, suitable for both authentication and secure key generation."
"Machine learning attacks on PUFs have been a significant pain point when developing Strong PUFs, almost since their conception."
"A shared characteristic among these methods is their reliance on the type or structure of the target PUF as foundational information for modelling."