The paper addresses the challenge of efficiently verifying cryptographic commitments to machine learning (ML) models and data in zero-knowledge machine learning (zkML) pipelines. While recent advances in zkML have substantially improved the efficiency of proving ML computations correct, the overhead associated with verifying the necessary commitments has remained a significant bottleneck.
The paper introduces two new Commit-and-Prove SNARK (CP-SNARK) constructions:

- Apollo: This construction simplifies the process of aligning external commitments with the internal witness representation in Plonk-style proof systems. It achieves substantial performance improvements over the state-of-the-art Lunar CP-SNARK.
- Artemis: This construction makes only black-box use of the underlying proof system and supports any homomorphic polynomial commitment scheme, including those used in modern proof systems like Halo2 that do not require a trusted setup.
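Artemis's requirement that the polynomial commitment scheme be homomorphic can be illustrated with the simplest additively homomorphic commitment, a Pedersen-style commitment: the product of two commitments is a commitment to the sum of the committed values. The sketch below uses toy parameters (`P`, `Q`, `G`, `H` are illustrative values chosen here, not parameters from the paper) and stands in for the real schemes such as KZG or the inner-product commitments used in Halo2.

```python
import secrets

# Toy parameters: safe prime p = 2q + 1 with a prime-order subgroup of order q.
# These values are far too small for real security; they only show the algebra.
P, Q = 23, 11
G, H = 4, 9  # generators of the order-q subgroup; in practice the discrete log
             # of H with respect to G must be unknown to the committer.

def commit(m: int, r: int) -> int:
    """Pedersen commitment: C = g^m * h^r (mod p), with exponents mod q."""
    return (pow(G, m % Q, P) * pow(H, r % Q, P)) % P

# Additive homomorphism: combining commitments in the group combines the
# committed messages (and randomness) in the exponent.
m1, m2 = 3, 7
r1, r2 = secrets.randbelow(Q), secrets.randbelow(Q)

c1, c2 = commit(m1, r1), commit(m2, r2)
combined = (c1 * c2) % P
assert combined == commit(m1 + m2, r1 + r2)
```

This homomorphic structure is what lets a CP-SNARK relate an externally published commitment to the witness inside the proof without re-committing to the data in a scheme-specific, non-black-box way.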
The paper provides formal security proofs for Artemis and presents the first implementations of both CP-SNARK constructions. Evaluation on a diverse set of ML models, including large-scale models such as GPT-2, demonstrates substantial performance improvements over existing approaches, reducing the overhead of commitment checks by more than an order of magnitude. These contributions help move zkML toward practical deployment, particularly in scenarios involving complex and large-scale ML models.