The paper introduces the concept of ML property attestation, which allows a prover (e.g., a model trainer) to demonstrate relevant properties of a machine learning model to a verifier (e.g., a customer or regulator) while preserving the confidentiality of sensitive training data. The authors focus on the attestation of distributional properties of training data, such as the diversity of the population represented, without revealing the data itself.
The authors identify four key requirements for property attestation mechanisms: effectiveness, efficiency, confidentiality-preservation, and adversarial robustness. They discuss three different approaches to distributional property attestation: inference-based attestation, cryptographic attestation, and a hybrid approach combining the benefits of both.
Inference-based attestation adapts property inference attacks to the attestation setting: the verifier runs a property inference protocol against the prover's model to check whether the claimed distributional property holds. Cryptographic attestation uses secure multi-party computation (MPC) protocols to prove both the distributional properties of the training data and that the model was actually trained on the attested data. The hybrid approach first runs the inference-based attestation and falls back on the cryptographic attestation if needed.
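To make the hybrid flow more concrete, below is a minimal Python sketch of how a verifier might chain the two mechanisms: try the cheap inference-based check first, and fall back to the MPC-based attestation when the check does not accept. All names here (`query_model`, `meta_classifier`, `run_mpc_attestation`, the confidence threshold) are illustrative assumptions, not the paper's actual interfaces or protocols.

```python
# Hypothetical sketch of the hybrid attestation flow; not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class AttestationResult:
    accepted: bool
    method: str  # "inference" or "cryptographic"


def inference_based_attestation(
    query_model: Callable[[Sequence[float]], Sequence[float]],
    meta_classifier: Callable[[Sequence[float]], float],
    probe_inputs: Sequence[Sequence[float]],
    confidence_threshold: float,
) -> tuple[bool, float]:
    """Verifier-side property inference: query the prover's model on probe
    inputs and feed the concatenated outputs to a meta-classifier that
    scores whether the claimed distributional property holds."""
    outputs = [y for x in probe_inputs for y in query_model(x)]
    confidence = meta_classifier(outputs)
    return confidence >= confidence_threshold, confidence


def hybrid_attestation(
    query_model: Callable[[Sequence[float]], Sequence[float]],
    meta_classifier: Callable[[Sequence[float]], float],
    probe_inputs: Sequence[Sequence[float]],
    run_mpc_attestation: Callable[[], bool],  # stand-in for the MPC protocol
    confidence_threshold: float = 0.9,        # assumed, not from the paper
) -> AttestationResult:
    """Run the inexpensive inference-based check first; fall back to the
    robust but costly cryptographic attestation if it does not accept."""
    accepted, _confidence = inference_based_attestation(
        query_model, meta_classifier, probe_inputs, confidence_threshold
    )
    if accepted:
        return AttestationResult(accepted=True, method="inference")
    # Inconclusive or rejected: run the MPC-based protocol, which proves the
    # property directly over the (still confidential) training data.
    return AttestationResult(accepted=run_mpc_attestation(), method="cryptographic")
```

The design choice this sketch illustrates is the cost asymmetry the paper motivates: the inference-based path only needs black-box model queries, while the cryptographic path is invoked only when that cheaper check cannot establish the property.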
The authors provide an extensive empirical evaluation of the three approaches, demonstrating their strengths and limitations. They show that inference-based attestation can be effective for certain property values but lacks robustness, while cryptographic attestation is effective and robust but computationally expensive. The hybrid approach balances these trade-offs, providing a practical solution for distributional property attestation.
Source: by Vasisht Dudd... at arxiv.org, 04-02-2024, https://arxiv.org/pdf/2308.09552.pdf