A novel probabilistic approach for enhancing the robustness of trigger-set-based watermarking techniques against model stealing attacks.
AI ethics must move beyond decontextualized discussion toward value-chain perspectives that situate actors in context, account for the many types of resources involved in co-creating AI systems, and integrate a wider range of ethical concerns across contexts and scales.
Enforcing fairness constraints on samples with reliable sensitive attribute predictions can significantly improve the fairness-accuracy tradeoff compared to using all samples or samples with uncertain sensitive attributes.
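The claim above can be illustrated with a minimal, generic sketch: when sensitive attributes are unobserved, a proxy model predicts them, and a fairness metric is enforced only on samples whose predicted attribute is high-confidence. All names, the toy data, and the 0.9 threshold are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def select_reliable(proxy_probs, threshold=0.9):
    """Indices of samples whose sensitive-attribute prediction is confident."""
    return np.where(proxy_probs.max(axis=1) >= threshold)[0]

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy data: a proxy model's soft predictions of a binary sensitive attribute,
# and a classifier's binary decisions. Both are random placeholders.
rng = np.random.default_rng(0)
n = 200
proxy_probs = rng.dirichlet((0.5, 0.5), size=n)   # P(group) per sample
group_hat = proxy_probs.argmax(axis=1)            # hard proxy labels
y_pred = rng.integers(0, 2, size=n)               # classifier decisions

reliable = select_reliable(proxy_probs)
gap_all = demographic_parity_gap(y_pred, group_hat)
gap_reliable = demographic_parity_gap(y_pred[reliable], group_hat[reliable])
print(f"samples kept: {len(reliable)}/{n}")
print(f"parity gap (all vs. reliable): {gap_all:.3f} vs. {gap_reliable:.3f}")
```

The gap measured on the reliable subset is the quantity one would actually constrain during training; the paper's point is that constraining it on uncertain proxy labels can hurt both fairness and accuracy.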
A protocol for secure delegated quantum computation that drastically reduces the technological requirements for both the client and the server, while providing information-theoretic composable security.
It is feasible to automatically generate smart contract code from a semantic knowledge graph, in a way that respects blockchain's economic rules, to enable trustworthy healthcare decision-making in a distributed setting.
Acoustic features extracted from speech recordings can be biased by heterogeneous recording conditions, leading to spurious correlations between acoustic characteristics and patients' diagnoses.
Reporting non-consensual intimate media (NCIM) under the Digital Millennium Copyright Act (DMCA) leads to successful and prompt removal of content on X (Twitter), while reports made under the platform's internal non-consensual nudity policy result in no action taken over a three-week period.
WMCodec is an end-to-end neural speech codec that jointly optimizes compression-reconstruction and watermark embedding-extraction, enabling robust authenticity verification through deep cross-modal feature integration.
A lightweight defense mechanism, PAD-FT, that effectively disinfects poisoned deep neural network models without requiring additional clean data.
This paper introduces two new Commit-and-Prove SNARK constructions, Apollo and Artemis, that efficiently address the challenge of commitment verification in zero-knowledge machine learning (zkML) pipelines. These constructions significantly improve the efficiency of commitment checks compared to existing approaches, enabling practical deployment of zkML, particularly for large-scale models.