Core Concept
Major AI developers should provide legal and technical safe harbors to protect independent public interest research on generative AI systems.
Summary
To protect independent public interest research, major AI developers should provide legal and technical safe harbors. These protections would facilitate critical research on generative AI systems and strengthen community efforts. The proposal shields researchers from account suspensions and legal reprisals, enabling inclusive and comprehensive evaluations.
Key Findings
Generative AI systems have raised concerns for widespread misuse, bias, hate speech, privacy concerns, disinformation, self-harm, copyright infringement, fraud, weapons acquisition, and proliferation of non-consensual images.
Transparent audits show only 25% of policy enforcement criteria were satisfied on average.
Leading AI companies' terms of service prohibit independent evaluation of sensitive model flaws.
Companies such as OpenAI have sought to dismiss lawsuits against them by alleging that the plaintiffs' research activity constituted hacking.
Midjourney updated its Terms of Service to include penalties for conducting research that infringes intellectual property.
Quotations
"Companies cannot be allowed to assign and mark their own homework." - Ada Lovelace Institute
"Independent research has uncovered unexpected flaws, aiding company efforts and expanding collective knowledge." - Content excerpt
"A legal safe harbor could mitigate risks from civil litigation, providing assurances that AI platforms will not sue researchers if their actions were taken for research purposes." - Content excerpt