Core Concepts
AI systems must be designed to be regulatable, so that their adherence to regulatory requirements can be verified.
Summary
The article discusses the challenges of regulating AI systems, focusing on technical gaps and policy opportunities. It argues that AI systems must be vettable for adherence to regulatory requirements, and that technical innovation is needed to make systems regulatable in practice. The content is structured into sections covering Algorithmic Impact Assessment, Transparency, Quality Assurance, Recourse, Reporting, Data Checks, System Monitoring, Model Validation, Local Explanations, Objective Design, and Privacy. It examines the technical criteria drawn from public-sector AI procurement checklists, highlighting the interdisciplinary approaches and technical innovations needed to bridge the gaps in regulating AI systems effectively.
Statistics
"Before launching into production, developing processes so that the data and information used by the Automated Decision Systems are tested for unintended data biases and other factors that may unfairly impact the outcomes."
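The requirement above calls for testing data for unintended biases before launch. One common minimal check, sketched below, is the "four-fifths" disparate impact ratio: compare favorable-outcome rates across groups and flag the dataset if the lowest rate falls below 80% of the highest. The column names, sample data, and 0.8 threshold here are illustrative assumptions, not taken from the source.

```python
from collections import defaultdict

def disparate_impact_ratio(records, group_key, outcome_key):
    """Ratio of the lowest to the highest favorable-outcome rate across
    groups. Values below 0.8 are often treated as a signal of potential
    bias (the 'four-fifths rule') and trigger a manual review."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += 1 if r[outcome_key] else 0
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical pre-launch batch: group A approved 3/4, group B approved 1/4.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
ratio = disparate_impact_ratio(data, "group", "approved")
print(ratio)  # 0.333... -> below 0.8, so this batch would be flagged
```

A single ratio like this is only a screening heuristic; a real pre-launch process would combine several fairness metrics with domain review.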
"Validating that the data collected for, and used by, the Automated Decision System is relevant, accurate, up-to-date, and in accordance with the Policy on Service and Digital and the Privacy Act."
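The requirement above asks that input data be validated as relevant, accurate, and up-to-date. A minimal sketch of such a gate is shown below: each record is checked for required fields and for freshness against a maximum age. The field names, the `updated_at` convention, and the 365-day window are assumptions made for illustration.

```python
from datetime import datetime, timedelta, timezone

def validate_record(record, required_fields, max_age_days=365):
    """Return a list of validation errors for one input record:
    missing/empty required fields, and data older than the
    freshness window (assumed 365 days here)."""
    errors = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            errors.append(f"missing field: {field}")
    updated = record.get("updated_at")
    if updated is not None:
        age = datetime.now(timezone.utc) - updated
        if age > timedelta(days=max_age_days):
            errors.append(f"stale data: {age.days} days old")
    return errors

# A fresh, complete record passes; a stale, incomplete one is rejected.
fresh = {"name": "applicant-1", "updated_at": datetime.now(timezone.utc)}
print(validate_record(fresh, ["name"]))  # []
stale = {"name": "", "updated_at": datetime.now(timezone.utc) - timedelta(days=400)}
print(validate_record(stale, ["name"]))
```

Lawfulness checks (consent, retention, disposal) cannot be reduced to code like this and would sit alongside such validation as policy controls.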
"Establishing measures to ensure that data used and generated by the automated decision system are traceable [fingerprinting], protected and accessed appropriately, and lawfully collected, used, retained, and disposed."
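The requirement above mentions making data traceable via fingerprinting. One minimal way to do this, sketched below, is to serialize each data batch in a canonical form and store its SHA-256 digest in an audit log, so reviewers can later verify that the exact data a decision was based on has not changed. The serialization scheme and field names are illustrative assumptions.

```python
import hashlib
import json

def fingerprint_dataset(records):
    """Deterministic SHA-256 fingerprint of a dataset: serialize the
    records in a canonical JSON form (sorted keys, fixed separators),
    then hash the bytes. Identical data always yields the same digest;
    any modification yields a different one."""
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical batch used by an automated decision system.
batch = [{"id": 1, "income": 42000}, {"id": 2, "income": 58000}]
digest = fingerprint_dataset(batch)
print(digest)  # 64-character hex digest, stable across runs
```

Recording such a digest at decision time ties each outcome to a verifiable snapshot of its inputs, which supports the access-control and retention measures the quote describes.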
Quotes
"Public institutions cannot rely on black-box algorithms to justify decisions that affect individual and collective citizens’ rights, especially with the increased understanding about algorithmic bias and its discriminatory effects on access to public resources."
"With AI solutions that make decisions affecting people’s rights and benefits, it is less important to know exactly how a machine-learning model has arrived at a result if we can show logical steps to achieving the outcome."