The paper discusses the importance of ensuring the replicability of both model performance claims and social claims made in machine learning (ML) research. It argues that the current focus on replicating model performance does not guarantee the replicability of social claims, a gap that can have significant consequences when such claims are used to justify deploying ML methods in real-world applications.
The paper first defines and distinguishes model performance replicability (MPR) from claim replicability (CR): MPR concerns whether a model's reported performance can be reproduced, while CR concerns whether the individual claims made in a paper hold up. It then emphasizes the importance of social claims, which are often only loosely connected to the main body of a paper and rarely engaged with in depth, despite their significant influence on how ML methods are adopted and used in practice.
The paper then makes the case that CR can help bridge the responsibility gap by holding ML scientists directly accountable for producing non-replicable claims. Drawing on the concepts of vicarious responsibility and moral entanglement, it argues that ML scientists have a strong moral obligation to ensure the replicability of their claims, since the role of scientist is central to their professional identity. The paper also discusses the challenges of assigning blame for violating the norm of replicability, which surface competing epistemological perspectives, and suggests that these can be reconciled through a more nuanced understanding of different types of claims and their intended audiences.
Finally, the paper explores the practical implications of CR, including its impact on the phenomenon of "circulating references" in ML research, the distribution of interpretive labor between ML scientists and users of their work, and the need for more thoughtful research communication practices.
Source: Tianqi Kou, arXiv, April 23, 2024. https://arxiv.org/pdf/2404.13131.pdf