The article begins with the problem of computational dualism: because the behavior of software depends on the hardware that interprets it, claims about the behavior of theorized software superintelligence are undermined. The author argues that this problem has broader significance, echoing Descartes' interactionist substance dualism between mental and physical substances.
The author then proposes an alternative formulation based on enactivism, which holds that mind and body are inseparable and embedded in time and place. This is formalized with a pancomputational model of the environment, in which everything is a computational system. The formalism lets the author describe artificial minds in a purely behaviorist manner, in terms of inputs and outputs rather than the mechanism that maps one to the other.
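The behaviorist idea above can be illustrated with a minimal sketch: if a system is identified only by its input-output behavior, then two different mechanisms producing the same mapping are indistinguishable. The function and variable names below are illustrative assumptions, not notation from the paper.

```python
# Behaviorist sketch: a system is identified by its input-output pairs,
# not by the mechanism that produces them. Names here are illustrative.

def behaviour(system, inputs):
    """The extensional description of a system: its set of input-output pairs."""
    return {(i, system(i)) for i in inputs}

# Two different mechanisms...
lookup_table = {0: 0, 1: 1, 2: 4, 3: 9}.__getitem__  # a stored table

def computed(x):                                      # an arithmetic rule
    return x * x

# ...that are indistinguishable as behaviors on the same inputs:
assert behaviour(lookup_table, range(4)) == behaviour(computed, range(4))
```

On this view, asking which of the two mechanisms "really" implements the behavior is exactly the question the pancomputational framing sets aside.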
The author then formalizes the concepts of abstraction layers, tasks, inference, and learning, using a proxy called "weakness" to estimate the sample efficiency of policies. This yields an objective upper bound on intelligent behavior, attained by using the weakness proxy to maximize the utility of an uninstantiated task across all possible vocabularies.
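A toy sketch of the weakness proxy, under the assumption that a policy can be modeled as the set of input-output pairs it permits and its weakness as the cardinality of that set: among policies consistent with the observed data, the weakest (most permissive) is preferred as a proxy for sample-efficient generalization. The representation and helper names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: policies as sets of permitted (input, output) pairs.

def weakness(policy):
    """Weakness as the cardinality of the policy's extension (assumption)."""
    return len(policy)

def consistent(policy, observations):
    """A policy is consistent if it permits everything observed."""
    return observations <= policy

def weakest_consistent(policies, observations):
    """Among consistent policies, prefer the weakest as a generalization proxy."""
    candidates = [p for p in policies if consistent(p, observations)]
    return max(candidates, key=weakness)

# Toy example: two policies fit the observed behavior {(0, 0), (1, 1)}.
observations = {(0, 0), (1, 1)}
narrow = {(0, 0), (1, 1)}                  # memorizes only the data
broad = {(0, 0), (1, 1), (2, 2), (3, 3)}   # identity on more inputs
assert weakest_consistent([narrow, broad], observations) == broad
```

The broader policy is selected because it constrains behavior less while still fitting the data, which is the intuition behind using weakness to estimate sample efficiency.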
The article concludes by discussing the implications of these results for understanding problems in AI safety and general intelligence.
Key insights from arxiv.org, by Michael Timo..., 04-12-2024
https://arxiv.org/pdf/2302.00843.pdf