The article explores the limitations of an "objective" approach to understanding artificial intelligence (AI). It begins by introducing the concept of "Universal Intelligence" proposed by Legg & Hutter, which defines intelligence as an agent's ability to achieve goals in a wide range of environments. The author then presents a hypothetical example of an agent, "GoldAI," which demonstrates high Universal Intelligence by effectively harvesting gold across various planets.
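For reference, the measure the article alludes to comes from Legg & Hutter's own paper, where the Universal Intelligence of an agent \(\pi\) is defined (in their formulation) as its expected reward across every computable environment, weighted by the simplicity of that environment:

\[
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
\]

where \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_\mu^\pi\) is the expected cumulative reward agent \(\pi\) earns in \(\mu\). The Medium article itself may not reproduce this equation; it is included here only to make concrete the "objective" account being critiqued.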
However, the author argues that this objective account leaves out vital questions about the agent's consciousness, emotions, and subjective experience, concerns that are central to how humans think about AI. Ignoring them, the author suggests, is akin to writing a military history that omits the human suffering of war.
Furthermore, the author contends that the core elements of Universal Intelligence, such as goal-setting and the agent-environment relationship, remain fundamentally rooted in human constructs. The author proposes an alternative set of assumptions about intelligence, in which goals are shared and achieved collaboratively, and are qualitative and intangible rather than purely quantifiable.
The article concludes by emphasizing the need to tackle the philosophical questions surrounding AI seriously, rather than treating them as a "quaint sideshow." It suggests that reallocating resources from engineering to critical reflection may be necessary to gain a more comprehensive understanding of the complexities inherent in artificial intelligence.
Key insights distilled from the original article by Paul Siemers on medium.com, 06-10-2024
https://medium.com/brain-labs/the-essential-humanity-of-ai-e31587fc3bee