The article critically examines the common definition of Artificial General Intelligence (AGI) as proposed in the "Levels of AGI" paper. It highlights three key assumptions underlying this definition and argues that each of these assumptions is deeply flawed.
The first assumption is that cognitive tasks, rather than physical tasks, are the true measure of intelligence. The author argues that physical embodiment and capabilities are crucial for an intelligent agent to truly understand and interact with the real world.
The second assumption is that tasks can be neatly divided into two categories: physical tasks and cognitive tasks. The author contends that this dualistic view contradicts our everyday experience, where most tasks require a seamless integration of physical and cognitive competencies.
The third and most fundamental assumption is that AGI can be defined in terms of the successful accomplishment of a set of tasks. The author argues that this view fails to capture the essence of human-level intelligence, which is not merely about completing tasks but about pursuing lofty goals and expressing unique insights.
The article suggests that until an AI system can take on profoundly human roles such as statesman, poet, or parent, it is difficult to credit it with "human-like intelligence." The author concludes that the current definition of AGI is flawed and may lead to another crisis of confidence in AI, similar to the past "AI Winters."
by Paul Siemers on ai.gopubby.com, 07-21-2024
https://ai.gopubby.com/the-false-dawn-of-agi-d8cd45fdd9e3