Pundits claim o3 is AGI. Is it really?
We should expect more premature "AGI" claims as these systems inch closer to the real thing; that's typical of any emerging technology.
What this does show is that no human-made benchmark is safe from an AI model’s progress. There’s nothing uniquely special about human cognition that can’t be replicated in software, given enough training.
The gap between AI and human cognitive ability is one of degree, not kind. With sufficient hardware and the right evolutionary pressures, reaching human-level AI isn't far off.