Artificial Intelligence (AI) hype is mainly associated with Large Language Models (LLMs), which became possible thanks to neural networks trained on very large datasets. Yet can these chatbots really be considered intelligent?
I recently heard OpenAI claim that its latest chatbot, o3, has progressed so far towards Artificial General Intelligence (AGI) that it can solve PhD-level problems. An ambitious, yet ridiculous, claim.
Although the term AGI is often used to describe a computing system that meets or surpasses human cognitive abilities across a broad range of tasks, no technical definition for it exists. As a result, there is no consensus on when AI tools might achieve AGI. Some say the moment has arrived; others say it is still far away.
Source: Nicola Jones (2025). How should we test AI for human-level intelligence? Nature 637, 775.
And since all of these tests, such as ARC-AGI, are based on question-answer pairs, it is worth noting that even OpenAI's o3 still fails plenty of questions that humans consider straightforward.
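To make the question-answer setup concrete, here is a minimal sketch of what such a benchmark item looks like. The structure mirrors the public ARC task format (a JSON object with "train" demonstration pairs and "test" pairs, where grids are 2-D lists of integers 0-9); the specific toy task and the solve() function are invented for illustration only:

```python
import json

# Hypothetical ARC-style task for illustration; real tasks are far harder.
# "train" pairs demonstrate the rule; "test" pairs are what gets scored.
task = json.loads("""
{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]}
  ]
}
""")

def solve(grid):
    """Toy solver (assumed): this example task just mirrors each row."""
    return [list(reversed(row)) for row in grid]

# Scoring is exact match: the predicted grid must equal the target
# cell for cell. Partial credit does not exist, which is one reason
# models still fail tasks humans find straightforward.
for pair in task["test"]:
    prediction = solve(pair["input"])
    print("correct" if prediction == pair["output"] else "wrong")
```

The point of the format is that a human can usually infer the rule from two or three demonstrations, while a model has to get the entire output grid exactly right to score at all.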
So no human-level intelligence any time soon, folks.