#agi

Replied in thread

(3/3)

"... the belief that AGI can be realized is harmful. If the power of technology is overestimated and human skills are underestimated, the result will in many cases be that we replace something that works well with something that is inferior.."

#RagnarFjelland, 2020

doi.org/10.1057/s41599-020-049

This is what's happening, e.g. governments thinking that replacing human judges with Trained MOLEs lets them cut costs *and* get more "rational" judgments. It does neither.

If we achieve #AGI, will we live in a utopia where #AI does all the heavy lifting and we can focus on the nice things in life?

I really like using AI as an assistance system, and I really like using the Internet. I once thought the Internet would lead us to a better future because we would be able to share knowledge, learn from each other, and develop understanding for one another: a utopian platform for free communication and democracy. OK, we got the Fediverse, Wikipedia, etc., but 1/x

Artificial Intelligence (AI) hype is mainly associated with Large Language Models (LLMs), which became possible thanks to neural networks trained on very large datasets. Yet can the chatbots built on them be considered intelligent?

I recently heard OpenAI claim that its latest chatbot, o3, has progressed so far toward Artificial General Intelligence (AGI) that it succeeds in solving PhD-level problems. An ambitious, yet ridiculous, claim.

Although the term AGI is often used to describe a computing system that meets or surpasses human cognitive abilities across a broad range of tasks, no technical definition for it exists. As a result, there is no consensus on when AI tools might achieve AGI. Some say the moment has arrived; others say it is still far away.

Source: Nicola Jones (2025). How should we test AI for human-level intelligence? Nature 637, 775.

And since all of these tests, such as ARC-AGI, are based on question–answer tasks, it is worth noting that even OpenAI's o3 still fails plenty of questions that humans consider straightforward.
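For intuition, here is a minimal Python sketch of what "failing a question" means on an ARC-AGI-style benchmark, assuming the public ARC JSON task format ("train" and "test" lists of input/output grids); the solve() function and file name are hypothetical placeholders, not OpenAI's or ARC's actual tooling. A predicted grid only counts if it matches the expected output exactly.

import json

def solve(input_grid):
    # Hypothetical placeholder for a model's prediction;
    # a real solver would infer the transformation from the task's training pairs.
    return input_grid

def score_task(path):
    # Exact-match scoring of an ARC-style task: grids are lists of lists of ints,
    # and a single wrong cell means the test item is not solved.
    with open(path) as f:
        task = json.load(f)
    correct = sum(
        solve(pair["input"]) == pair["output"]
        for pair in task["test"]
    )
    return correct, len(task["test"])

# Usage (hypothetical file name):
# solved, total = score_task("arc_task_example.json")
# print(f"{solved}/{total} test grids solved")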

So no human-level intelligence any time soon, folks :blobcatjustright:

#ai #aislop #science