
There will come a day when 50% of jobs are being done by AI, major decisions are being made by AI, we're all riding around in cars driven by AI, people are having romantic relationships with AI... and we'll STILL be debating whether what has been created is really AGI.

AGI will forever be the next threshold, then the next, then the next until one day we'll realize that we passed the line years before.



"Is this AGI"? doesn't seem like a useful question for precisely this reason - it's ill-defined and hard to prove or falsify. The pertinent questions are more along the lines of "what effect will this have on society", "what are the risks of this technology" etc.


Reminds me of a short story I read in which humans outsource more and more of their decision-making to AIs, so that even if there are no AGIs loose in the world, it’s unclear how much of the world is being run by them: https://solquy.substack.com/p/120722-nudge

I also think it’s funny how people rarely bring up the Turing test anymore. That used to be THE test brought up in mainstream discussion of AGI, and now it’s no longer relevant. Could be moving goalposts; could also just be that we think about AGI differently now.


GPT-4 doesn't pass the Turing test; it's frequently wrong and nonsensical in an inhuman way. But from the sound of it, I think this new "AGI" probably does, and it would be the real deal.


Teachers use websites to try to detect whether AI wrote an essay (and the sites often get it wrong, and the teachers believe them), so we've de facto passed it.


The Turing test is not about whether an AI sounds like a human some of the time; it's about whether it's possible to tell that an AI is an AI just by speaking with it.

The answer is definitely yes, though not through casual conversation: ask it weird logic problems, and it has tremendous trouble solving them, giving totally nonsensical, inhuman answers.


I'm not convinced. OpenAI specifically trained their models in a way that isn't trying to pass the Turing test. I suspect current models are more than capable of passing Turing tests. For example, I suspect most humans would give nonsense answers to many logic problems!


It's pretty inhuman in the ways it messes up. For example, try asking GPT-4 to write you a non-rhyming poem. It gives you a rhyming poem instead. Complain about the rhyming and ask it to try again, and it gives you another rhyming poem after apologizing for the inadvertent rhymes. It clearly understands what rhyming is and its apologies sound sincere, yet it's incapable of writing a poem that doesn't rhyme. That's pretty inhuman.

Also, the way and the contexts in which it gets logic puzzles wrong are pretty inhuman. First of all, it's capable of doing some pretty hard puzzles that would stump most people. Yet if you change the wording a bit so that the puzzle no longer appears in its training data, it's suddenly wrong. Humans are frequently wrong too, of course, but the way they're wrong is that they give vague solutions and muddle through an answer while forgetting important pieces. Contrast that with GPT-4, which will walk you through the solution piece by piece while confidently saying things that make no sense.
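
For anyone who wants to reproduce the poem experiment, here's a minimal sketch, assuming the openai Python SDK (v1.x) with an OPENAI_API_KEY in the environment; the model name and prompt wording are just illustrative:

  # Sketch of the non-rhyming-poem experiment described above.
  # Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
  from openai import OpenAI

  client = OpenAI()
  messages = [{"role": "user",
               "content": "Write a short poem that does not rhyme."}]

  first = client.chat.completions.create(model="gpt-4", messages=messages)
  poem = first.choices[0].message.content
  print(poem)

  # Point out the rhymes and ask for a rewrite; the claimed failure mode
  # is that the apology arrives along with yet another rhyming poem.
  messages += [
      {"role": "assistant", "content": poem},
      {"role": "user",
       "content": "That rhymes. Rewrite it so that no two lines rhyme."},
  ]
  second = client.chat.completions.create(model="gpt-4", messages=messages)
  print(second.choices[0].message.content)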


Agreed.

To me, GPT-4 is an AGI: it knows how to cook, write code, make songs, navigate international tax law, write business plans, etc.

Could it be more intelligent? Sure. Is it a capable general intelligence? 100%.


GPT-4 still makes plenty of mistakes when programming that reveal that it doesn’t fully understand what it’s doing. It’s very good, but it doesn’t reach the level of human intellect. Yet.

It is A and gets the G but fails somewhat on the I of AGI.


Humans also make mistakes, all the time.


Yes, but we expect an AGI to not make mistakes that a human wouldn’t make.

This is easier to see with AI art. The artwork is very impressive but if the hand has the wrong number of fingers or the lettering is hilariously wrong, there’s a tendency to dismiss it.

Nobody complains that DALL-E can’t produce artwork on par with da Vinci, because that’s not something we expect humans to do either.

For us to start considering these AIs “intelligent” they first need to nail what we consider “the basics”, no matter how hard those basics are for a machine.


The frog might be boiled slowly. One day we’ll be replacing parts of our brains with AI. Find it hard to remember names? We can fix that for $20/month plus some telemetry.


> we're all riding around in cars driven by AI, people are having romantic relationships with AI...

ASI (domain-specific superintelligence) and AGI (general intelligence) are different things. ASI already exists in multiple forms, AGI doesn't.


> AGI doesn't

AGI hasn’t been publicly demonstrated and made available to the masses… but it may exist secretly in one or more labs. It may even already be in use in the field under pseudonyms, informing decisions, etc.


> day when 50% of jobs are being done by AI

By OpenAI's definition, 50% is not enough to qualify as AGI; it has to be "nearly any economically valuable work".


It would be hard to find a single human who could handle nearly any/all economically valuable work. Getting good enough to get paid in one field is an achievement.


Not sure it has to replace plumbers to be AGI


Maybe they meant to say intellectual work/knowledge work.



