
I'm not following: if you think AGI is uncertain, shouldn't you actually be more surprised? Looking at it through a Bayesian lens, the lower your prior, the more (in absolute percentage points) you would need to adjust it based on new supporting evidence, no?
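
To make the mechanics concrete, here's a minimal Python sketch of the kind of update I mean. The likelihood ratio of 4.0 is a made-up illustrative value, not anything measured:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    # The likelihood ratio (4.0) is an assumed, purely illustrative value.
    def update(prior: float, likelihood_ratio: float = 4.0) -> float:
        odds = (prior / (1.0 - prior)) * likelihood_ratio
        return odds / (1.0 + odds)

    for prior in (0.1, 0.3, 0.5):
        print(f"prior={prior:.1f} -> posterior={update(prior):.2f}")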


If you think AGI is uncertain, then maybe it's just an improvement on a paradigm that is near its end state: amazing autocomplete.


Enlighten me then - what is this limit point of "amazing autocomplete"?


To clarify, I'm genuinely curious about this question. Is there some limit to autocomplete that falls short of continuing a prompt such as: "The following is a literate programming compendium on how to simulate the human brain in software ..."?


I don't have a good answer to your question; I was just making the point that if you think this is a step toward a dead end rather than toward AGI, your attitude regarding the step changes.


I understood that. What I meant (apologies if that was unclear) is that if you think we're getting close to a dead end, you should be more, rather than less, surprised by signs of significant further progress, no?

Continuing with the physical-movement metaphor: if I believe the train I'm on will stop at the next station, I'll be more surprised that we're still accelerating than the person next to me who isn't sure whether this is a local or an express train.

Generally speaking, the lower my prior probability of continued progress, the more I should be surprised by the lack of slowdown.
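
To put a number on "surprised": reading surprise as information-theoretic surprisal, -log2 p, the same observation of continued progress carries more bits of surprise the lower the prior you assigned to it. A minimal sketch (the priors 0.1/0.5/0.9 are made-up stand-ins for the passengers in the metaphor):

    import math

    # Surprisal of observing an event you assigned probability p: -log2(p) bits.
    # The lower the prior, the more bits of surprise the observation carries.
    def surprisal_bits(p: float) -> float:
        return -math.log2(p)

    # Assumed prior each observer put on "progress continues":
    for label, p in (("expects a dead end", 0.1), ("unsure", 0.5), ("expects progress", 0.9)):
        print(f"{label:>18}: prior={p:.1f}, surprise={surprisal_bits(p):.2f} bits")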



