> One thing it's not likely to be, is a neo-classical capitalist system based on the value of human labor.
I'm finding it difficult to believe this. For me, your comment is accurate (and very insightful), except that even a mostly vanilla continuation of the neoliberal capitalist system seems possible. I think we're literally talking about a "singularity", where by definition our fate no longer depends on our actions but on something we don't have the full capacity to understand and next to no capacity to influence. It would take a tremendous amount of evidence to claim anything about such an indeterminate system. Maybe 100 rich people will own all the AI and the rest will be fixing the bullshit AI doesn't even bother fixing, like roads, rusty farms, etc., similar to Kurt Vonnegut's first novel "Player Piano". Not that the world described in that novel is particularly neoliberal capitalist (I suppose it's a bit more "socialistic", whatever that means), but I don't think such a future can be ruled out.
My bias is that, of course, it's going to be a bleak future. When humanity loses all control, it seems unlikely to me that a system protecting the interests of individual or collective humans will emerge. So whether it's extinction, cyberpunk, techno-socialism, techno-capitalist libertarian anarchy, neoclassical capitalism... whatever it is, it will be something that protects the interests of something inhuman, far more so than the current system does. It goes without saying that I'm an extreme AI pessimist; I'm just making my biases clear. AGI -- while it's unclear whether it's technically feasible -- will be the death of humanity as we know it now, but perhaps something else humanity-like, something worse and more painful, will follow.
Pay attention to the whole sentence, especially the last part: "... based on the value of human labor."
It's not that I'm ruling out capitalism as the outcome. I'm simply ruling out the JOINT possibility of capitalism COMBINED WITH human labor remaining the base resource within it.
If robotics is going in the direction I expect, there will simply be no jobs left that humans can do more efficiently than machines (i.e., robots will match or exceed the robustness, flexibility, and cost efficiency of all biology-based life forms, through breakthroughs in nanotech or simply by using organic chemistry, DNA, etc. to build the robots).
Why pay even $1/day for a human to do a job when a robot can do it for $1/week?
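Just to make the arithmetic explicit (the dollar figures are purely illustrative, not sourced from anywhere):

```python
# Toy comparison of the hypothetical wage figures above; all numbers are
# made-up assumptions for illustration only.
human_cost_per_day = 1.00      # assumed human wage, $/day
robot_cost_per_week = 1.00     # assumed amortized robot cost, $/week
days_per_week = 7

human_cost_per_week = human_cost_per_day * days_per_week
ratio = human_cost_per_week / robot_cost_per_week
print(f"human: ${human_cost_per_week:.2f}/week vs robot: ${robot_cost_per_week:.2f}/week "
      f"-> robot is {ratio:.0f}x cheaper")
```

Even at a $1/day wage, the human is 7x more expensive at these assumed rates, and the gap only widens as robot costs fall.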
Also, such a capitalist system will almost certainly lead to AGIs becoming increasingly like a new life form, as capitalism between AGIs introduces a Darwinian selection pressure. That will make it hard even for the 100 richest people to retain permanent control.
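To illustrate the selection-pressure point, here's a toy sketch of my own (entirely made-up dynamics, not anything from the thread): if resources are allocated in proportion to how aggressively an AGI seeks them, and resources buy "offspring", the resource-seeking variants come to dominate whether or not anyone intended that.

```python
# Toy Darwinian selection between competing AGIs, reduced to a single
# trait per agent: resource-seeking intensity in [0, 1].
import random

POPULATION = 1000
GENERATIONS = 30

agents = [random.random() for _ in range(POPULATION)]

for _ in range(GENERATIONS):
    # Resource share (and hence copies in the next generation) is
    # proportional to seeking intensity.
    agents = random.choices(agents, weights=agents, k=POPULATION)
    # Small mutation so the trait can keep drifting.
    agents = [min(1.0, max(0.0, a + random.gauss(0, 0.02))) for a in agents]

mean = sum(agents) / POPULATION
print(f"mean seeking intensity after {GENERATIONS} generations: {mean:.2f}")
# Starts near 0.50 and drifts toward 1.0: the grabbiest variants win.
```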
IF humanity is to survive (for at least a few thousand more years, not just the next 100), we need some way to ensure alignment. And to do that, we have to make sure that AGIs that optimize for resource-control-seeking behaviours do not have an advantage over those that don't. We may even have to define some level of sophistication beyond which further development is completely halted.
At least until we find ways for humans to merge with them in a way that allows us (at least some of us) to retain our humanity.