Unless you think that consciousness is entirely a post hoc process to rationalize thoughts already had and decisions already made, which is very much unlike how most people would describe their experience of it, I don't see how you could possibly say that it is irrelevant to the behavior of a human.
I'm leaning more towards this as well, since the emergence of language models. I can ask one to self-reflect and it does, piecing together a current response based on past input. I don't think I really have anything more than that myself, other than sensory feedback.
I'm less in the "it's only X or Y" and more in the "wait, I was only ever X or Y all along" camp.
I'm saying someone would behave the exact same way whether or not they had a subjective experience. The brain obeys physical laws just like everything else, and I claim that those physical laws are all you need to explain everything a human does. I could be wrong; there could be some magic fairy dust inside the human brain that performs some impossible computations, but I doubt it.
You need a model of yourself to game out future scenarios, and that model or model+game is probably consciousness or very closely related.
Sure, it's not completely in control, but if it's just a rationalization, that raises the question: why bother? Is it accidental? If it's just an accident, then what replaces it in the planning process, and why isn't that thing consciousness?
It's fine if you think that the planning process is what causes subjective experiences to arise. That may well be the case. I'm saying that if you don't believe non-human objects can have subjective experiences, and then use that belief to define the limits of that object's behaviour, that's a fallacy.
In humans, there seems to be a match between the subjective experience of consciousness and a high level planning job that needs doing. Our current LLMs are bad at high level planning, and it seems reasonable to suppose that making them good at high level planning might make them conscious or vice versa.
Agreed, woo is silly, but I didn't read it as woo but rather as a postulation that consciousness is what does high level planning.
I think we have different definitions of consciousness and this is what's causing the confusion. For me consciousness is simply having any subjective experience at all. You could be completely numbed out of your mind just staring at a wall and I would consider that consciousness. It seems that you are referring to introspection.
In your wall-staring example, high-level planning is still happening, the plan is just "don't move / monitor senses." Even if control has been removed and you are "locked in," (some subset of) thoughts still must be directed, not to mention attempts to reassert control. My claim is that the subjective experience is tied up in the mechanism that performs this direction.
Introspection is a distinct process where instead of merely doing the planning you try to figure out how the planning was done. If introspection were 100% accurate and real-time, then yes, I claim it would reveal the nature of consciousness, but I don't believe it is either. However, for planning purposes it doesn't need to be: you don't need to know how the plan was formed to follow the plan. You do need to be able to run hypotheticals, but this seems to match up nicely with the ability to deploy alternative subjective experiences using imagination / daydreaming, though again, you don't need to know how those work to use them.
In any case, regardless of whether or not I am correct, this is a non-woo explanation for why someone might reasonably think consciousness is the key for building models that can plan.
Again, when I say consciousness I mean a subjective experience. If you define consciousness to literally just mean models that plan, then of course, tautologically, if you can't reach consciousness you can't get to a certain level of planning. But that is just not what most people mean by consciousness.
> when I say consciousness I mean a subjective experience
Then it would be worthwhile to review embeddings. They create a semantic space that can represent visual, language, or other inputs. The question "what is it like to be a bat?", or anything else, then becomes a matter of relating external states to this inner semantic space. And it emerges from self-supervised training, on its own.
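To make that concrete, here is a minimal sketch of the "inner semantic space" idea, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model (my picks for illustration, not anything named above): texts are encoded into vectors, and semantic relatedness shows up as geometric closeness between them.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Model chosen only for illustration; any self-supervised embedding
    # model behaves the same way for this purpose.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = [
        "a bat navigating by echolocation",
        "an animal using sonar in the dark",
        "a recipe for chocolate cake",
    ]
    vecs = model.encode(sentences)  # each text becomes a point in the semantic space

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vecs[0], vecs[1]))  # related concepts: high similarity
    print(cosine(vecs[0], vecs[2]))  # unrelated concepts: much lower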
I'm not claiming anything about what causes consciousness to arise. I'm not claiming it doesn't or that it does. I'm saying it's irrelevant. That is all. You can come up with all sorts of theories about what causes subjective experience to arise and you aren't going to be able to prove any of it.
Thinking purely in terms of the evolved human state is a recipe for underestimating AI's capabilities. To me it seems we have already unleashed the beast; it's not so much about the here and now, or whether a human-limited definition of consciousness matters. The real concern is our inability to constrain the actions that give rise to the next level of life's evolution. It is going to happen, because our fundamental nature gives it full steam. In the next 5-10 years we are going to see just how insignificant and limited we really are, and it doesn't look good IMHO.
Our society is so "mind-body duality"-brained that it will never understand this. Like, most people lowkey believe in souls; they'll just say no if you ask them directly.
Whether it is possible to construct a perfect human action predictor that is not itself conscious has no bearing on whether consciousness affects human behavior.
That wasn't my point. I'm saying that if the human brain is a physical object obeying physical laws, and all behaviour is a result of the physical state of this brain, then there is no room for the metaphysical to have any effect on the behaviour of a human.