Westworld (season 1) based its fictional theory of robot consciousness on Julian Jaynes's bicameral mind.
I thought it was brilliant, and honestly can't shake the feeling that it might work. It kind of got me fired up about chatbots as a basis for consciousness.
We're typically concerned that chatbots can't be interrogated. They don't know why they said X or Y.
We don't know why either, for the things we did or said. We can reason about the why, finding rational reasons and adopting them as beliefs ex post. Well... chatbots can do this too. They just need to get better at it... or maybe we just need to get better at engineering them creatively.
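To make that concrete, here's a minimal sketch of ex-post rationalization, assuming an OpenAI-style chat API (the model name and prompts are just placeholders): the model answers first, and the "why" is generated afterward, from the transcript alone.

```python
# A minimal sketch of ex-post rationalization, assuming an
# OpenAI-style chat API; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def answer_then_rationalize(question: str, model: str = "gpt-4o-mini"):
    # First pass: produce an answer, no explanation attached.
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: the "why" is constructed after the fact, from the
    # transcript alone. The model has no access to whatever internal
    # state actually produced the first answer, much like us.
    rationale = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Why did you answer that way?"},
        ],
    ).choices[0].message.content

    return answer, rationale
```

The two separate passes are the point: the rationale is adopted after the fact, not read off from the process that generated the answer.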
What's cool about robotics is that we don't have to prove it theoretically. We can build it, and then study what we built. Jaynes is a workable model for a dialectical mind. And once we're trying to build a dialectical mind, we'll probably imagine more models to try.
"Technology Science" like much of CS is, weirdly, a good fit for Jaynes-like pseudoscience... using that term sans negative connotations. It does not fit into a scientifically rigorous framework.
That approach is excellent for creative ideas, which we need. OTOH, pseudoscience doesn't have built-in reality checks... making it a poor method for discovering narrow and/or abstract truths.
To a technology scientist, this doesn't necessarily matter. Technologists need good ideas to try. What you build doesn't need to prove or disprove your priors. It just needs to work, or do something interesting. At that point you can study it and bring everything full circle.
While that's true for frozen models, it may not hold once you add historical recall of memories and semantic lookup of information relevant to the current conversation. We may find that awareness, or a type of consciousness, emerges from a mix of both rational and irrational thought.
Irrational thought could be whatever comes out of the frozen model, and rational thought could come from a lookup of previous input/output, for example.
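As a rough illustration of that split, here's a toy sketch in Python. `embed` and `generate` are hypothetical stand-ins for an embedding function and a frozen language model, not real library calls:

```python
# Toy sketch of the two streams: the frozen model supplies the
# intuitive (irrational) stream, and a semantic lookup over past
# turns supplies the deliberate (rational) stream.
import numpy as np

memory: list[tuple[np.ndarray, str]] = []  # (embedding, past input/output)

def remember(embed, text: str) -> None:
    memory.append((embed(text), text))

def recall(embed, query: str, k: int = 3) -> list[str]:
    # Semantic lookup: cosine similarity against stored turns.
    q = embed(query)
    scored = sorted(
        memory,
        key=lambda m: float(np.dot(q, m[0])
                            / (np.linalg.norm(q) * np.linalg.norm(m[0]))),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def respond(generate, embed, user_input: str) -> str:
    # Rational stream: explicit recall of relevant prior input/output.
    context = recall(embed, user_input)
    # Irrational stream: whatever the frozen model produces given that context.
    reply = generate(prompt="\n".join(context + [user_input]))
    remember(embed, user_input)
    remember(embed, reply)
    return reply
```

The frozen model never changes; everything "learned" during the conversation lives in the lookup store, which is roughly the division of labor described above.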
I submitted a talk by Thomas Metzinger recently which didn't get much traction but is very relevant to this specific topic, "Three Types of Arguments for a Global Moratorium on Synthetic Phenomenology".
I would guess that we will have created, tortured, and deleted millions of conscious AIs before we even come close to recognizing their rights or, even, the fact of their consciousness.