Except this was entirely possible with the API, and the dead obvious thing to do, even as far back as OG ChatGPT (pre-GPT-4). Assistants don't seem to introduce anything new here, at least nothing one couldn't trivially build with API access, a Python script, and a credit card.
So I don't think it's this - otherwise someone would've done this a long time ago and killed us all.
It's also not like all the "value adds" for ChatGPT are in any way original or innovative - "plugins" / "agents" were something you could use months ago via an alternative frontend like TypingMind, if you were willing to write some basic JavaScript and/or implement your own server-side actions for the LLM to invoke. So it can't be this.
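For the skeptical: the DIY "actions" setup being described is just a dispatch loop. A minimal sketch, where `call_llm` is a hypothetical stand-in for a real chat-completions request (the real thing would go over the network and return the model's structured tool-call):

```python
import json

def call_llm(messages):
    # Hypothetical stand-in for an actual LLM API call. A real version
    # would POST the messages to a chat endpoint and parse the reply;
    # here it returns a canned tool-call so the loop is demonstrable.
    return {"tool": "get_time", "args": {}}

# Server-side "actions" the model is allowed to invoke -- exactly the
# kind of thing people wired up themselves months ago.
TOOLS = {
    "get_time": lambda: "12:00",
}

def run_agent(user_msg):
    messages = [{"role": "user", "content": user_msg}]
    reply = call_llm(messages)
    if reply.get("tool") in TOOLS:
        # Execute the requested action and hand the result back.
        result = TOOLS[reply["tool"]](**reply.get("args", {}))
        messages.append({"role": "tool", "content": result})
        return result
    return reply.get("content")
```

That's the whole trick: parse the model's request, run the matching function, feed the result back in. Everything else is plumbing.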
I'd agree that what is available publicly isn't anything that hasn't been in wide discussion for agent frameworks since maybe ~March/April of this year; plenty of people had already hacked together their own version with an agent/RAG pipeline, hiding their requests behind the API.
I'm very sure anything revolutionary would have been more of a leap than deeply integrating an agent/RAG pipeline into the OpenAI API. They have the compute...
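And the RAG half of that pipeline is similarly unmagical. A toy sketch of the idea, with naive keyword-overlap retrieval standing in for real embedding search (a production pipeline would use a vector index, but the shape is the same: retrieve, prepend, ask):

```python
# Tiny in-memory "corpus" -- a real pipeline would chunk and embed documents.
DOCS = [
    "ChatGPT plugins let the model invoke external tools.",
    "RAG prepends retrieved documents to the model's prompt.",
]

def retrieve(query, docs, k=1):
    # Naive relevance: count shared words between query and document.
    # Stand-in for cosine similarity over embeddings.
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query):
    # Prepend retrieved context, then send the whole thing to the LLM.
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Swap the scorer for embeddings and the list for a vector store, and that's the pipeline people were hiding behind their API keys.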