OpenAI is so far ahead of the competition. They're able to implement anything they like from competitors, and then some.
Claude really needs a sandbox to execute code.
If Anthropic were smart about it, they'd offer developers ("advanced users") containers that implement sandboxes, which we could pull to our local machines and which would then connect to Claude, so it can execute code on the user's machine (inside the containers). That would free up resources on Anthropic's side and reduce their security concerns. It would be up to us whether to wrap it in a VM, and if we're comfortable with it, we could even let it fetch things from the internet. They should open source it, of course.
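To sketch the local-execution half of that idea (this is my own toy illustration, not anything Anthropic ships; a real version would put a container or VM boundary around the process instead of a bare subprocess):

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> tuple[int, str, str]:
    """Run model-generated Python in a separate process with a timeout.

    This only gives process isolation plus a time limit. The container
    version proposed above would add a filesystem/network boundary on top.
    """
    # Write the generated code to a temp file so the child process can run it.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env vars and user site-packages
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.returncode, proc.stdout, proc.stderr
    finally:
        os.unlink(path)

rc, out, err = run_untrusted("print(2 + 2)")
```

The interesting part is the loop on top of this: the model generates code, the host runs it, and stderr gets fed back so the model can fix its own errors, exactly the self-correcting behavior described below.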
In the meantime, Google still dabbles with its odd closed system, where you can't even download your complete history as a JSON file. Maybe Takeout allows this, but I wouldn't know. They don't understand that this is different from their other services, where they (used to) gatekeep all the gathered data.
This is an odd comment, because you mention Claude and Google, both of which have had similar or adjacent features for a while. OpenAI is actually the one on the defensive here.
1. Claude has “artifacts” which are documents or interactive widgets that live next to a chat.
2. Claude can also already run code and animations in Artifacts; they execute in a browser sandbox on your own machine.
3. Gemini/Google has a ton of similar features. For example, you can import/export Google Docs/Sheets/etc. in a Gemini chat. You can also open Gemini inside a doc and have it manipulate the document.
4. Also, you can use Takeout. It's weird to criticize a feature as missing and then postulate that it exists exactly where you'd expect.
If anything, this is OpenAI being defensive, because they realize that models are a feature, not a product, and that chat isn't everything. Google has the ability and the roadmap to stick Gemini into email clients, web search, collaborative documents, IDEs, smartphone OS APIs, browsers, smart home speakers, etc. And Anthropic released Artifacts, which has received a ton of praise for its usability in exactly the use case OpenAI is now targeting.
Which has interesting consequences: I watched it execute code it had generated for me, then fix the errors in that code by itself, twice, until it gave me a working solution.
(Note that I am no longer a Plus user)
---
Claude: I apologize, but I don't have the ability to execute code or generate images directly. I'm an AI language model designed to provide information and assist with code writing, but I can't run programs or create actual files on a computer.
---
Gemini: Unfortunately, I cannot directly execute Python code within this text-based environment. However, I can guide you on how to execute it yourself.
---
> 4. Also you can use takeout
I just checked and wasn't able to export my Gemini interactions. There are some irrelevant things like "start timer 5 minutes" that I triggered on my phone, completely unrelated to my Gemini chats. takeout.google.com has no Gemini section.