As someone who grew up with shared hosting, VPSes, and eventually K8s, I never really got Cloudflare's offering (apart from CDN/DDoS/DNS). I'm not sure if it's their positioning or if I've just never had the problems they're trying to solve, but it doesn't click for me. Durable Objects, Wrangler, D1, some custom Node.js API... it's all kind of opaque to me how it solves any problem better than just running Postgres, Redis, etc. on top of K8s or something like that.
The Workers execute from the same colos as the CDN, which are regionally distributed. They respond fast because they are physically close to the visitor, and Cloudflare limits which runtimes they support to only very highly optimized ones.
And for my money, any platform that doesn't require K8s is superior to any that does.
Same for me: the things you mentioned either felt like stuff for the edge or like a convoluted hobby project, with maybe some CV padding along the way. Perhaps you need to buy into the full ecosystem to understand the value.
Cloudflare seems to exclusively offer "serverless" products, which rules out applications like Postgres (or any other "standard" database technology).
Why don't they just offer managed Postgres? Because their infrastructure is kept as homogenized as possible, they don't host arbitrary services or software. The only customizable code made available to customers comes in forms like Workers, which are deliberately constrained (in execution time, resource usage, etc.) to, again, keep all their infrastructure homogenized.
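To make "deliberately constrained" concrete: a Worker is basically a single stateless fetch handler, nothing more. A minimal sketch (the route and responses are made up for illustration):

```ts
// Minimal Cloudflare Worker: one stateless fetch handler, no server
// process, no local disk, CPU-time limits enforced by the platform.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response("hello from the edge");
    }
    // Anything heavier (state, queries) gets delegated to the
    // supplementary products: KV, D1, Durable Objects, Queues.
    return new Response("not found", { status: 404 });
  },
};
```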
Most of their other products exist to provide supplementary capabilities to Workers.
For example, their Durable Objects are comparable (in terms of technical approach, the problems they solve, and the trade-offs) to AWS's DynamoDB or Azure's Cosmos DB. These products are distributed by nature and work very well for certain kinds of projects and not so well for others. They're also fully in line with the generally homogeneous infrastructure that Cloudflare is engineered to run on.
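A rough sketch of the programming model (the class name and key are illustrative; `DurableObjectState` comes from Cloudflare's workers-types): each object is a single-threaded instance that owns its own storage, and the platform routes every request for a given object ID to that one instance.

```ts
// Sketch of a Durable Object: a small class owning transactional
// storage; Cloudflare routes all requests for a given object ID to
// the single live instance of that object, wherever it was placed.
export class Counter {
  constructor(private state: DurableObjectState) {}

  async fetch(_request: Request): Promise<Response> {
    let count = (await this.state.storage.get<number>("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}
```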
In summary, Cloudflare has essentially homogeneous infrastructure globally and is able to make its extensive edge infrastructure available to customers for custom applications by constraining it to "serverless" offerings. For customers who can work within the trade-offs of these serverless products, it's an appealing platform.
It's just marketing bullshit.
Make no mistake, the people using those things don't understand much more than you do; they are just going after shiny new toys, because that's much easier than building something solid that lasts and is cost-effective.
> GIDs are not checked for authorization when doing the lookup - they are meant to be generated above the authorization layer, and to be consumed above the authorization layer
Then the problem in this post boils down to applying the authorization layer inside every tool call, just like you do in controllers. Seems obvious?
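Something like this, with a hypothetical lookup and an in-memory store standing in for the real backend:

```ts
interface User { id: string; }
interface Resource { ownerId: string; data: unknown; }

// Hypothetical in-memory store standing in for whatever backs the GID lookup.
const store = new Map<string, Resource>([
  ["gid-1", { ownerId: "alice", data: { note: "hello" } }],
]);

async function lookupByGid(gid: string): Promise<Resource | undefined> {
  return store.get(gid);
}

// The tool call enforces authorization itself, exactly like a controller
// would, instead of trusting that the GID was vetted upstream.
async function getResourceTool(user: User, gid: string): Promise<Resource> {
  const resource = await lookupByGid(gid);
  // Same error for "missing" and "not yours" avoids leaking existence.
  if (resource === undefined || resource.ownerId !== user.id) {
    throw new Error("not found");
  }
  return resource;
}
```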
I've had no success using Antigravity, which is a shame because the ideas are promising, but the execution so far is underwhelming. I haven't gotten past an initial planning doc, which is usually aborted due to model-provider overload or rate limiting.
Give it a try now; the launch-day issues are gone.
If anyone uses Windsurf: Antigravity is similar, but the way they have implemented the walkthrough and implementation plan looks good. It tells the user what the model is going to do, and the user can add inline comments if they want to change something.
It's better than at launch, but I still get random model-response errors in Antigravity. It has potential, but Google really needs to work on the reliability.
It's also bizarre how they force everyone onto the "free" rate limits, even those paying for Google AI subscriptions.
Yeah, I’ve used variations of the “get frontier models to cross-check and refine each other’s work” pattern for years now, and it really is the path to the best outcomes in situations where you would otherwise hit a wall or miss important details.
It’s my approach in legal work as well. Claude formulates its draft, then prompts Codex and Gemini for theirs. Claude then recommends edits to its own draft based on the others’. Gemini’s plan is almost always the worst, but even it frequently has at least one good point to make.
If you're not already doing that, you can wire up a subagent that invokes Codex in non-interactive mode. Very handy; I run Gemini CLI and Codex subagents in parallel to validate plans or implementations, roughly as sketched below.
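A rough sketch of that wiring in Node, assuming `codex exec` and `gemini -p` are the non-interactive entry points of the two CLIs; check your installed versions' flags before relying on this:

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Ask both CLIs to review the same plan in parallel.
// The exact flags ("exec", "-p") are assumptions about the Codex and
// Gemini CLIs; adjust to whatever your versions actually accept.
async function crossCheck(plan: string): Promise<void> {
  const prompt = `Review this implementation plan and list any flaws:\n${plan}`;
  const [codex, gemini] = await Promise.all([
    run("codex", ["exec", prompt]),
    run("gemini", ["-p", prompt]),
  ]);
  console.log("--- codex ---\n" + codex.stdout);
  console.log("--- gemini ---\n" + gemini.stdout);
}

crossCheck("1. Add auth middleware. 2. Migrate sessions to Redis.");
```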
I was doing this, but I got worried I would lose touch with my critical thinking (or really just thinking, for that matter), since it was too easy to just copy-paste and delegate the thinking to The Oracle.
That's something I'd like to explore more. It's one of the reasons I created "trusted roots": so I can open new worktrees and open Claude in them, all in one step, without any confirmation.
If you want to suggest anything specific, feel free to open an issue and we can explore it more.
I feel the same way about OpenAI's new Responses API. Under the guise of DX, they're marketing a new default: we hold your state and sell it back to you.
OpenAI is tedious to work with. It took me a solid day of fooling around before I realized the completions API and the chat completions API are two entirely different APIs. Then you have the Responses API, which is a third thing.
The irony is that GPT-4 has no clue which approach is correct. Give it the same prompt three times and you'll get solutions that use each of these, with wildly different footprints, be it via function calls or system prompts, schema or no schema, etc.
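For reference, here are the two current request shapes in the official Node SDK, which a model will happily mix up; a minimal sketch:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Shape 1: the chat completions API, a list of role-tagged messages.
const chat = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Say hello." }],
});
console.log(chat.choices[0].message.content);

// Shape 2: the responses API, a single `input`; state can be held
// server-side and chained via previous_response_id.
const response = await client.responses.create({
  model: "gpt-4o",
  input: "Say hello.",
});
console.log(response.output_text);
```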
The European Commission has recognized the Swiss Data Protection Act as equivalent to the GDPR. This allows data to continue to flow freely between Switzerland and the EU.