
Personally I wouldn't let AI influence this decision at all.

Today's AI is built on human-made content, and if we want "more" AI then we will need more human-made stuff. So it's a moot point. Unless you are OK with AI causing a plateau in human progress, don't let it get in the way of you (a human) making progress.

That said, I cannot really comment on your first or third blockers. I have the exact same problems.


To add another angle to the "run it in Docker" comments (which are right), do you not get a fear response when you see Claude asking to run `rm` commands? I get a shot of adrenaline whenever I see the "run command?" prompt show up with an `rm` in there. Clearly this person clicked the "yes, allow any rm commands" button upon seeing that, which is unthinkable to me.

Or maybe it's just fake. It's probably easy Reddit clout to post this kind of thing.


Actually I think the name is apt. It's artificial. It's like how an "artificial banana" isn't actually a banana. It doesn't have to be real thinking or real learning, it just has to look intelligent (which it does).

I'll be honest, Pulumi is pretty cool, but I'm a little worried by how high up the stack it is. I wonder if the same thing won't happen to them that's happening to CDKTF here.

Terraform is ugly, but it works well enough for me and seems ingrained enough to be resilient to this kind of thing (i.e. I'd bet the community would pick it up; I wish I could say I'm part of that community, but I can't say I use it quite that often).


> I wonder if the same thing won't happen to them that's happening to CDKTF here.

This is clearly a business decision rather than a technical one.

Pulumi is meant to be semi-automated (in generating the bridges), so it is perhaps slightly better off in terms of maintenance.


I wonder about APNs and Apple Business Manager. I've heard from people seeing weird stuff happening with those products, but I don't see it in the report here.

> This type of code error is prevented by languages with strong type systems.

True, as long as you don't call unwrap!


That's a different kind of error. And even then unwrap is opt-in whereas this is opt-out if you're lucky.

Kind of funny that we get something showing the benefits of Rust so soon after everyone was ragging on about unwrap anyway!


Very timely, as I just recently ended up with a URL query string so big that CloudFront rejected the request before it even hit my server. Ended up switching that endpoint to POST. Would've liked QUERY for that!
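For the curious: QUERY is just another method token on the wire, so HTTP client libraries that let you set an arbitrary method can already emit it; it's servers and middleboxes that need to catch up. A minimal sketch with Python's http.client, using a made-up host and payload:

    # Hypothetical example: sends the draft HTTP QUERY method with a JSON body.
    # Whether the server (or anything in between) accepts it is another matter.
    import http.client
    import json

    conn = http.client.HTTPSConnection("api.example.com")
    body = json.dumps({"ids": list(range(5000)), "fields": ["name", "price"]})
    conn.request(
        "QUERY",            # http.client passes custom method tokens through as-is
        "/products",
        body=body,
        headers={"Content-Type": "application/json"},
    )
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])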


I have come across systems that use GET but with a payload like POST.

This allows the GET to bypass the 4k URL limit.

It's not a common pattern, and QUERY is a nice way to differentiate it (and, I suspect, will be more compatible with middleware).

I have a suspicion that quite a few servers support this pattern (as does my own) but not many programmers are aware of it, so it's very infrequently used.
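For illustration, the client side of that pattern can look something like the sketch below (hypothetical endpoint and payload); whether the body survives whatever proxies and CDNs sit in between is the real compatibility question:

    # GET with a JSON body, which requests will happily send even though
    # intermediaries may ignore or strip it. Endpoint and payload are made up.
    import requests

    query = {"filter": {"status": "active"}, "ids": list(range(2000))}
    resp = requests.request(
        "GET",
        "https://search.example.com/items",
        json=query,
        timeout=10,
    )
    print(resp.status_code)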


Sending a GET request with a body is just asking for all sorts of weird caching and processing issues.


I get that the GP’s suggestion is unconventional, but I don’t see why it would cause caching issues.

If you’re sending over TLS (and there’s little reason why you shouldn’t these days) then you can limit these caching issues to the user agent and the infra you host.

Caching is also generally managed via HTTP headers, and you also have control over them.

Processing might be a bigger issue, but again, it’s only your hosting infrastructure you need to be concerned about, and you have ownership over that.

I’d imagine using this hack would make debugging harder. Likewise for using any off-the-shelf frameworks that expect things to conform to a Swagger/OpenAPI definition.

Supplementing query strings with HTTP headers might be a more reliable interim hack. But there’s definitely not a perfect solution here.


To be clear, it's less of a "suggestion" and more of a report of something I've come across in the wild.

And as much as it may disregard the RFC, that's not a convincing argument for the customer who is looking to interact with a specific server that requires it.


Caches in web middleware like Apache or nginx ignore the GET request body by default, which may lead to bugs and security vulnerabilities.
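To make the failure mode concrete, here's a toy sketch (not any real proxy's code) of a cache keyed the way HTTP caches treat GET, i.e. on method and URL only:

    # Toy cache that ignores the GET body, as HTTP caches are entitled to do.
    cache = {}

    def cached_get(url, body, fetch):
        key = ("GET", url)  # the body is not part of the cache key
        if key not in cache:
            cache[key] = fetch(url, body)
        return cache[key]

    fetch = lambda url, body: f"results for {body}"
    cached_get("/search", '{"user": "alice"}', fetch)
    print(cached_get("/search", '{"user": "bob"}', fetch))
    # -> results for {"user": "alice"}  (bob is served alice's cached response)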


But as I said, you control that infra.

I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

If you can’t trust them to do that little, then you’re fucked regardless of whether you decide to send payloads as GET bodies.

And there isn’t any good reason not to contract pen testers to check over everything afterwards.


> I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

Exactly, and the correct way to setup GET requests is to ignore their bodies for caching purposes because they aren't expected to exist: "content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack" (RFC 9110)

> And there isn’t any good reason not to contract pen testers to check over everything afterwards.

I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.


> Exactly, and the correct way to setup GET requests is to ignore their bodies for caching purposes because they aren't expected to exist

No. The correct way to set up this infra is the way that works for a particular problem while still being secure.

If you’re so inflexible as an engineer that you cannot set up caching correctly for a specific edge case because it breaks your preferred assumptions, then you’re not a very good engineer.

> and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack"

Once again, you have control over the implementations you use in your infra.

Also, it’s not a request smuggling attack if the request is supposed to contain a payload in the body.

> I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.

I wouldn’t be so sure. I’ve worked with a plethora of different infosec folk, from those who mandate that PostgreSQL use non-standard ports out of strict compliance with NIST, even for low-risk reports, to others who have been fine with some pretty massive deviations from traditionally recommended best practices.

The good infosec guys, and good platform engineers too, don’t look at things in black and white like you are. They build up a risk assessment and judge each deviation on its own merit. Thus GET body payloads might make sense in some specific scenarios.

This doesn’t mean that everyone should do it, nor that it’s a good idea outside of those niche circumstances. But it does mean that you shouldn’t hold on to these rigid rules like gospel truths. Sometimes the most pragmatic solution is the unconventional one.

That all said, I can’t think of any specific circumstance where you’d want to do this kind of hack. But that doesn’t mean that reasonable circumstances would never exist.


I work as an SRE and would fight tooth and nail against this. Not because I can’t do it, but because it’s a terrible idea.

For one, you’re wrong about TLS meaning only your infra and the client matter. Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic. The one I saw was Bluecoat; no idea if it follows your expected out-of-spec behavior or not.

For two, this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement. If you move to AWS, do ELBs support this? If security wants you to use Envoy for a service mesh, is it going to support this? I don’t pick all the tools we use, so there’s a good chance corporate mandates something incompatible with this.

You would need very good answers to why this is the only solution and is a mandatory feature. Why can’t we cache server side, or implement our own caching in the front end for POST requests? I can’t think of any situations where I would rather maintain what is practically a very similar fork of HTTP than implement my own caching.


> Not because I can’t do it, but because it’s a terrible idea.

To be clear, I'm not advocating it as a solution either. I'm just saying all the arguments being made for why this wouldn't work are solvable. Just like you've said there that it's doable.

> Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic.

I did actually consider this problem too but I've not seen this practice in a number of years now. Though that might be more luck on my part than a change in industry trends.

> this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement.

I would imagine if you were forced into a position where you'd need to do this, you'd be able to address those underlying limitations when you come to the stage that you're re-implementing parts of the wider application.

> If you move to AWS, do ELBs support this?

Yes they do. I've actually had to solve similar problems quite a few times in AWS over the years when working on broadcast systems, and later, medical systems: UDP protocols, non-standard HTTP traffic, client certificates, etc. Usually, the answer is an NLB rather than ALB.

> You would need very good answers to why this is the only solution and is a mandatory feature.

Indeed


There is no secure notion of "correctly" that goes directly against both specs and de facto standards. I am struggling to even imagine how one could build an L7 balancer that accounts for the possibility that someone goes against the HTTP spec and still gets their request served securely and in a timely manner. I personally don't even know which L7 balancer my company uses or how it would cache GET requests with bodies, because I don't have to waste time on such things.

Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you wanted to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation. Just don't make your life harder for no reason.


> There is no secure notion of "correctly" that goes both directly against specs and de facto standards.

That's clearly not true. You're now just exaggerating to make a point.

> I am struggling to even imagine how one could make an L7 balancer that should take into account possibilities that someone would go against HTTP spec and still get their request served securely and timely

You don't need application-layer support to load balance HTTP traffic securely and in a timely manner.

> Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you want to run PostgreSQL on a port under 1024 now that would be security vulnerability as it requires root access.

I didn't say PostgreSQL's listening port was a standard. I was using that as an example to show the range of different risk appetites I've worked to.

Though I would argue that PostgreSQL has a de facto standard port number. And thus, by your own reasoning, running that on a different port would be "insecure", which is clearly BS (as in this rebuttal). Hence why I called your "de facto" comment an exaggeration.

> There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation.

...but we are talking about this as a theoretical, unavoidable technical limitation. At no point was anyone suggesting this should be the normal way to send GET requests.

Hence why I said you're looking at this far too black and white.

My point was just that it is technically possible when you said it wasn't. But "technically possible" != "sensible idea under normal conditions"


> I have come across systems that use GET but with a payload like POST.

I think that violates the HTTP spec. RFC 9110 is very clear that content sent in a GET request cannot be used.

Even if both clients and servers are somehow implemented to ignore HTTP specs and still send and receive content in GET requests, the RFC specs are very clear that participants in HTTP connections, such as proxies, are not aware of this abuse and can and often do strip request bodies. These are not hypotheticals.


Elasticsearch comes to mind.[0]

The docs state that if the query is in the URL parameters, that will be used. I remember that a few years back it wasn't as easy: you HAD to send the query in the GET request's body. (Or it could have been that I had monster queries that didn't fit within the URL character limits.)

0: https://www.elastic.co/docs/api/doc/elasticsearch/operation/...
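Roughly what that looks like against the _search endpoint (placeholder host, index, and query; POST against the same endpoint works too these days):

    # Elasticsearch-style search: the query goes in a JSON body even on GET.
    import requests

    query = {"query": {"match": {"title": "http query method"}}}
    resp = requests.request(
        "GET",
        "http://localhost:9200/articles/_search",
        json=query,
        timeout=10,
    )
    print(resp.json()["hits"]["total"])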


> you HAD to send the query in the GET requests body.

I remember this pain, circa 2021 perhaps?


Probably closer to 2019. Maybe the optionality is a relatively new feature then.


I think GraphQL has produced some serious shenanigans as a byproduct.

"Your GraphQL HTTP server must handle the HTTP POST method for query and mutation operations, and may also accept the GET method for query operations."

Supporting a body in the GET request was an odd requirement for something I had to code up with another engineer.
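For contrast, the GET form the GraphQL-over-HTTP spec actually describes puts the query document and variables in URL parameters rather than a body. A rough sketch with a hypothetical endpoint and schema:

    # GraphQL query over GET: document and JSON-encoded variables as URL params.
    import json
    import requests

    params = {
        "query": "query Product($id: ID!) { product(id: $id) { name price } }",
        "variables": json.dumps({"id": "123"}),
    }
    resp = requests.get("https://api.example.com/graphql", params=params, timeout=10)
    print(resp.json())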


And the whole GET/POST difference matters for GraphQL at scale: we saved a truckload of money by switching our main GraphQL gateway requests to GET wherever possible.


Servers can support it but not browsers.


I see them outside. I live in a big city though which may explain it.


Just on Claude Code, I didn't notice any performance difference from Sonnet 4.5, but if it's cheaper then that's pretty big! And it kinda confuses the original idea that Sonnet is the well-rounded middle option and Opus is the sophisticated high-end option.


It does, but it also maps to the human world: Tokens/Time cost money. If either is well spent, then you save money. Thus, paying an expert ends up costing less than hiring a novice, who might cost less per hour, but takes more hours to complete the task, if they can do it at all.

It's both kinda neat and irritating, how many parallels there are between this AI paradigm and what we do.


It feels like "sharp edges" often means "I once had a horrible bug due to accidentally misusing this". But if you cut features based on that definition, you'd soon have an empty programming language.


Java was apparently quite successful, though. So maybe they got the balance right for their goal?

