AFAICS, 412 is only meaningful when the request did specify a precondition (which failed), while 428 means that the request didn't specify some precondition (which was required by server).
These all seem immediately practical status codes that add semantics I've been wanting. Great! In particular I am happy about the 429 Too Many Requests status code, as every time I've done rate throttling I've had to quibble over what code to actually send back.
What code did you use? I've typically used a 408 in a throttling use case but it never seemed to fit well (our client handled this response from our server with an exponential back off so we were not relying on another client to handle it correctly).
503 seems the most appropriate of the current codes ("The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.")
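With 429 the server can also send a Retry-After header telling the client when to try again. Here's a minimal sketch of per-client throttling in Python (the window/limit numbers and the in-memory `_hits` table are illustrative, not from the RFC):

```python
import time

WINDOW = 1.0   # seconds per rate window (illustrative)
LIMIT = 3      # requests allowed per window (illustrative)

_hits = {}     # client id -> list of recent request timestamps

def check_rate(client, now=None):
    """Return (status, headers) for a request from `client`."""
    now = time.time() if now is None else now
    recent = [t for t in _hits.get(client, []) if now - t < WINDOW]
    if len(recent) >= LIMIT:
        # 429 Too Many Requests, with a hint about when to retry
        retry = WINDOW - (now - recent[0])
        return 429, {"Retry-After": str(max(1, round(retry)))}
    recent.append(now)
    _hits[client] = recent
    return 200, {}
```

The point of 429 over 503 is exactly this: the server is fine, it's this particular client that needs to slow down.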
A bit off-topic, but does anybody know what typesetting system was used to produce this document? It looks like troff manpages. Is that what is used for RFC documents as well? I love the old-school look of it.
It was created using rfcmarkup[1], a tool made by the IETF. A quick look at the source of the page in the "generator" meta tag would have told you that (that's how I just found out) :-).
The general consensus seems to be that since this can never be all things to all people, we can't ever implement it. To me it seems like this is crying out for a Worse is Better solution.
511 seems a little pointless -- browsers can treat it differently, but if the intermediate gateway intends to be malicious, then it won't return 511 anyway. The only useful thing I can see is avoiding accidents/attacks on the gateway compromising its clients.
It has nothing to do with security, but it's not pointless. It tells the client that this response did not come from the server it attempted to contact. This is especially useful for non-browser clients that will otherwise simply choke on the 302 redirect normally used for this purpose, but it could also be useful for browsers to present a better UI for signing on to the internet.
My only concern is that 511 could be seen as legitimizing the practice of putting silly click-through terms-of-service roadblocks on free wi-fi (making it impossible for devices to connect without a human operating a web browser), but since people are doing it anyway we might as well support it properly.
It'd be interesting to see whether adding authentication/sign on to a protocol like DHCP would fit better. The hijacking of HTTP, while it obviously works well in the default case, seems nasty and this error code fixes the wrong problem. Better to have a DHCP field that tells you to visit a specific website to log in; then the OS could display that website when you connect to the wireless.
A further advantage of putting it into the connectivity protocols is that it makes automatic payment and negotiation by the client's software possible; for example, connecting to the cheapest wifi in range.
There are some problems worth solving that take longer than 10 years to figure out. Given time, a standard for discovering and connecting to a commercial network will emerge. Cell phones already have something like this with their preferred roaming lists, and AIUI long-haul telecoms will switch call routing and pricing on the fly based on other telecoms' pricing and routing changes. It's a matter of getting the standard designed, ratified (optional step), widely adopted by commercial access point manufacturers, then deployed on end-user devices.
I got bit by this using the Emacs package manager. It follows redirects, so it ended up trying to fetch a package and getting back a bunch of HTML with a 200 response code, which it tried to byte-compile with about as spectacular a failure as you would imagine.
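One cheap defense for clients like that is to check both the status code and the Content-Type before treating the body as a package. A sketch (the function name and the HTML heuristic are mine, not the Emacs package manager's):

```python
def looks_like_package(status, content_type):
    """Reject responses that are clearly a sign-in page, not a package."""
    if status != 200:
        return False
    # A captive portal almost always serves HTML; package archives don't.
    return not content_type.lower().startswith("text/html")
```

With 511, the client wouldn't even need the heuristic; the status code alone says "this isn't the thing you asked for."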
> My only concern is that 511 could be seen as legitimizing the practice of putting silly click-through terms-of-service roadblocks on free wi-fi (making it impossible for devices to connect without a human operating a web browser), but since people are doing it anyway we might as well support it properly.
Filter on User-Agent to only show terms to browsers. If a non-human agent spoofs a browser signature, it's their own fault.
Changing behavior based on user agent is antithetical to the goals of having a semantic web. The ideal is to allow the user to use whatever software they want to connect to the network and access information.
I don't think 511 is so much for security; I see it as being more focused on usability. For example, if you have a thick client program that talks to a backend server (perhaps for updates) and it receives a 511 response, it can then tell the user that they need to log in to the proxy (or just pop up a web browser for them).
Exactly: a good example is how OS X currently works - every time you connect to a new WiFi hotspot it fires a request to a static page on Apple's server in the background, to determine whether or not you have to sign in. If it detects unexpected content it assumes it's a sign-in page for a WiFi hotspot, and presents it modally.
Under this proposed system the OS can simply check for an appropriate HTTP response code, which makes life a little easier.
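That check becomes almost trivial. A sketch of such a connectivity probe (the probe URL and the 204-means-online convention are assumptions on my part, not part of the RFC):

```python
from urllib import error, request

PROBE_URL = "http://example.com/generate_204"  # hypothetical probe endpoint

def classify(status):
    """Map a probe response's status code to a connectivity state."""
    if status == 511:
        return "captive-portal"   # network requires sign-in
    if status == 204:
        return "online"           # probe answered as expected
    return "unknown"              # e.g. 200 with injected portal content

def probe(url=PROBE_URL):
    """Fetch the probe URL and classify the result."""
    try:
        with request.urlopen(url, timeout=5) as resp:
            return classify(resp.status)
    except error.HTTPError as e:  # urllib raises on 4xx/5xx
        return classify(e.code)
```

No content sniffing needed: the status code carries the answer.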
Isn't this something that the http server handles? I am not in web development except for basic php and cgi scripts. Are people actually writing web app code to dictate what http code comes back on each request?
I think it's fine. 4xx is a way of saying the client did something wrong that it can potentially fix (i.e. slow down requests). 5xx is more opaque meaning there is nothing the client can do to fix the problem.
To continue: it's "you're sending too many requests", not "the server is overloaded/getting too many requests". Hence it's a client error (4xx), and not a server error (5xx).
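And because it's a client error the client can fix, the natural response is the exponential backoff mentioned upthread. A sketch (the request callable is injected so the retry logic stands alone; the delay parameters are arbitrary):

```python
import time

def fetch_with_backoff(do_request, max_tries=5, base=0.5, sleep=time.sleep):
    """Retry on 429 with exponential backoff; return other statuses as-is.

    `do_request` returns (status, body); `sleep` is injectable for testing.
    """
    for attempt in range(max_tries):
        status, body = do_request()
        if status == 429:
            sleep(base * (2 ** attempt))  # 0.5s, 1s, 2s, ...
            continue
        return status, body
    return 429, None  # still throttled after all retries
```

A smarter client would prefer the server's Retry-After value over the computed delay when one is present.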
One quibble. Perhaps I don't quite get the philosophy behind these, but wouldn't it be more useful to have a uniform structure to the responses? E.g., how useful is it to the average user to get a human-readable response for 428 when it's likely to be consumed by an app (i.e., JS) which would have to parse the suggested <p> tag?
You can return any content type you want and you should use content negotiation to return the type the client is expecting. The HTML descriptions are only suggestions.
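A sketch of what that negotiation might look like on the server side (the function and its JSON shape are my invention; the HTML fallback mirrors the RFC's suggested markup):

```python
import json

def error_body(status, reason, accept_header):
    """Pick a representation of an error based on the Accept header."""
    if "application/json" in accept_header:
        # Machine-consumable variant for API clients
        return ("application/json",
                json.dumps({"status": status, "reason": reason}))
    # Fall back to the human-readable HTML the RFC suggests
    return ("text/html",
            "<html><body><p>%d %s</p></body></html>" % (status, reason))
```

So a JS app sending `Accept: application/json` never has to scrape a `<p>` tag.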
428 seems interesting as a way to serialize objects and store them client-side, but assure the version stays consistent with the server. Or am I overthinking it?
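That's roughly the lost-update pattern 428 is designed for: the server refuses unconditional writes outright, and a stale precondition gets the existing 412. A sketch with a toy in-memory store (the ETag versioning scheme is illustrative):

```python
# Each object carries an ETag; writes must be conditional on it.
store = {"obj": {"etag": "v1", "data": "hello"}}

def put(key, data, if_match=None):
    """Return an HTTP-style status code for a conditional update."""
    if if_match is None:
        return 428  # Precondition Required: client must send If-Match
    current = store[key]
    if if_match != current["etag"]:
        return 412  # Precondition Failed: someone else updated first
    current["data"] = data
    current["etag"] = "v" + str(int(current["etag"][1:]) + 1)  # bump version
    return 204
```

So the client serializes the object along with its ETag, and any write based on a stale copy fails loudly instead of silently clobbering a newer version.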
511: 'Unknown clients then have all traffic blocked, except for that on TCP port 80, which is sent to a HTTP server (the "login server") dedicated to "logging in" unknown clients, and of course traffic to the login server itself.'
OMG really? Do we want to make HTTP even bigger? Don't we want to dump it for good and look for something more suitable to what the modern web needs instead?
Rewriting a large and complex enterprise-class application is tough enough, and a task that can variously (and expensively) fail.
Rewriting the whole of the World Wide Web?
Your replacement had better have a solid compatibility and migration path with "legacy" HTTP; provide a substantial improvement over what HTTP and the existing tools provide; offer clients and a migration path for the majority of the platforms, tools, browsers, embedded browsers, and embedded web servers in use; and have the budget and the time to make the replacement push.
I don't think SPDY is really intended to replace HTTP; it's just speeding it up quite a bit. All the messages exchanged between client and server are still HTTP when using SPDY.
No, SPDY replaces HTTP but keeps many of the same high-level semantics. The messages are not HTTP; for example, the headers are a binary format and compressed which isn't possible with HTTP.
Well technically HTTP as an application-layer protocol is unchanged. So I don't really understand your comment. The messages are HTTP. Whether the headers are compressed by an underlying protocol (such as SPDY) doesn't seem to be relevant.
An application layer protocol is the layer above TCP and defines the formats of those messages over that transport. Other application layer protocols are, for example, FTP, DNS, and SMTP. SPDY and HTTP have completely different (and incompatible) message formats even though they are meant for the same task. And it isn't HTTP tunneled through SPDY; there are significant differences.
For example, although SPDY supports HTTP methods (GET, POST, PUT) the method and parameters are specified as headers in the request. Also, all the header names are lower-cased in SPDY. The client and server don't communicate by a single stream as in HTTP but instead communicate in frames over the stream that can contain multiplexed requests and responses.
At the very high level, you might be able to build an API that could handle web requests and responses over HTTP or SPDY interchangeably but that API isn't "HTTP".
I don't really want to argue about the semantics of OSI and what is or isn't an application layer protocol. I'll just point you to Google's own diagram of where SPDY fits in which is in the "SPDY design and features" section of the whitepaper here: http://www.chromium.org/spdy/spdy-whitepaper.
Here's the text from that section:
"SPDY adds a session layer atop of SSL that allows for multiple concurrent, interleaved streams over a single TCP connection.
"The usual HTTP GET and POST message formats remain the same; however, SPDY specifies a new framing format for encoding and transmitting the data over the wire."
Check out the "Main differences from HTTP" section. It's clearly not the same format as HTTP. Whatever they mean by "GET and POST message formats remain the same" it's not what you're thinking it means.
There's no confusion about the "semantics of OSI" -- you can't take a client that talks only HTTP and get it talk to a server that talks only SPDY (and vice-versa). They are different application level protocols, period.
The "Main differences from HTTP" section says this:
"SPDY is intended to be as compatible as possible with current web-based applications. This means that, from the perspective of the server business logic or application API, nothing has changed. To achieve this, all of the application request and response header semantics are preserved. SPDY introduces a "session" which resides between the HTTP application layer and the TCP transport to regulate the flow of data."
This even explicitly says that SPDY resides underneath the HTTP application layer.
So from the point of view of e.g. GMail, it is making HTTP requests via XmlHttpRequest still is it not? And from the point of view of my Django application sitting behind some future apache/nginx SPDY module I will still be accepting HTTP requests and responding with HTTP responses will I not?
It seems like SPDY sits in the same layer as SSL/TLS in HTTPS. It doesn't replace HTTP, merely changes how the messages are transported over the wire. To use your logic, you can't point an HTTPS-only client at an HTTP server and have it work or vice-versa, and yet I quote from wikipedia:
"HTTP operates at the highest layer of the OSI Model, the Application layer; but the security protocol operates at a lower sublayer, encrypting an HTTP message prior to transmission and decrypting a message upon arrival. Strictly speaking, HTTPS is not a separate protocol, but refers to use of ordinary HTTP over an encrypted SSL/TLS connection."
> So from the point of view of e.g. GMail, it is making HTTP requests via XmlHttpRequest still is it not?
No, it's making SPDY requests. It is, however, making that difference insignificant to the application developer using the XMLHttpRequest API. HTTP is a protocol, not an API. This is exactly what the paper says; the protocol is designed to make the API differences very minimal.
> And from the point of view of my Django application sitting behind some future apache/nginx SPDY module I will still be accepting HTTP requests and responding with HTTP responses will I not?
No, Django doesn't talk HTTP -- Django talks WSGI or CGI. That hides many of the details of the protocol in use -- I imagine that you could run Django with a server that sends/receives web requests over FTP. That doesn't make FTP into HTTP.
SPDY does not sit at the same level as SSL. With SSL, HTTP packets are tunneled through it. SPDY replaces HTTP; there is no fully formed HTTP message inside. If SPDY were merely multiplexing and compressing HTTP streams in frames, I would agree that it would be like HTTPS. But SPDY doesn't contain HTTP streams; it's all right there in the document.
> No, Django doesn't talk HTTP -- Django talks WSGI or CGI. That hides many of the details of the protocol in use -- I imagine that you could run Django with a server that sends/receives web requests over FTP.
I don't know how you would make, e.g., that[1] work on FTP. Basically you would end up encapsulating HTTP inside FTP, not using one in place of the other.
A CGI script takes raw (yet conveniently split by the web server) HTTP data from env vars and stdin, and outputs HTTP data directly to stdout. Django's HttpRequest and HttpResponse objects merely provide convenient helpers for that and do not actually abstract anything away. The web server pipes the response as-is straight to the client via TCP (unless there is SSL, in which case it will just obliviously encrypt the stream).
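That env-vars-in, status-line-out shape can be sketched in a few lines (a toy; real servers set many more CGI variables than the two used here):

```python
def handle(environ):
    """Build a raw CGI response: status line, headers, blank line, body."""
    if environ.get("REQUEST_METHOD") != "GET":
        return "Status: 405 Method Not Allowed\r\n\r\n"
    body = "path was %s\n" % environ.get("PATH_INFO", "/")
    return ("Status: 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            "Content-Length: %d\r\n\r\n" % len(body)) + body

if __name__ == "__main__":
    # As a real CGI script: request metadata arrives via the environment,
    # and the raw response goes straight to stdout.
    import os
    import sys
    sys.stdout.write(handle(dict(os.environ)))
```

Nothing here hides the HTTP wire format; the script emits it directly, which is why swapping the transport underneath (as SPDY does) requires the server to transcode.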
To comply with that, as SPDY is a replacement to HTTP, mod_spdy transcodes stuff live [2].
Why exactly do you say HTTP is unsuitable for the modern web? If anything, the problem is that we don't abide by it enough.
More codes - which, by the way, do not increase the size of the packets - are better for the "modern web", the one where REST APIs are being adopted and which has more and more automated clients which require computer readable information.