TLS 1.0, 1.1 officially deprecated (ietf.org)
390 points by cmckn on March 23, 2021 | 134 comments


I work in the video streaming industry, and we continue to support TLS 1.1 extensively for a wide range of "smart" TVs and set-top boxes. It's very frustrating; there was even a long period where large CDNs were trying to shut down 1.1 and then realised they'd lose a lot of business in the streaming space if they did...


Customers regularly ask us whether we support TLS 1.0 and 1.1 for this very reason. We do, and support policies to raise the min version, but that’s not the default.

Smart TVs and many streaming boxes often ship with vendored libcurl/OpenSSL/other libraries that are already years old when the device itself is new. It is frustrating and insecure, but cutting these users out isn’t a clear solution either.


...then again, I imagine the security of me streaming Resident Alien on my Roku isn't really a big concern for me.

hmm... unless my Amazon credential handshake is in that?


It's not an issue, until someone finds a way to MITM the request and send a special payload to your Roku that is then opened by an unpatched ffmpeg (https://www.linuxcompatible.org/story/asa2020074-ffmpeg-arbi...), allowing an RCE that turns your Resident Alien into a Resident Zombie.


Your credentials might go to a different endpoint, but the device is still limited in its TLS support as a client.

The attacks aren’t entirely practical, and the threat model for “someone cares enough to MitM my streaming connection” isn’t a common one, but it’s much closer to practical compared to attacks on TLS 1.2 & 1.3.


Possibly.

But it's not like the attacks on TLS 1.0 and 1.1 are trivial. To successfully break a single encrypted connection requires massive server farms.

The people actually attempting to maliciously break TLS handshakes are working on much bigger targets.


But if these smart TVs are connected, they can also be updated, right?


I know [edit: guessing] you're not being serious, but until there are laws mandating this in given markets, the security risks will ALWAYS be unknowingly carried by the consumer. In a hypothetical world where a manufacturer differentiates by advertising a product as "Now with security updates guaranteed for 10 years!", consumers will suddenly realise they've been sold garbage until that point. It just won't happen.


Without a promise based on source code escrow, it's just not worth anything to me. I VERY much support this sort of thing, though.


Or for that matter: any digital device that is not a computer or a mobile phone.

When was the last time you applied a security update for your networked printer? I'm guessing never, because no printer vendor has a security department, none have security updates, etc...

This is why smart networked printers are where state-sponsored attackers like the NSA like to hide their persistent malware...


They can be, but like phones the manufacturers often at best keep up with fixes/upgrades for a year or so after release after which they only care about newer products unless there is a really embarrassing security problem.


They are rarely updated


They won't, because the manufacturers aren't bothered about it.


"Oh, there's a critical security hole in one of our products? Sorry, it's discontinued, buy a new one to fix it." So frustrating.


I am in the same industry and we have the same problem. We are moving to having "insecure" proxies that support TLS 1.1 for devices that can't update. It won't add much security, but at least it demonstrates it was an active decision to support it rather than a config mistake.
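
For what it's worth, a minimal sketch of how that "explicitly allowed" legacy listener might be expressed, assuming a Python front end (the real proxies are more likely nginx or HAProxy, and the function names here are made up); whether TLS 1.1 actually handshakes also depends on the local OpenSSL build and its security level:

    import ssl

    def modern_context(cert, key):
        # Main listener: modern clients only.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        ctx.load_cert_chain(cert, key)
        return ctx

    def legacy_device_context(cert, key):
        # Bound only to the listener that old smart TVs / set-top boxes hit.
        # Keeping this in a separate, clearly named context documents that
        # allowing TLS 1.1 here is a deliberate decision, not a global default.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.minimum_version = ssl.TLSVersion.TLSv1_1
        ctx.load_cert_chain(cert, key)
        return ctx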


Interesting, I wonder if we'll ultimately see legislation pass at some point that breaks these older TVs or non-supported devices.


I hope right-to-repair passes first.

Only the tech industry is so brazenly authoritarian about breaking things that used to work.


I don't think that saying technology is an evolving problem is an "authoritarian" stance

Software is written by human beings, and human beings make mistakes. The fact that these devices can't be updated by the vendor is implicitly an economic problem.

Right-to-repair likely wouldn't "solve" these problems for 99% of people, unfortunately. Netflix would never have its customer service people advocating downloading custom firmware for smart TVs from "some Russian website", for instance.

The real change it would cause is allowing "tech savvy" people to carve out a niche repairing and reselling used but functional devices. Which, even if it's only a 1% decline in sales, is an unacceptable proposition for companies.

Selling millions of something every year somehow isn't worth it if even a single PENNY is left on the table, or "spent" in the wrong place instead of lining their pockets


The authoritarianism is that companies should be aware that technology progresses and still lock users out from their hardware so they can’t fix it when the world inevitably changes.


> Netflix would never have its customer service people advocating to download custom firmware for smart TV's from "some Russian website" for instance

They wouldn't tell that to people directly, but they could tell you to go to a repairman, who would then proceed to download that same firmware from the Russian website and apply it, and the TV now works. Netflix gets to keep their reputation, and the customer is happy with the TV.


If you're interested in real-world SSL/TLS stats (updated monthly):

https://www.ssllabs.com/ssl-pulse/

Feb 2021 data:

- 99.3% sites support TLS 1.2 (or better)

- 42.9% sites support TLS 1.3

- 0.7% sites support only TLS 1.0


Could you please ELI5 the difference between SSL and TLS? My office is also moving to TLS 1.2, and the communication seemed to use both SSL and TLS interchangeably. The Windows registry has entries for both SSL and TLS. I am confused.


There is no difference. SSL was invented by Netscape; when it was brought to the IETF to be standardized, Microsoft did not want to use a name Netscape had advertised, so they forced a name change. Thus "SSL 3.1" (the IETF update to SSL 3) was called "TLS 1.0". This was in 1999.

TLS 1.1 uses a version encoded in two bytes that would read as "SSL 3.2", and TLS 1.2 uses bytes that would read as "SSL 3.3". TLS 1.3 pretends to be TLS 1.2 on the wire to pass through proxies, but internally uses the two bytes that would read as "SSL 3.4" to indicate its version.
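
For reference, the two-byte version values actually used on the wire (standard protocol constants, sketched here as a little Python lookup table rather than anything quoted from the RFCs):

    # (major, minor) as sent on the wire; TLS 1.3 keeps 0x0303 in the legacy
    # version fields and signals 0x0304 via the supported_versions extension.
    WIRE_VERSIONS = {
        "SSL 3.0": (3, 0),  # 0x0300
        "TLS 1.0": (3, 1),  # 0x0301, i.e. "SSL 3.1"
        "TLS 1.1": (3, 2),  # 0x0302, i.e. "SSL 3.2"
        "TLS 1.2": (3, 3),  # 0x0303, i.e. "SSL 3.3"
        "TLS 1.3": (3, 4),  # 0x0304, i.e. "SSL 3.4", via supported_versions
    }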


The confusion is understandable.

_TECHNICALLY_ the only thing that exists is TLS. The implementation history is SSL 3.0 < TLS 1.0 ... 1.3

However, SSL is still used colloquially in conversation, e.g. "SSL certificate" and in many legacy config flags, such as Firefox's `about:config`.

For best clarity, check the details of all those settings, even the SSL ones. But TLS is the term for the modern standards.


SSL = TLS version 0.x. Earlier versions of the same thing.

Colloquially we still say "SSL" a lot when referring to TLS.


This is one of my greatest pet peeves, hearing people ask for "SSL certificates" and seeing "your payment card information is secured with SSL" at the bottom of websites everywhere.

I know it really doesn't matter in the grand scheme of things, but man does it make my eye twitch lol.


Meh, semantics. Lot of things live on. "Save to disk"? Granted hard drives kind of resemble "hard disks" still. TV? There are probably dozens of easily accessible examples for a well rested brain but mine is not.

I've tried using TLS instead of SSL but due to the friction of general acceptance of SSL I don't see much point in trying to fight.

I wish TLS 1.4 would be called Transport Security System Layer (TSSL) or something that can be referred to as SSL, so we can all go back to normal again. ;)


> Granted hard drives kind of resemble "hard disks" still.

No, it is a good example by now. A modern M.2 SSD looks much more like a stick of RAM than any kind of spinning disk.

Sure, spinning hard disks are still around, and might stick around for quite a while (data centers, archival...), but when someone at their desk says "save on disk" nowadays, it increasingly will be on such an entirely non-disk-looking disk.


I've got boxes of 3D printed versions of the "Save" icon from years ago.

They even have aluminium sliding parts and what looks like some sort of disk inside...

/s


I recently saw an audio recorder that is marketed as automatically transcribing your meetings and sending the text to an app. As a non-native speaker of the language where I live, this was appealing, until I read the FAQ. It said something like "What about security of my data? A: Don't worry, all data is transmitted using a secure tsl connection" .. like that was the beginning and end of the data security story for audio recordings of your private meetings. Sometimes I think this country still being in the paper and fax age is the only thing saving it from itself..


I’m routinely amazed by this too. Honestly when I see that this is the sole privacy or security disclaimer in anything but the most trivial of websites’ FAQ, I run the other way. Nothing on privacy, nothing about storage at rest, nothing about access controls and what customer data staff can access. It just comes across as indicative of a security-through-checkboxing view on security and a “heh, yeah sure buddy” approach to privacy.


At this point, just say HTTPS, and refer to TLS if you're speaking about the protocol itself.


Yeah, this is what I've been leaning towards as well. People (both technical and non-technical) understand it. If you say TLS, people either know what you mean, think you're being a smartass, or have no idea what you're talking about. With HTTPS everyone knows.


Must be nice to have such an informed audience. I have to call it "the lock icon" with my team. They don't really get security, but they do get client perceptions of security.


TLS can be used on other protocols than HTTP (think SMTPs, IMAPs, OpenVPN, etc.)


Urgh! With SMTP it is even worse, because people talk about “SSL” vs “TLS” in the config dialogs.

But they don't actually mean SSL vs TLS. They mean an implicit TLS connection (like HTTPS uses) vs StartTLS (where you start out with plaintext SMTP and then negotiate TLS as an extension).
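
A small Python sketch of the two modes those dialogs are conflating (mail.example.com is a placeholder host); both end up running TLS, the difference is only in how the connection starts:

    import smtplib
    import ssl

    ctx = ssl.create_default_context()

    # "SSL" in mail-client dialogs: implicit TLS from the first byte (port 465).
    with smtplib.SMTP_SSL("mail.example.com", 465, context=ctx) as conn:
        conn.noop()

    # "TLS"/"STARTTLS" in those dialogs: plaintext SMTP first, then upgrade (port 587).
    with smtplib.SMTP("mail.example.com", 587) as conn:
        conn.starttls(context=ctx)
        conn.noop()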


I'm aware of that and that's why I said in that case, say "TLS" when you're referring to the protocol itself.


And for the client-side: 90.29% support according to [0]

[0] https://caniuse.com/tls1-3


That's only when you are in a browser. API clients are not evergreen the way browsers are.


Same with compiled applications. Windows is the clear laggard here, but still:

https://docs.microsoft.com/en-us/windows/win32/secauthn/prot...

.NET says to use the OS Schannel protocols. Even the latest Windows doesn't do TLS 1.3 yet.

MSSQL has no support for it either.

Even on the end-user side, one I like to point out is: disable TLS 1.0 and try to launch Discord. Their update CDN only supports TLS 1.0, so the application doesn't function unless the OS client will also use TLS 1.0.

I opened a ticket with them about it; it was marked solved with this response:

>Sadly, this is currently working as intended. However, if you would like to see changes to these systems in the future, then you can definitely vote up the suggestions at feedback.discordapp.com.

We are a long way from removing TLS 1.0.
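
If you want to check this sort of thing yourself, here is a rough probe sketch (my own toy, not Discord's or Microsoft's tooling, and the host is a placeholder): it pins a client to each protocol version in turn and reports which handshakes the server accepts. The results are limited by what your local OpenSSL build still allows, so a failure can mean "my client refuses it" rather than "the server lacks it".

    import socket
    import ssl

    def probe(host, port=443):
        for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                        ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False        # probing only; don't do this for real traffic
            ctx.verify_mode = ssl.CERT_NONE
            ctx.minimum_version = version
            ctx.maximum_version = version     # force exactly this protocol version
            try:
                with socket.create_connection((host, port), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host) as tls:
                        print(f"{version.name}: accepted ({tls.version()})")
            except (ssl.SSLError, OSError) as exc:
                print(f"{version.name}: rejected ({type(exc).__name__})")

    probe("example.com")  # placeholder host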


Some details missing on that page: Windows supports TLS 1.3 as of version 1903 [1], just disabled by default.

I believe it's enabled by default in insider builds as of this summer.

[1] https://devblogs.microsoft.com/premier-developer/microsoft-t...


That's where "supported as in the OS can attempt to use it" and "supported as in stable and ready for prod" diverge.

Also the link in that document is unsurprisingly 404'd. If only IIS had a method of doing redirects and rewrites :?

https://web.archive.org/web/20200327011332/https://docs.micr...


I recently tried to get an old version of Debian up to speed, to bring some old and unique hardware back to life. The lack of support for TLS 1.3 made it impossible to download much of anything. It was basically impossible to update or upgrade. Most of the web (and HTTPS APIs) was inaccessible via curl. I imagine similar issues plague retrocomputing with other OSes.

Are there any resources for retrocomputing in the TLS 1.2+ era? Maybe something like an HTTPS downgrade proxy that requires manual installation of a new cert and is targeted at retro OS isolation?
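
In case it helps anyone else asking the same question, here is a very rough sketch of the simplest variant of that idea (an untested toy, not an existing tool): a plaintext-HTTP forward proxy for an isolated retro LAN that re-fetches everything over modern TLS. The old machine is pointed at this proxy for http:// URLs, which sidesteps the local-CA-certificate step; if the old client insists on speaking HTTPS itself, you need a full MITM setup like mitmproxy instead.

    import ssl
    import urllib.request
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    from urllib.parse import urlsplit

    class DowngradeProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # A proxy-style request line looks like: GET http://example.org/page HTTP/1.1
            parts = urlsplit(self.path)
            if not parts.netloc:
                self.send_error(400, "expected an absolute http:// URL")
                return
            upstream = "https://" + parts.netloc + (parts.path or "/")
            if parts.query:
                upstream += "?" + parts.query
            ctx = ssl.create_default_context()  # modern TLS on the outside
            try:
                with urllib.request.urlopen(upstream, context=ctx) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Type",
                                     resp.headers.get("Content-Type", "application/octet-stream"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except Exception as exc:
                self.send_error(502, f"upstream fetch failed: {exc}")

    if __name__ == "__main__":
        ThreadingHTTPServer(("0.0.0.0", 8080), DowngradeProxy).serve_forever()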


You can definitely put a reverse proxy in front which will terminate the connections. We can't compromise security because of legacy hardware.


Wouldn't that be a regular proxy, not a reverse proxy?


Yes, but only by techie-social convention. Outside of web servers, the party hidden by the proxy is conveyed in context.


I think it’s pretty relevant actually? You can’t use HAProxy, it won’t work, because it’s only a reverse proxy. Squid will work.


I wasn't saying it's irrelevant, I was saying that stipulating reverse (or sometimes 'forward-proxy' if there's a need to disambiguate from a reverse-proxy) is somewhat specific to web proxies or other network protocol proxies. This is in contrast to say, a legal or rhetorical proxy.


I put this together for Macs, and have been using it on my own machine. Works very well. https://jonathanalland.com/legacy-mac-proxy.html

Only thing I can’t seem to get working is PPC support, so you’d have to run the proxy on a secondary Intel Mac.


Doesn't APT work over http with GPG signatures, by default?

Why are you using an old version of curl to update your debian, instead of pulling a new binary (or source) package?


> Are there any resources for retrocomputing in the TLS 1.2+ era? Maybe something like an HTTPS downgrade proxy that requires manual installation of a new cert and is targeted at retro OS isolation?

I was looking into this last year when Wikipedia cut off access for older cipher suites. mitmproxy worked, but is the wrong tool for the job (too slow for everyday use). I needed something leaner that I could chain into other proxies, but got distracted before I could find a solution.


> The lack of support for TLS 1.3 made it impossible to download much of anything.

I think you must have lacked support for TLS-1.2? Very few websites/servers require TLS-1.3


Yes, I believe you are correct.


I ran into this problem recently with old Debian containers; I had to use a statically built curl. See my repo[1] with these templates.

[1] https://github.com/XVilka/debian-oldies


Can't you install a more recent curl from a non-HTTPS repository?

EDIT you can also use a more recent OpenSSL backport https://github.com/mezentsev/OpenSSL-Backport


I went down the rabbit hole of trying to piecewise update packages, and ran into glibc version differences, packages absolutely requiring GnuTLS conflicting with others absolutely requiring OpenSSL, and issues with changed certificate paths in newer versions of Debian that many apps don't know about, so TLS was just broken in everything. After many hours I gave up on that route.


I had a similar experience a few months ago: a fresh Debian, then the stuff needed to run Matrix/Synapse. Then I went to do some addons, and they balked at one installed thing not having the proper SSL stuff, which led into a rabbit hole of error messages and Python stuff with very few search results. The only solution I could find that actually worked was someone posting a thing saying to downgrade TLS to a lower version.

Figured that might fix the main problem at the time, but then create new problems for other things. So I gave up.

Will likely do a new box and new install with a different domain name to see if I can get it all working properly with modern TLS/SSL stuff (I think I saw something about a recent Debian update doing some things with SSL and TLS).

But no one got back to me on exporting user data from Synapse, of which I really just need usernames and email addresses, to a fresh install. So I gave up on that too.


Which hardware were you trying to bring back to life?


My old IoT (but without the cloud) startup's products. I still have a few left over that I wanted to use around my house. I had a crosscompile and flashing process set up that worked well in 2011, fixed it in 2014, but it's broken again. All of the embedded software was designed around a realtime kernel and SysVInit, and SystemD is just one obstacle of a thousand that has to be surmounted to even get the hardware and software to boot in a special snowflake setup on a newer Debian, let alone something reliable and repeatable.


sysvinit is still available in Debian, it just isn't the default.


On a related note, I'm thinking of setting my personal site to TLS 1.3 only. It's still allowing TLS 1.2 with strong ciphers, but apart from shutting out anyone using older smartphones or operating systems, what's the harm? Could I fall out of favour with Google & co for compliance reasons?

I'm thinking of SSL scan tools like this one [1], which gave me an "F" for how I configure my SSH servers, only allowing very modern ciphers and kex without backward compatibility.

[1] https://github.com/rbsec/sslscan


> thinking of setting my personal site to TLS 1.3 only

I wouldn't recommend doing that. You might end up blocking users using a proxy (e.g. for privacy reasons), people on corporate networks, people in China [1], and many other non-traditional browsers (e.g. browsers for the blind, game consoles), users on older versions of curl/wget/lynx, older mobile phones, etc.

TLS 1.2 when correctly configured is still perfectly fine. And users with modern browsers will connect with TLS 1.3. TLS 1.3 also has protections against downgrade attacks.

Another good scanner for SSL is Qualys SSL Server Test [2]. Getting an A+ score there doesn't require disabling TLS 1.2

For secure configuration, you can use Mozilla SSL Configuration Generator [3]

1. https://www.zdnet.com/article/china-is-now-blocking-all-encr...

2. https://www.ssllabs.com/ssltest/

3. https://ssl-config.mozilla.org/


> perfectly secure.

I think you're trying to use 'perfectly secure' here the way one might say 'perfectly fine', which changes the reading a lot from a first pass.

That said, I agree, there is a subset of TLS 1.2 that is suitable.


Agreed, updated that


Honest question, but no deep answer requested: why do we have to choose the ciphers?

Not only does it add 2^(number of ciphers) ways to misconfigure the server with a gaping security leak, but the names are obscure and the strings are NEVER the same between nginx and the SSL Labs website which advises what is correct; plus, who knows whether SSL Labs is a trustworthy website. Also, 6 months later the ciphers might not be up to date. Why do I have to choose ciphers at all? Why isn't this TLS 1.2.1, then 1.2.2, and so on?

It's like going to Amazon, choosing n resistors by guessing their value, going to m people asking them if it's 12 ohms, most of them having no clue what they are talking about, and then using it in an airport security device that can put people in jail.


Back when TLS (or rather SSL) was originally designed, people were very aware of cipher vulnerabilities - there was a lot of academic attention on them, there was a recent history of ciphers being broken, and US export restrictions forced international programs to support a known-weak cipher (DES). People were much less aware of protocol vulnerabilities - security protocols were nowhere near as widespread and weren't really the subject of academic study. So at the time people expected to need to upgrade ciphers relatively often but upgrade the protocol rarely, if at all, and designing the majority of the protocol to be fixed with the ciphers as a pluggable, swappable part made sense.


Different ciphers have different tradeoffs (security/vulnerablities/performance/hardware acceleration/compatibility/newness/etc.).

Indeed, more choice means more ways to mess things up, more complexity, more bugs and more vulnerabilities.

That's why TLS 1.3 reduces the cipher suite choice to 5 ciphers, down from 37 in TLS 1.2 (across previous versions there were 319 in total) [1]

1. https://owasp.org/www-chapter-london/assets/slides/OWASPLond...


In some ways this is OpenSSL's fault. So the first answer is, your tooling decided to make you answer this question instead of picking its own answer based on the expertise of the authors. A 1990s TV could have a UI where you need to explicitly configure which pins of a SCART connector have what signals on them, but that would be crazy so the TV just worked with the usual SCART pins as defined and if your pin-out is weird it won't work. The TLS 1.2 standard itself doesn't say "Make the user or administrator pick from this huge list". You can make cipher suite decisions like this in SSH too, but you probably mostly don't because the defaults are fine.

Now, the technical answer as to why there's something to pick from at all goes like this:

TLS is a negotiated protocol. Some HN regulars are convinced negotiation is a bad idea, and indeed they will point to TLS as an example. But the idea is that with a vast population of servers and clients on the Internet it isn't actually practical to hold a flag day (replacing all software on clients and servers immediately) to upgrade TLS. Different clients and servers may have different priorities, so we'd like them to be able to agree each time on something they're both content with. For example on your general purpose laptop there is hardware AES, so AES ciphers are fast and secure, but some cheap devices don't have that, so for them AES is annoyingly slow/ power hungry. As a result they'd prefer ChaCha20 if possible.

How did the list get so long? There's a point around the turn of the century when countries realise oh, this Internet thing is important, and you get a rash of "cryptographic nationalism" where a government that likes to think of itself as important notices the US government made things like DES and it decides it can do that too, resulting in vanity ciphers which are less analysed, less popular, and basically have no reason to exist. TLS 1.2 doesn't explicitly discourage you from supporting these, and OpenSSL is a stamp collector's library, if a cipher exists and you can implement it, why not? So you get this unwieldy list nobody actually needs, and then apps present it to non-experts like "Pick from this list or be doomed".

You can get advice on how to configure web servers from Mozilla, https://ssl-config.mozilla.org/

For other types of TLS server you should seek advice from experts in the appropriate protocol.

In TLS 1.3 they learned from this mistake and discourage cryptographic nationalism, you have a literal handful of choices, all of them are currently believed to be safe, some are likely poor (slow, power hungry) choices on simple hardware, some would be weak if large quantum computers were actually cheap and readily available, rather than horribly expensive and non-existent. So "I don't care, whatever" is thus safe in TLS 1.3 although presumably OpenSSL still expects you to actually pick anyway.
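
To make the "I don't care, whatever" path concrete, here's what it looks like in Python's ssl module (just an illustration of the principle, not a recommendation for any particular stack): the default context ships curated settings, and the TLS 1.3 suites aren't even selectable through set_ciphers(), so there's nothing for a non-expert to get wrong there.

    import ssl

    ctx = ssl.create_default_context()  # maintained defaults, no cipher picking needed
    print(ctx.minimum_version)          # TLSVersion.TLSv1_2 on current Python releases

    # Inspect what you got; each entry names the suite and the protocol it applies to.
    for suite in ctx.get_ciphers()[:5]:
        print(suite["name"], suite["protocol"])

    # Explicit cipher strings only affect TLS 1.2 and below;
    # the TLS 1.3 suites stay at the library's defaults.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")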


That’s the answer I wanted but didn’t dare to ask. Thank you!


Hm, thanks for your reply. I'm already obsessing over the SSL test in your 2nd link, having set everything to achieve an A+, including DNS CAA and all the recommended technical mitigations. But your point regarding accessibility is an important one, I think. Even though my page doesn't generate a lot of traffic that's something I wouldn't want to shut out.


> people on corporate networks, people in China

By supporting older protocols on sites, we're enabling this kind of abuse to run rampant. Maybe it's best to just cut them off.


I believe that ends up hurting the people who are already affected by these policies even more.

In the end they just get replaced with something worse.

e.g. I think that Google offering censored search in China is a lesser evil than Baidu, which is not only censored, but likely also tracks and reports all your searches to the government


RFC7457[1] and Wikipedia[2] offer an overview of many of the attacks on older versions of TLS. Some of those attacks have been mitigated to varying extents in implementations of the affected versions. TLSv1.3 is meant to resolve completely as many of these issues as possible.

When using older protocol versions, it can be complicated to validate that the TLS implementation you are using has the necessary mitigations in place. It can be complicated to correctly configure TLS to minimize the effects of known attacks. Doing that properly requires a fair amount of research, threat modelling, and risk assessment both for yourself and on behalf of anyone accessing your website or service.

IME, TLSv1.2 is still a big chunk of legitimate web traffic. It has been steadily dropping since TLSv1.3 was standardized, and TLSv1.3 is now the majority by a wide margin from what I can see. I wouldn't be surprised to see some websites and services still needing to support TLSv1.2 for at least a couple more years, depending on their target audience.

[1] https://tools.ietf.org/html/rfc7457

[2] https://en.wikipedia.org/wiki/Transport_Layer_Security#Attac...


Most of those attacks require SSLv2 or cooperation from the client.


When I looked into this over 3 years ago, 99%+ of user agents were capable of TLS1.2. I disabled everything below TLS1.2 on all public-facing httpd with no negative consequences. I would not recommend going TLS1.3-only for quite some time yet, because of people who are very slow to update their clients.


I recently did the opposite and re-enabled 1.0 and 1.1

I wrote some thoughts on why here: https://blog.nyman.re/2021/02/07/usability-security.html but in short, it's several orders of magnitude more likely that someone will want to check out my blog using an old device than that someone will try to exploit vulnerabilities in the old protocols.

Google.com still allows TLS1.0/1.1 https://www.ssllabs.com/ssltest/analyze.html?d=google.com&s=...


Yeah no, that's where I'd draw the line. I have one nginx instance serving my blog and various resources via proxy_pass (Nextcloud, Grafana, Icinga, Kibana etc.). For the sake of keeping things maintainable, there's one wildcard cert for my domain and one global SSL configuration. I'd rather shut out people on Windows XP than enable < TLS 1.2 globally.


Everyone has different priorities of course. But note that as long as you use a modern client TLS_FALLBACK_SCSV will ensure you won't be at risk of downgrade attacks or similar. Without that I also don't think I would run it.


I personally did it recently and it was a breath of fresh air not having to worry about cipher suites or DH params or other garbage as TLS1.3's defaults are secure.

Although note that I did see a drop of ~2-3% in traffic after that. I don't directly make any money off these websites (just my personal blog and similar) and most of my viewers are likely to be technical (i.e. not using IE, which doesn't support 1.3), so I decided the sacrifice was worth it for another small reduction in needed sysadmin thought and work.


Based on the numbers in the other comments, you risk losing ~ 10% of the readers. It is up to you to decide if you want to do it or not.


Pretty much anyone on the wrong end of middleboxes. A lot of "security" folks and vendors are addicted to MITM of secure network connections. You'll likely find anyone working behind a corporate firewall will be unable to access your site, since most of these folks are determined to break/block 1.3.


I MITM my own connections too, to do content filtering/adblocking/etc., although in my case the proxy upgrades lower versions to TLS 1.2.


Is TLS 1.3 not rolled out to Windows 10 yet? It rolled out to insider builds last autumn, but has it already arrived for the masses?


Some interesting draft names there.. 'draft-moriarty-tls-oldversions-diediedie'


Authors: Kathleen Moriarty, Stephen Farrell

It seems it's the last name of one of the authors and not just the arch-nemesis of Sherlock Holmes. ;)


As far as I can tell, the 'diediedie' naming is historical; for example drafts for RFC 8758 Deprecating RC4 in Secure Shell (SSH) were named draft-luis140219-curdle-rc4-die-die-die (https://tools.ietf.org/html/draft-ietf-curdle-rc4-die-die-di...) and similarly for RFC 7568 Deprecating Secure Sockets Layer Version 3.0 which had drafts named draft-ietf-tls-sslv3-diediedie (https://tools.ietf.org/html/draft-ietf-tls-sslv3-diediedie-0...).

The mailing list archives also say that someone raised a concern at a meeting, which is why the later drafts were named -deprecate. (https://www.mail-archive.com/tls@ietf.org/msg09563.html)


I wonder if the origin of diediedie is a geek reference to the Usenet newsgroup alt.wesley.crusher.die.die.die, which was created not long after the premiere of Star Trek: The Next Generation in 1987.


And let's not overlook the uncredited contributions from Dieter D. Dietrich.


Maybe now it's finally time for Discord to update their auto-updater. Last I checked, you still have to enable TLS 1.1 to get updates. https://twitter.com/discord/status/958508910536790016


In a similar vein, 2021 is the first time that I've been able to update TurboTax without having to resort to re-enabling TLS 1.0 and 1.1


According to [1] 68% of websites still support TLS 1.0 as of 2019. Unfortunately, the pace of deprecation is slow.

1. https://hostingtribunal.com/blog/ssl-stats/


There is a difference between "supporting old TLS" and "requiring old TLS". Neither are great, but one is terrible.


"Deprecate" means "to discourage use of", and it doesn't mean "to end support or to desupport". But it's the most misused word in tech that I've seen in the past 25 years.

So I'm not actually sure if this RFC is using it in the correct way, or the incorrect way.



> Deprecation of these versions is intended to assist developers as additional justification to no longer support older (D)TLS versions and to migrate to a minimum of (D)TLS 1.2.

Seems to me like this document is trying to be a tool for developers to support telling decision makers that using those versions is discouraged.


Right. One of the things it does is update existing RFCs which are otherwise unrelated, except that they say you must use TLS (fine) and that TLS 1.0 is Mandatory To Implement.

Without this RFC, inevitably a bureaucrat in some organisation is going to argue that the shiny 2021 project to implement Protocol X must offer the horribly obsolete TLS 1.0 or a 3DES ciphersuite because it says so in this document written in 2010 and surely if that wasn't important it wouldn't say that.

This RFC means you get to say no, see, that document you're pointing at has been updated by this newer document which says I can use TLS 1.2 and AES-128 instead as I was planning to before this stupid Zoom call I'm in now.


The IETF is a standards body. It doesn't provide support for implementations, so I'm not sure how it could withdraw what it doesn't provide in the first place.

Regardless, I think you're splitting hairs over the definition.


What sort of "support" do you imagine the IETF should be offering, and to whom?

   The mission of the IETF is to produce high quality, relevant
   technical and engineering documents that influence the way people
   design, use, and manage the Internet in such a way as to make the
   Internet work better.  These documents include protocol standards,
   best current practices, and informational documents of various kinds.
https://datatracker.ietf.org/doc/rfc3935/

This RFC is labeled "Best Current Practice".

What else is "discouraging use of" if not publishing a document that asserts that the best current practice is to not use it?

If you want the TLS 1.0 protocol specification, they still provide it in the same place as before, https://datatracker.ietf.org/doc/rfc2246/


The intent seems clear, though: "TLS 1.0 MUST NOT be used" "TLS 1.1 MUST NOT be used"


It's interesting that you're both literally incorrect from the perspective of the English language, but also incorrect with regards to the goal of the announcement. Kudos!


I'm literally correct. Look up "to deprecate" in the dictionary. In APIs like Java it's clear that you can still use the API, but its use is discouraged because it will soon become unsupported.

A deprecated API is one that you are no longer recommended to use, due to changes in the API. While deprecated classes, methods, and fields are still implemented, they may be removed in future implementations, so you should not use them in new code, and if possible rewrite old code not to use them.

https://docs.oracle.com/javase/8/docs/technotes/guides/javad...


Are you under the impression it's now impossible to use TLS 1.0? What exactly do you think would happen? The web security police come and arrest you?


Yeah, I mean, you look it up? lol someone even linked it elsewhere.

And the article is about recommending against its use.


Maybe the IETF could set an example by using TLS 1.3

(Just a suggestion, but hey, at least they use DNSSEC)


According to [1], IPv6 is only at about 33% penetration, and that is for a backwards-compatible 25-year-old standard. In contrast, TLS 1.3 was supported by approx. 14% of websites in 2019 [2]. So in comparison to IPv6, TLS 1.3 is being adopted quite rapidly.

1. https://www.google.com/intl/en/ipv6/statistics.html

2. https://hostingtribunal.com/blog/ssl-stats/


TLS 1.3 was designed to be easy to deploy, and the deployment was tested before the standard was finalized.

IPv6 not so much.

Also, TLS 1.3 is an application protocol, so you only need the server and client to support it. You don't need an OS change for either server or client; although if you rely on TLS libraries shipped with the OS, you would. You also don't need network support, although if your network is particularly hostile, it could cause issues.


Because it requires only the ends to adopt, whereas IPv6 requires cooperation from e.g., my ISP.


They're some of the only people that use DNSSEC; DNSSEC is a dead letter.

It's useful to compare the uptake of TLS 1.3 (quite widespread; will within a few years be required for conformance; took just a couple years) to that of DNSSEC (it's been decades).


That depends on the TLD. For the American TLDs (.com, .net, etc.) you're right, there's barely any uptake. Several other TLDs (.cz, .no, .nl, .se, for example) have over half of their registered domains using DNSSEC. That's much more than TLS before the Let's Encrypt move happened.

The problem is also with cloud providers. Amazon Route 53 only announced support a few months ago [1], even though the standard has existed since before Route 53 came into existence.

Perhaps it's time for browsers to consider DNSSEC when determining the security of a website, similar to how TLS and CORS have been added to protect web resources. The uptake would improve quickly if people weren't lazy or lackluster about their DNS security.

[1]: https://aws.amazon.com/about-aws/whats-new/2020/12/announcin...


Browsers implemented DNSSEC years ago, and then struck their support. A lot of the things DNSSEC proponents believe it's good at don't actually hold up in the real world; for instance, you can't practically deploy DANE without creating a scheme where the DNS simply becomes another CA that has to be trusted alongside all the others, and one you can't revoke when misissuance occurs.

It's funny to watch as the "serious" efforts to get some semblance of DANE working all involve some variant of stapling to bypass the actual DNS. DNSSEC is a weird, clunky, 1990s PKI that has been trying desperately for decades to find some reason to exist, even if that reason has nothing to do with the DNS.

A thing to pay attention to with European DNSSEC adoption is that it tends to happen at the registrar, automatically, without customer opt-in. The registrar controls the customer zone keys. That's security theater.


The actual DNS server still needs to implement DNSSEC, the registrar won't do that by themselves. Still, as an end user, I don't care who implements it, all I know is that the records I received are intact and have not been tampered with by any intermediary parties. This makes it a lot easier to trust my DNS provider.

Yes, DNS becomes a CA in schemes such as DANE, but a CA that the domain administrator controls. I don't see any problem with that. We've been spoiled by Let's Encrypt by now, but free, easy TLS certificates and management for even small businesses are a very recent thing.

I much prefer the decentralised nature of DANE over current CAs, even from parties like Let's Encrypt. LE is still an American company, and the USA has proven to be anything but transparent and friendly to its allies when it comes to using its power over digital infrastructure for its own gain. I am 100% sure that if LE received a national security letter instructing them to generate a certificate for a certain domain, they would comply, just like any CA would, before that CA would collapse as soon as anyone found out. My bank's website security depends on nobody on the other side of the Atlantic getting any funky ideas.

The decentralised nature of DANE makes it a nice system because worst case scenario, some TLDs do not get signatures during the next key rollover. This would be immediately obvious to any observer, so actions like these cannot be done in secret.


The problem isn't simply that the DNS becomes a CA. It's that it can't replace existing CAs: the browser still has to support the WebPKI. So what you have in effect is another CA, a 1001st if you will, and one that the browsers can't revoke.


> where the DNS simply becomes another CA that has to be trusted alongside all the others

Isn't DNS already implicitly trusted?


As a user, one can bypass root servers and TLD servers, and caches, and go straight to the designated authoritative server(s) to get RR data. That is the most secure way to obtain DNS data, IMO. No recursion. Ideally the authoritative servers should support encryption of DNS data in transit, e.g., per packet encryption via DNSCurve.

Control over RR data belongs to the person publishing it and she is free to distribute it however she likes, e.g., using ICANN DNS to list the IP address of her authoritative DNS server(s). If she chooses ICANN DNS, the ICANN-approved TLD registry and the entity that controls ICANN DNS root servers have no control over the content of the RR data (cf. the domainname), e.g., they cannot declare a RR as "false", "invalid", "revoked", etc. DNSSEC gives them this control.

DNSSEC as used in practice requires that the RR data be signed ("approved") by a third party, e.g., a TLD registry, whose own RR data must in turn be signed ("approved") by another third party, e.g., ICANN.

DNSSEC was designed for people who get their DNS data second hand, e.g., from a remote cache run by an ISP or some other third party, including "open resolvers" such as Google. That method carries some additional risk, e.g., RR data in the cache may be manipulated, as compared with retrieving the RR data directly, with no third party ISP/Google middleman, from its source: the authoritative server(s) listed by the person publishing her RR data. There are existing solutions for encrypting individual DNS packets (i.e., not streams, not TLS) travelling from the authoritative server to the user, such as DNSCurve. Alas, few authoritative servers are encrypting DNS packets.

The DNS data travelling between authoritative servers and third party DNS providers running caches is, generally, not secured. I never see any discussion of this online.


> As a user, one can bypass root servers and TLD servers, and caches, and go straight to the designated authoritative server(s) to get RR data.

In practice everyone finds the designated authoritative servers through the root servers and TLD servers. They're trusted already.


The addresses for those rarely change, some are unlikely to change in a lifetime. They have the same addresses year after year. You are free to keep looking them up every day, but, with very few exceptions, they will still be the same. Don't take my word for it, try storing these addresses for the major TLDs. With very few exceptions, they will work for decades. I know the addresses for a root and a .com server from memory. They do not change.


Not in the same way. Again: you can't revoke the DNS.


Any CA that issues domain validated certs can be duped by a fraudulent DNS record. All it takes is one CA failing to promptly revoke and blacklist the offending domain and the attacker is off to the races.


What threats does DNSSEC defend against?

How does it change the "security" of a site I might visit?


DNS cache poisoning, bit flips, and other such attacks. It does nothing for your site's confidentiality or availability, but it ensures the validity of the DNS records.

Scummy DNS providers like commercial ISPs tend to hijack certain DNS queries for their own gain. With DNSSEC they cannot do so.

Furthermore, with the slow but soon irreversible switch over to DNS over HTTPS, with all the DNS centralisation it brings, knowing for sure that nobody tampered with DNS records is a necessity.

Additionally, DNS records are also used in technologies like encrypted eSNI headers. If you are able to supply bad or old eSNI data to another site's cache, that might cause slowdowns or even breakages when eSNI or its successor eventually rolls out. Alternative PKI solutions also store TLS public keys in DNS records, so those are essential to get right as well.


DNSSEC does not protect your scummy commercial ISP DNS server from lying to you. Your stub resolver trusts that scummy ISP DNS server to validate DNSSEC for it.

People have a lot of funny ideas about what problems DNSSEC solves. This is demonstrably not one of them.


> DNS cache poisoning, bit flips, and other such attacks. It does nothing for your site's confidentiality or availability, but it ensures the validity of the DNS records.

Sure, but at what cost? It's hilariously easy to misconfigure, effectively removing the entire domain for anyone who decides to enforce DNSSEC validation, and DNSSEC gives you no choice in whom to trust (Verisign owns .com I think, state governments tend to own a lot of the national TLDs, etc.)

If your threat is a poisoned DNS cache then... don't use a DNS cache?

> Scummy DNS providers like commercial ISPs tend to hijack certain DNS queries for their own gain. With DNSSEC they cannot do so.

The problem here is that as an end user, I want to resolve gmail.com to the correct endpoint. If my upstream DNS cache is giving me the wrong results, DNSSEC doesn't help me - NXDOMAIN'ing a valid DNS request leaves me in the same position as without DNSSEC. I still can't get my email!

In the face of a malicious upstream handing out the wrong DNS records, a far more sensible solution is to bypass that upstream, optionally encrypting the transport so that they can't fiddle with the results in flight. This actually gets me what I need: the correct DNS record for my request.

> Furthermore, with the slow but soon irreversible switch over to DNS over HTTPS, with all the DNS centralisation it brings, knowing for sure that nobody tampered with DNS records is a necessity.

> Additionally, DNS records are also used in technologies like encrypted eSNI headers. If you are able to supply bad or old eSNI data to another site's cache, that might cause slowdowns or even breakages when eSNI or its successor eventually rolls out. Alternative PKI solutions also store TLS public keys in DNS records, so those are essential to get right as well.

If you care strongly about the integrity of your DNS records, there are better solutions. DNS-over-TLS/HTTPS should in theory let you trust only the owner of the authoritative DNS server, if you can make a direct connection to it to ask questions. DNSSEC forces me to trust a whole load of intermediaries forever. I'm not sure why the latter is better.

Maybe it's better to stop designing things for which the security rests on being able to fully trust DNS? Gmail (and everything else etc.) seems to work quite well right now without depending on DNS being 100% trustworthy, mostly because if I do somehow get an evil-controlled DNS record back, my browser's going to start sounding alarm bells when the TLS cert doesn't work.


If you use third-party DNS, e.g. an ISP, Google, NextDNS, etc., the third party provides you with access to a DNS cache. The cache is shared with (many, many) other users. If the other users manipulate the data in the cache, or the DNS data in transit from authoritative DNS servers to the cache is manipulated, DNSSEC can help to detect that. It is like downloading a file over an insecure network and then checking it against known checksums/hashes/fingerprints, except that with DNSSEC the owner of the file does not create the hashes; she delegates control over the signing process to third parties.


DNSSEC can only help you detect manipulation in zones that are signed. Very few popular zones sign.

Further, DNSSEC does nothing to protect data along the network path between you and 8.8.8.8 (or NextDNS or whatever). It collapses down to a single "trust me I checked" bit in the DNS header.

If you're worried about someone tampering with 8.8.8.8, a more reasonable approach is to run your own recursive resolver off-net and use DoH to query it.


Personally, I avoid recursive resolvers altogether. Much faster. I can gather the DNS data I need for the zone files myself. Not reliant on Google, NextDNS or the next third party DNS provider.


I can't tell if you're serious. Are you saying that you scrape authorities directly to, like, build a fake zone file for GOOGLE.COM or whatever, and then consult that? Say more about how this system works?


I am serious. I have DNS data that I save. I serve zone files over the loopback. This is simpler and faster than any cache. It works for me.

Not that anyone except me should care, but this beats any recursive resolver in terms of speed, produces less DNS traffic over the network, can eliminate ads/tracking, increases "privacy"^1 and allows for resilience against other people's DNS problems. I have seen people complain they could not reach some website because of some DNS problem; meanwhile I had no problems, because I am not making DNS queries over the network every time I re-visit the site.

Why do this? The story is that many years ago I was running a copy of the root zone served over the loopback. Gradually I started adding A records to it for sites I used often. IIRC I think .mil used to have some A RRs in their zone file that were for websites not nameservers. This technique reduces the number of queries needed to resolve those names. Faster lookups. That might be where I got the idea, otherwise I was just experimenting. I got obsessed with faster lookups, fewer queries. Over the years the local DNS setup I use became more complex and I started running multiple authoritative servers over the loopback, but the technique is essentially the same. Gather DNS data in bulk, save it and serve it. It works for me.

1. One property of this is that DNS lookups are not done at the same time as the user accesses the resource, e.g., a website or whatever. Thus, FWIW, a network observer cannot make easy inferences about what resources a user is accessing, e.g., via a shared IP at a CDN, simply by looking at DNS queries.

For recreational web use, like reading HN and sites posted here, I use a text-only browser and a forward proxy that strips unnecessary headers and does not send SNI except when required. Obsessive minimalist. For serious, non-recreational web use I still have to use a bloated graphical browser and ISP DNS just like everyone else.

It is probably inviting DVs and negative, snarky comments to share this in this thread, but there you go.


Why are you advocating for DNSSEC? You don't even use the DNS.


I never advocate for DNSSEC. Around 2008 when cache poisoning and consequently DNSSEC gained renewed attention, I stopped using shared caches. I think the costs of DNSSEC outweigh the benefits.


I misread, thanks/sorry!


I want to use DNSSEC on my domains, but either my registrar or the TLD doesn't support it. I'll move them to another registrar.


Mind you, DNSSEC is also far more complex than TLS. A misconfiguration of DNSSEC can leave your entire domain unresponsive, so the implementation risks are far higher.

I don't really think this should be a reason NOT to implement DNSSEC, but it is still a reason many companies and organisations see it as risky.


There are a lot of reasons not to deploy DNSSEC. That it's much more dangerous than TLS is just one of them. It would be better for the Internet if we scrapped DNSSEC and started from scratch with a service model that acknowledges what the Internet of 2020 (or, if we can't be that ambitious, 2005) actually looks like.


DNSSEC provides zero benefit to confidentiality. DNS over TLS and DNS over HTTPS both do. Additionally, the TLS CA system with Certificate Transparency and the HTTPS Everywhere addon is better without DANE.



