This was first reported at the beginning of May. They got into the server via that Salt vuln and just ran crypto miners on it; as far as anyone knows, they didn't use the keys to do anything bad. From the original email reporting the problem:
(the attacker doesn't seem to realize that they gained access to the keys and were running other services on the infrastructure)
Nobody knows for sure what they were doing, since the script dropped via the Salt vulnerability downloaded new scripts and executed them every 60 seconds. Just because all anyone found were crypto miners doesn't mean that's all it was doing. There were also multiple groups exploiting this vulnerability at the same time, as evidenced by one of the scripts trying to detect and kill competing scripts dropped by someone else.
I'm not sure. Most crackers don't use port knocking or other obfuscation techniques, so something listening that shouldn't be, or a large amount of bandwidth being consumed, is normally a good indicator.
I think there's great overlap between servers that aren't monitored and servers that are vulnerable to exploits. If you're monitoring your servers for unknown listening ports, chances are you're keeping your systems up to date. Same with bandwidth, although if bandwidth is expensive (eg. cloud providers), your accounting dept might notice.
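If you are doing that kind of monitoring, the listening-port half can be a few lines of Python. This is a minimal sketch assuming a Linux box with `ss` installed; `EXPECTED_PORTS` is a hypothetical baseline for one particular host, not a recommendation:

```python
#!/usr/bin/env python3
"""Minimal listening-port check: flag TCP listeners outside an allowlist.

Assumes Linux with `ss` available; EXPECTED_PORTS is a hypothetical
baseline for this box."""
import subprocess

EXPECTED_PORTS = {22, 80, 443}

def parse_ss_line(line: str) -> int:
    # `ss -tlnH` columns: State Recv-Q Send-Q Local-Addr:Port Peer-Addr:Port
    local = line.split()[3]
    return int(local.rsplit(":", 1)[1])  # works for IPv4 and [::]-style IPv6

def unexpected_ports(ss_output: str) -> list:
    ports = {parse_ss_line(l) for l in ss_output.splitlines() if l.strip()}
    return sorted(ports - EXPECTED_PORTS)

if __name__ == "__main__":
    try:
        out = subprocess.run(["ss", "-tlnH"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # `ss` not installed; nothing to report
    for port in unexpected_ports(out):
        print(f"unexpected listener on port {port}")
```

Run it from cron and alert on any output. The bandwidth half is usually easier to get from your existing metrics (interface counters) than from a script like this.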
> I think there's great overlap between servers that aren't monitored and servers that are vulnerable to exploits.
Agreed. Cryptocurrency mining would be CPU-bound on most servers and go undetected for the same reasons. Some servers and gaming rigs have GPUs, but not many compared to CPUs.
Two approaches to doing nasty things on other people's servers: 1) be very quiet and try not to get caught at all. 2) be very noisy doing something "innocent" like running a bitcoin miner.
If you do not assume that everything bad that could have happened, did happen, then you will have some interesting explaining to do later, with lawyers.
The problem with Transparent Logs / Certificate Transparency is that they don't have the best story with regard to recovering from compromise. We wrote an article comparing The Update Framework (TUF) to CT/TL:
Your blog post is about yet another X Transparency, rather than about Certificate Transparency. Because CT works well people tried to apply the same approach to lots of other problems, most of them obviously dumb.
CT is a narrow solution to a narrow problem. We had that specific problem, and so this is a very good solution. You almost certainly don't have that problem, our solution can't help you, we aren't sorry about that.
As I understand it, this is not a big deal: there are multiple CT servers, with auditors verifying their entries, and the design of the system assumes the possibility of compromised log operators.
Correct. It's an interesting incident, but the consequences for Internet users are pretty much zero.
(Conversely, multiple CT server compromises would be a significant concern, but even then without a compromised Certificate Authority, the impact is almost zero.)
Well, the political consequences are kind of bad. The CA system is built around trust, and not being able to secure their servers, even non-critical ones, degrades that trust.
It was a 0day in a popular open source package; they didn't do anything wrong. Presumably they use stronger defense in depth, as required by the CA guidelines, to prevent software compromises of the actual CA signing machines. There's no need for that level of paranoia on CT log machines.
The CA system is weakest-link security by design, not defense in depth: the company with the least security defines the security of the entire CA system.
It’s time for DANE [1]. We don’t need any more “men in the middle.”
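For anyone unfamiliar: DANE pins TLS keys in DNS via TLSA records, signed with DNSSEC instead of vouched for by a CA. A hypothetical record for HTTPS might look like this (the hash is a placeholder, not a real digest):

```
; Hypothetical TLSA record for HTTPS on example.com (RFC 6698).
; Certificate usage 3 = DANE-EE: match the server's own cert, no CA involved.
; Selector 1 = match the SubjectPublicKeyInfo; matching type 1 = SHA-256.
_443._tcp.example.com. IN TLSA 3 1 1 <sha256-hex-of-server-SPKI>
```

Of course, this is only as strong as DNSSEC validation along the resolver path.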
I'm pretty sure the SCT requirement contained the blast radius of this compromise and prevented anyone from actually being affected by it, so your comment doesn't really follow here.
Agreed; it's entirely incorrect that a CT log operator "defines the security of the entire CA system". A log operator can't issue certs, and their output is already regarded with suspicion by default.
> I'm pretty sure the SCT requirement contained the blast radius of this compromise and prevented anyone from actually being affected by it, so your comment doesn't really follow here.
I think this would be true, except my statement wasn't just about SCTs but about the CA system in general. Furthermore, browsers like Firefox [1] don't even do CT checks.
Despite being supported, relying on browsers to check CT logs was never going to be deployed at scale because, just like revocation lists, it adds a performance hurdle to every request.
Auditors and monitors are the real protection you get from CT logs.
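To make "auditors and monitors" concrete: everything they verify bottoms out in RFC 6962's Merkle Tree Hash. A minimal sketch in Python (real auditors also check consistency and audit proofs against signed tree heads, which is omitted here):

```python
"""RFC 6962 Merkle Tree Hash (section 2.1), the primitive CT auditors build on.

Leaves are hashed with a 0x00 prefix and interior nodes with 0x01, so a log
can't pass an interior node off as a logged certificate."""
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list) -> bytes:
    n = len(entries)
    if n == 0:
        return sha256(b"")          # MTH of the empty tree
    if n == 1:
        return sha256(b"\x00" + entries[0])
    # Split at the largest power of two strictly less than n.
    k = 1
    while k * 2 < n:
        k *= 2
    return sha256(b"\x01" + merkle_root(entries[:k]) + merkle_root(entries[k:]))
```

A monitor recomputes this root from the entries it fetched and compares it to the log's signed tree head; any mismatch is cryptographic evidence of misbehavior.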
You get the same problem with any form of DNS validation because you'll never get every little caching server to validate records.
Just so everyone's clear about this, Handshake is a system backed by a cryptocurrency, "HNS", that is actively traded despite being useful only to speculators. Its backers believe that they'll take over control of the Internet and be rewarded financially for it through their currency.
"Currently [2018], in more than 90% of cases if a user passes DNS queries to a resolver that performs DNSSEC validation of an RSA digital signature the same resolver will also perform DNSSEC validation of ECDSA P-256 digital signatures."
"Since the second quarter of 2019 [to the first quarter of 2020], the population of [strict DNSSEC] validating users has risen from 12% to 22%, close to doubling. At the same time, the proportion of [non-strict DNSSEC validating] users has risen from 5% to 10%."
A better argument against the preposterous claim that DNSSEC is "government-controlled" (and one that doesn't rely on blockchains, which are controversial in their own right), is that with DNSSEC you can choose which government (i.e. ccTLD) your domain is under, or choose one of the many generic TLDs.
The web PKI, by contrast, requires users to trust a bunch of CAs, any one of which could have been compromised by a government and can issue a certificate for your domain. Also, if a government can compromise your DNS records, they can also be granted domain-validated certificates for those domains, so the web PKI is not an improvement.
Anyway, if your threat model is that every single country in the world is willing to subvert the security of their own DNS hierarchy specifically to attack you, then the limitations of DNSSEC are the least of your worries.
With DNSSEC, when it becomes instantly clear that the United States Government controls .COM, Google can simply leave .COM, and tell every Google user in the United States never to use the tainted GOOGLE.COM domain, and all the users will stop using .COM and start using the new Google domain name, because that is how things work in the real world.
Yeah, imagine the hours of productivity that will be lost as people in the US update their link to Google in their IE6 "Favorites" list, because that's how things work in the real world.
In contrast, when it becomes clear that the United States Government controls multiple certificate authorities, that won't be a problem for any websites.
With the CT system I can monitor the CA's issuance, with DNSSEC I can't retroactively monitor if someone has changed the DANE keys and intercepted traffic.
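If you want to try the monitoring side yourself, crt.sh exposes CT log data as JSON (https://crt.sh/?q=example.com&output=json). A sketch that filters the response down to domains you control; the field names ("name_value", "issuer_name") match crt.sh's output at the time of writing, but treat them as assumptions, and `WATCHED` is hypothetical:

```python
"""Filter crt.sh JSON output down to cert entries naming watched domains.

WATCHED is a hypothetical domain set; fetching the JSON (e.g. with urllib)
is left to the caller so this stays self-contained."""
import json

WATCHED = {"example.com"}

def entries_for_watched(crtsh_json: str) -> list:
    """Return (issuer, names) for every entry covering a watched domain."""
    hits = []
    for entry in json.loads(crtsh_json):
        # crt.sh packs all SAN names into one newline-separated string.
        names = set(entry.get("name_value", "").splitlines())
        if any(n == d or n.endswith("." + d) for n in names for d in WATCHED):
            hits.append((entry.get("issuer_name", "?"), sorted(names)))
    return hits
```

Run it periodically and review the issuer list: a cert from a CA you never used is exactly the kind of thing CT exists to surface.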
True, DNSSEC Transparency is not as developed a technology as Certificate Transparency. There has been experimental deployment of such a system[0], but to give higher assurance, a small addition to the DNS data needs to be adopted first[1], which is still going through the IETF process.