Admittedly, I don't know the specifics e.g. if there was obvious negligence. However, this seems like a major fine for a security vulnerability. The statement given in the article is:
> Personal data has a real value so organisations have a legal duty to ensure its security, just like they would do with any other asset. If that doesn’t happen, we will not hesitate to take strong action when necessary to protect the rights of the public
Certainly, calling out poor security practices is a good thing; however, this level of scrutiny is going to require a major shift in mentality for a large portion of the industry. "Move fast and break things" just isn't going to cut it anymore.
> Certainly, calling out poor security practices is a good thing; however, this level of scrutiny is going to require a major shift in mentality for a large portion of the industry. "Move fast and break things" just isn't going to cut it anymore.
When 339 million guest records are involved, anything less shouldn't cut it anymore.
"Move fast and break things" was never going to cut it. Anyone who has the word "engineer" in their job title, certified or not, should expect better of themselves.
That maxim only works out when you’ve got such a solid CI/CD setup that you can roll out to a tiny percentage of users and automatically roll back and notify if they error out.
Move fast and break things all you want if you have such an amazing safety net. But at that point you’re not breaking anything, you’re just letting the real world fill in the gaps in your test suite with an insignificant impact.
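That safety net can be sketched in a few lines. This is a hedged illustration, not any particular CI/CD product's API; `serve_new`, `serve_old`, and the thresholds are all invented for the example:

```python
import random

def canary_deploy(serve_new, serve_old, requests, fraction=0.01,
                  error_budget=0.005):
    """Minimal canary-rollout sketch: route a small fraction of traffic
    to the new build, fail individual errors over to the old build so
    users see no breakage, and automatically roll back if the canary's
    error rate exceeds a budget. Names and thresholds are illustrative."""
    canary_total = canary_errors = 0
    for req in requests:
        if random.random() < fraction:
            canary_total += 1
            try:
                serve_new(req)
            except Exception:
                canary_errors += 1
                serve_old(req)  # fail over so the user is unaffected
            # Wait for a minimum sample before judging the error rate.
            if canary_total >= 100 and canary_errors / canary_total > error_budget:
                return "rolled back"  # and notify the on-call engineer
        else:
            serve_old(req)
    return "promoted"
```

With `fraction=1.0` every request hits the canary, which makes the behaviour easy to test deterministically.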
Still, not good enough if you’re dealing with money, medical stuff, vehicles, etc. You’re getting into life or death territory there.
The monetary value of the median hotel guest record to the sort of nefarious powers that be that scoop up hotel guest databases is ~$0. It may even be a negative amount.
The monetary value of a particular person of interest's guest record may be more than that, but those people are few and far between.
If you disagree, please let me know who I can get in touch with, who will pay me $4/guest record. I'll quit my day job, and set up camp outside a hotel lobby, recording guests and license plates.
The benefit to the nefarious buyer is not the same as the damage to the hotel guest. In fact, there's no reason to even expect them to be similar in value. And the relevant measure of central tendency is the mean, not the median, since we're looking at the sum of all the damages from all the records being copied. Keeping one gay Egyptian dude from getting outed and killed justifies protecting hundreds or thousands of boring middle-class vacationers from Ohio.
The personal, quantifiable damage to me of every one of my guest records in my past five years getting stolen by some shadowy cabal is zero. That's because I haven't done anything particularly interesting. Neither have most people. If you're ready to wire me a few hundred dollars, I'll be happy to share records of my and my wife's stays with you. It'll be a waste of your money, but who am I to judge?
A few people have done some very interesting things. For them, those numbers are substantially higher than zero.
You haven't disproven my point. The quantifiable, median damage is zero. This is relevant, because this sub-thread tries to quantify the harm by taking the fine, dividing it by the number of records, producing $3, and positing that the leak has done more than that amount of harm. Because, obviously, if any harm comes, the harm is over three dollars.
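For concreteness, this is the arithmetic the sub-thread is doing; the figures are the approximate ones quoted elsewhere in the thread, not official numbers:

```python
fine_gbp = 99_000_000      # approximate ICO fine cited in the thread
eu_records = 30_000_000    # EU guest records cited in the thread
all_records = 339_000_000  # total guest records cited upthread

# Fine divided by record count gives the implied per-record figure.
print(f"per EU record: £{fine_gbp / eu_records:.2f}")   # ≈ £3.30
print(f"per record:    £{fine_gbp / all_records:.2f}")  # ≈ £0.29
```

Note the figure swings by an order of magnitude depending on which record count you divide by, which is part of why the arithmetic proves little about actual harm.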
Well, yes. It is. If you can measure the harm, of course it's more than three dollars.
For most people, though, the harm is immeasurable. Pointing out that the median harm is zero exposes the absurdity of the original argument.
So you're saying that, because the median damage is zero, the mean could be arbitrarily small, and in particular might be less than US$3? That's probably true, but your original argument about the purchase price of the data doesn't help to support that.
There's also a problem we haven't brought up in this thread, which is that the main damage from privacy invasion is not to people individually, but to human society as a whole. Increasing the price of doing anything particularly interesting can condemn an entire society to domination by mediocrity.
> So you're saying that, because the median damage is zero, the mean could be arbitrarily small, and in particular might be less than US$3?
I'm saying we have no idea, and we're not going to get there by doing arithmetic. But, if you ask me, I do believe (based on nothing more than a worthless napkin calculation) that it's more likely to be between $0.3 and $3 than it is to be between $3 and $30. Remember, the recipient of this data is incredibly unlikely to cause maximum possible worst-case damage to even the interesting people on the list. Most likely, they just want to steal credit card numbers.
> There's also a problem we haven't brought up in this thread, which is that the main damage from privacy invasion is not to people individually, but to human society as a whole. Increasing the price of doing anything particularly interesting can condemn an entire society to domination by mediocrity.
The nice thing about the GDPR is that even if it doesn't address the damage of a particular leak, it's a swift kick in the ass of the IT departments of other companies, who are really keen to not end up on the receiving end of the next fine.
> "Move fast and break things" just isn't going to cut it anymore.
That motivational quote should never have made it out into corporate communication. It was embraced by everyone[1] because it seemed edgy and, hey, that company is successful in spite of itself.
That should never have been embraced by anyone, especially outside specific contexts.
[1]Just about everyone embraced it because it was a kind of punk attitude in the face of stodgy enterprise development schedules. Everyone wanted to seem cool, so they went whole hog.
I agree with you that for the heavy majority of companies (esp. medical or self-driving car ones), "move fast and break things" is not a good idea at all. However, I cannot agree with the absolutist nature of your statement.
>that company is successful in spite of itself
That's a pretty strong opinion. Some would argue that FB is successful because of stuff like that.
A bit of a sidenote, but after all, I remember how badly Zuck was clowned everywhere (including by FB shareholders and people here on HN) immediately after the purchase of Instagram and WhatsApp. People were saying that FB is dying and Zuck is trying to buy companies that are hyped but irrelevant to the core business out of desperation. These days, it is a pretty universal sentiment that those acquisitions were some of the smartest purchase decisions he could have made at the time.
> A bit of a sidenote, but after all, I remember how badly Zuck was clowned everywhere (including by FB shareholders and people here on HN) immediately after the purchase of Instagram and WhatsApp. People were saying that FB is dying and Zuck is trying to buy companies that are hyped but irrelevant to the core business out of desperation.
Most of HN didn't have the insider information that Facebook did. This information was acquired through surreptitious data logging by a VPN "security" app [0]. This is the same app which was controversially repackaged as a "research" app and then forced off the App Store [1].
How has the WhatsApp acquisition helped the company? I assume they harvest metadata for their social graphs but my Facebook feed is still as irrelevant to me now as it was five years ago.
I've seen this with medical devices. The number of callouts these days for service technicians for newer equipment is ridiculous, to the point that you can't even get service technicians to come out any more! And yes, this does mean certain services being knocked out for days at a time.
I made medical imaging software for diagnosis and analysis. We would get a regulatory proctology exam every release. These guys would find arithmetic errors in papers referenced to support algorithms used in the software, and ask us about them. (Incidentally, that's where I learned never to trust "peer reviewed" papers or dissertations in computer science. Always check it, in detail, yourself. CS guys have to be some of the worst mathematicians I've seen during my time in scientific research.)
But yeah, the FDA were that anal in our case. If they were not that anal for you, I'm not sure why that is. Maybe the radiation? We were putting out software that potentially dealt with, essentially, irradiating humans. Software that controlled radioactive devices. Etc. No one wants the equivalent of a dirty bomb going off in some small town somewhere. So I don't blame the FDA for the whole "every i dotted, every t crossed" treatment. It was actually reassuring. At least in our case.
>Just about everyone embraced it because it was a kind of punk attitude in the face of stodgy enterprise development schedules. Everyone wanted to seem cool, so they went whole hog.
I think it's 10,000% more likely to be due to so much post-2008 ZIRP money floating around, so why give a shit about quality or consequences? As long as advertising and copyright industries are able to establish themselves as the fundamental arbiters of all content, anything else that happens to us or the companies is broken eggs for the proverbial omelet.
The people embracing MFaBT exhibit nothing resembling "punk attitude."
It was a different era, when not literally everything was being done online yet. We're now (possibly) shifting into an era where the most profitable modus operandi will need to be different; that doesn't mean it was a bad choice at the time.
The idea is to make hoarding data "just because you can" a liability. If you think it has value you should also protect it likewise. If you think data has less value than "move fast and break things" then simply delete the data ASAP after you don't need it anymore.
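A minimal sketch of that "delete it when you no longer need it" policy, with invented field names (`purpose`, `retain_until`) rather than any real schema:

```python
import time

def purge_expired(records, now=None):
    """Illustrative retention sweep: each record carries the purpose it
    was collected for and a retention deadline; a scheduled job drops
    anything past due so it can never leak. Returns the surviving
    records and a count of what was purged."""
    now = time.time() if now is None else now
    kept = [r for r in records if r["retain_until"] > now]
    purged = len(records) - len(kept)
    return kept, purged
```

In practice this would run as a scheduled job, and the deadlines would come from documented retention periods rather than hard-coded timestamps.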
I'd go further and say anyone running serious enterprises that have important security implications should not be taking it lightly. Nobody should "run an ISP for fun". Cowboys like that, or people who describe themselves as "hackers", have no place in the industry.
> "Move fast and break things" just isn't going to cut it anymore.
Yes, the cutting edge can cut both ways, but alas it is kinda needed in IT security. It is so easy for a security update to come out, yet the process in some companies rightly dictates that it be tested so that the update does not break anything else. So you get a delay. Even in that small window, the security issue could be exploited, and the powers that be will see that you didn't apply the update instantly and you're lambasted - even for following best practices and going by the book of testing. After all, any update could have an impact upon the applications and infrastructure in ways above and beyond the issue the update is addressing. We have all encountered such issues as well.
So the phrase "move fast and break things" has a younger brother now "move slow and be broken".
It would be nice if the powers that be (governments) audited companies' IT security proactively instead of being event-driven - after the horse has bolted. I would love to see companies fined for security issues before such issues are exploited and abused. After all, the customer always pays. Until that happens, the same mentalities in how security is treated as a priority will carry on playing out. But the other old IT saying of "if it works, don't touch it", whilst true, is equally the source of so many security issues that it just can not carry on being leaned upon.
>ensure its security, just like they would do with any other asset.
This motivation is asinine. Record leaks don't do anything to harm the original records and it's not like Marriott's secret sauce is a list of customer records. Companies don't protect assets from things that don't cause the assets harm.
Personal data does not have value to the company in that regard. It's a liability more than an asset.
The definition of an asset is something that provides probable future economic value to the firm. Personal data seems to fall under this category to me.
Hopefully this fine will help wake the industry up to the fact that this asset comes with considerable regulatory costs, and that those costs may well be more than the value of the asset.
Hopefully that will stop them hoarding personal data.
If personal data doesn't have value to the company and it's not an asset, then the company shouldn't be collecting and storing it. Making storing personal data a liability is good if the goal is to prevent companies from storing personal data unless there's a strong business need (and corresponding level of care) involved.
They should be subject to independent security audits for the next X number of years in addition to the fine. It would only take a few companies having to go through this before budgets stop getting cut in IT.
Move fast and break things used to be the way that bridges were designed (i.e., build it, test it, see if it breaks, improve it if it does). I don't think anyone would tolerate that as a way to conduct other fields of engineering anymore.
I disagree. At the cutting edge of development, experiments and failures are impossible to avoid without incredible and expensive measures, which, you know, aren't exactly feasible in every case.
> Certainly, calling out poor security practices is a good thing; however, this level of scrutiny is going to require a major shift in mentality for a large portion of the industry.
That's the point. "A major shift in mentality for a large portion of the industry" is basically GDPR's success criteria.
It does if said companies plan on being acquired, which is precisely what happened here:
> It is believed the vulnerability began when the systems of the Starwood hotels group were compromised in 2014. Marriott subsequently acquired Starwood in 2016
Starwood is by far big enough to have the GDPR apply to it independently.
Starwood was well beyond the "move fast and break things" phase of companies. Many companies aren't, and if you're trying to say all startups have to comply with GDPR on day 1, you are wrong.
OK - so basically, if you don't knowingly provide (or envisage providing) goods or services to EU customers.
This is a very different situation from your original statements based on the size and maturity of a company - but I'll concede that the EU's reach clearly only applies to EU interests.
> But, there is an important limitation. Article 30 requires people/businesses processing personal data to keep records of their processing activities and categories and to make those records available upon request. If your business employs fewer than 250 people, you do not have to create these records unless there could be a risk to the ‘rights and freedoms of data subjects’ (including trade secrets or intellectual property rights), the processing is not occasional, or your business processes any ‘special categories’ of data as referred to in Article 9(1) (personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, and genetic data or biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation, or personal data relating to criminal convictions and offences referred to in Article 10.)
The way art 30(5) is drafted though means that the exception applies to basically nobody. What business can, honestly, say that it only occasionally processes personal data? Does it have no employees? Does it only occasionally communicate with people? That hardly seems likely.
Yes, it's probably a drafting defect, but I think one would be very brave indeed to try to rely on it, especially given that establishing an art 30 record is, practically speaking, a prerequisite for being in a position to comply with the rest of GDPR.
Then I guess we're back to letting big companies, who have the resources to do this shit, have Europe, and the small companies, who are too busy building new things, will just grow in other areas before expanding to the EU.
The exemption is only for the record keeping requirement to help keep costs down. Every organisation still needs to meet the data protection regulations.
The way art 30(5) is drafted though means that the exception applies to basically nobody.
Unfortunately, this is a good demonstration of two criticisms made of the GDPR right from the start: the costs of creating new paperwork in the approved format even if it makes no material difference to any actual data processing, and ambiguity about what is required or permitted even in quite fundamental respects.
A £99 million fine is on its way for Marriott after their 2018 breach of 30 million EU citizen guest records, for lack of due diligence over data security.
Those who said EU regulations had no teeth last year might need to readjust their expectations. This follows on from BA's large fine a few days ago.
I never really understood the arguments over the GDPR having teeth or not; it's an EU mandate, so whatever teeth it has depends on what each member country decides it will have. That's just how confederations work.
I think the accusation that the GDPR has no teeth is not about the magnitude of fines. The GDPR promised great enhancements to privacy and freedom in the text of the legislation (opt-in data processing consent not conditional on service; right to data erasure; data portability). In practice, enforcement has been focused on punishing poor security, rather than lack of privacy or freedom.
That's fair, although even if they still only focus on breaches I think it might improve privacy indirectly: the database that's hardest to hack is the one that doesn't exist. If companies get in the mindset that storing client data is a big liability they might decide that archiving everything and anything forever might not be such a clever decision after all.
This is, as I understand it, the goal. I mentioned this in a different comment, but getting companies to think of unsecured consumer data as a liability is absolutely key to getting them to take privacy seriously. Companies need to consciously decide if the risk of accruing this data is worth the downside. Pre-gdpr there was functionally no downside at all.
If companies get in the mindset that storing client data is a big liability they might decide that archiving everything and anything forever might not be such a clever decision after all.
I think this would be a stronger argument if other EU laws didn't actively require the collection and long-term retention of some of the most important personal information, including identity and financial details, for other purposes such as VAT audits. Such obligations often preclude otherwise reasonable data management strategies like encrypting all personal data with a per-account key that can be easily deleted and thus render everything connected with a given account permanently inaccessible in the event of an erasure request etc.
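The per-account-key strategy described above (sometimes called crypto-shredding) can be sketched as follows. This is a toy illustration with a throwaway XOR keystream, not production cryptography; a real system would use an AEAD cipher such as AES-GCM and a proper key management service:

```python
import hashlib
import secrets

class CryptoShredStore:
    """Toy crypto-shredding sketch: each account's data is encrypted
    under its own key, so honouring an erasure request means deleting
    one small key, even if ciphertext lingers in backups."""

    def __init__(self):
        self._keys = {}  # account_id -> per-account key
        self._data = {}  # account_id -> ciphertext

    def _keystream(self, key, n):
        # Derive n bytes of keystream from the key (illustration only).
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def store(self, account_id, plaintext):
        key = self._keys.setdefault(account_id, secrets.token_bytes(32))
        ks = self._keystream(key, len(plaintext))
        self._data[account_id] = bytes(a ^ b for a, b in zip(plaintext, ks))

    def read(self, account_id):
        key = self._keys[account_id]  # KeyError once the key is shredded
        ct = self._data[account_id]
        ks = self._keystream(key, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

    def erase_account(self, account_id):
        # The erasure request: destroy the key, not the ciphertext.
        del self._keys[account_id]
```

The point of the parent comment is that legally mandated retention (e.g. for VAT audits) can make exactly this pattern unusable, because some of the encrypted data must remain recoverable.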
Instead, data controllers are in principle supposed to keep track of every possible purpose for which personal data could be processed, even those originating in theoretical legal requirements that are rarely if ever used in practice, as they applied at the time each item of personal data was first collected and at all times since; to retain each individual data point for as long as any purpose for which it might be needed continues to apply; and then to delete that data promptly once its final purpose is no longer relevant.
I suggest that few if any data controllers are actually doing this. Instead I suspect almost everyone who is trying in good faith to comply with the GDPR is using sufficiently generic purposes and blanket provisions to simplify their position to a manageable level of complexity. (How many privacy policies have you read since GDPR came out that actually stated a concrete time period for retaining each category of personal data being processed, and how many have you seen that rely on abstract wording about keeping the data for as long as any stated purpose applies or something similar?) No doubt many other organisations are simply not complying with the GDPR rules about retention and deletion at all, perhaps through ignorance, or perhaps as a deliberate choice that they hope to get away with.
>opt-in data processing consent not conditional on service
I see this violated so often with full-screen popups requiring you to disable adblock or exit private mode. The EU really needs to fine these companies into oblivion. I should not have to create an account just to look and see if they have a disable-tracking toggle (and they usually don't, so the only way to prevent tracking is private mode/adblock).
> In practice, enforcement has been focused on punishing poor security, rather than lack of privacy or freedom.
That's not true in the slightest. One bank (ING) in The Netherlands implemented an opt-out for analyzing customers' data. Quite a bit of outrage. A PR spokesperson said: "all is fine, this is all good, we follow the GDPR".
Local privacy authority sent a general letter informing that such behaviour is very likely not according to the GDPR. ING quickly backtracked. Other banks said they'd obviously comply with GDPR.
No fine was given; it was not needed. I don't particularly care if companies are fined. I do care that they take my privacy into account. The latter is what (slowly) is happening.
There have been 50,000+ complaints filed with various data protection authorities since G-Day (1), Google recently got a hefty fine for a non-conformant consent implementation (2) and a lot more are rolling in country by country as bureaucracy grinds (3).
There have been a few fines, including the large 50M EUR fine against Google. Despite this, compliance has fallen short of many people's expectations. Being presented with consent dialogues where it is not possible or practical to decline consent is still commonplace.
Hopefully the rate of enforcement will further increase and compliance attitudes will improve.
Despite this, compliance has fallen short of many people's expectations.
Indeed. Are the following two statements true or false?
1. Major data hoarders, including online giants like Facebook and Google and traditional data brokers like credit reference agencies, are still hoovering up huge amounts of personal data and processing it in ways that some or all of the data subjects don't understand and to which they can't therefore have given their informed consent (assuming they are aware of any processing and have given any consent at all).
2. Governments and organisations with ties to governments are still hoovering up huge amounts of personal data allegedly for purposes involving security with little meaningful oversight and little need to demonstrate effectiveness or proportionality.
Until statements like these are false, data protection and privacy law isn't really protecting people from the biggest threats anyway, and the main positive effect of the GDPR is just to give the regulators the ability to impose fines for things that were mostly prohibited anyway but now on a scale that is significant to large businesses. That in itself is probably no bad thing, but if that's all it achieves then it's far from clear that it's been worth the huge implementation costs and the uncertainty it has brought even to honest organisations.
The data security side of GDPR is strong, imo. Even before these fines, the disclosure requirement is "teeth." It's already having a major positive impact.
The user consent parts of GDPR are, imo, not good. Any wins, though, are better than nothing.
I don't have a strong/mature alternative of my own.
But, I feel that assuming a pseudo-contractual relationship between websites and users is a euphemism. I feel the same way about user agreements. The South Park parody of Apple's sums it up for me.
Not even judges read it. It can't form the basis of a consent model. I think bans on certain types of tracking would be preferable to "consent."
If we are determined to have explicit contracts, we need to be realistic and take incentives into account. If the website controls the UI of the "opt in," language of the contract and such... they have a high level of control over outcomes... and these are highly manipulated to secure convenient outcomes. UI plays a far bigger role in determining the "consent" outcomes than user preferences.
So, if we are determined to go down this route, "consent management" needs to be "open" to 3rd parties chosen by users and allow central management, user selected defaults and 3rd party recommendations/defaults.
If user consent actually reflected informed user preferences, FB's tracking pixel would be disabled for >90% of users. What possible benefit is this to users?
The overlap between people who are paranoid that FB is listening to their conversations via the phone mic and users who have "consented" to advertiser cookies on a bunch of sites tells me that this model for consent is fundamentally broken.
On the flip side, if you (or someone) doesn't mind...
I can tell from the downvotes (also on similar comments) that this is an unpopular opinion. Anyone care to defend GDPR "consent" as it exists currently? I don't mean the aspirational language of the law, I mean consent in the wild under GDPR today.
I personally doubt that the common click-here-to-agree-to-everything implementations in the wild comply with the law, and I expect they are under scrutiny and the hammer will come down on them at some point.
> The user consent parts of GDPR are, imo, not good.
Indeed, this seems to be really lacking as far as I've seen.
Compliance with article 7 section 4 in particular (provision of service must not be conditional on consent for processing of personal data not necessary for provision of that service) is blatantly ignored by many actors, with a message of "accept our tracking or we won't let you see our content". Others pretend to be in compliance by having an opt-out which never completes, or other dark patterns.
Agree, this needs to be seriously enforced. Web browsing is a PITA now with all these popups that only have an "I Accept" box with no way to dismiss the notification otherwise. Usually I just close the tab.
Since so many companies seem to be outsourcing these popups, possibly because they're enrolled in some ad-network cooperations, I wonder if it would be possible to build a browser extension that would automatically submit these forms with everything set to "no consent".
Thanks, I saw your reply to my question below as well.
I wonder: what can we (as a community) do to change this? Can we:
- make applications to the information commissioner about the most egregious breaches
- create enough nuisance value that these entities need to acknowledge and address this issue
I'm sure there are enough privacy minded individuals on HN that if we all agreed on a concerted plan of action we could make an impact?
Lawyer activist Max Schrems created NOYB for this purpose: https://noyb.eu/ Supporting / joining them is probably the best way to make a concerted impact.
Compliance with article 7 section 4 in particular (provision of service must not be conditional on consent for processing of personal data not necessary for provision of that service) is blatantly ignored by many actors
In fairness, that's because it's a legislative over-reach that the EU wants to enforce extra-territorially, but a lot of businesses outside the EU's jurisdiction have declined to undermine their entire business model because a foreign government decided they should.
It is reasonable to say that people should have a choice about things like being tracked, and that such tracking may only be used with the subject's informed consent. This protects the privacy of those who value it by default.
However, IMHO it is not reasonable to say that organisations that fundamentally rely on such data processing to be financially viable must then continue to provide service to users who choose not to participate. That's an entirely one-sided deal, and it's logically unsustainable to require businesses to operate on that basis. You'd never tell a bricks and mortar grocery store that if someone came in but didn't have any money with them then the store still had to let them take a chocolate bar and eat it, and I don't see the effects of the GDPR in this respect as any better than that.
> rely on such data processing to be financially viable
I'm not convinced this is true. Television, billboards, and print newspapers demonstrate that advertising can still exist and support a business model without invading the privacy of users. I'm also not convinced that user-targeted advertising is significantly more effective in practice than contextual advertising.
You might be right in at least some cases, but I don't think this is really the point. Requiring businesses to provide their services to arbitrary numbers of people who are offering them nothing in return is a dangerous and ultimately unsustainable strategy.
Maybe if we were faced with some severe and immediate threat to public health and the business were a manufacturer of essential medication, there might be some overriding public interest justification for adopting that sort of position in law, but I think it would need to be a crisis of that sort of level to justify such direct intervention. I don't think the situation we're talking about here is anywhere close to justifying it on similar grounds.
Nor do I see any obvious evidence that forcing the matter through regulatory action will motivate a shift to more desirable funding models for the affected businesses. First some good alternative models would have to be identified, and if we'd done that already, we probably wouldn't be having this conversation at all.
> Requiring businesses to provide their services to arbitrary numbers of people who are offering them nothing in return is a dangerous and ultimately unsustainable strategy.
The advertising industry has been subject to regulation for a long time, especially in European countries. There is precedent for regulating the quantity and quality of advertising, banning classes of adverts (e.g. tobacco), and the use of personal data for direct marketing (telemarketing). Ultimately they have the option of providing their services within the bounds of the law, or not providing them at all. I don't see why web advertising should be exceptionally unregulated.
The GDPR does not prohibit advertising, and as a website operator you would be within your rights to block access to adblocker users. Advertising without the use of personal data is demonstrably successful (TV; radio; cinema; magazines; sponsorship). Personal targeting of adverts may improve revenue, but so would allowing tobacco advertising, increasing the length of advert breaks on TV, or numerous other regulated practices.
The move towards increased regulation of software engineering, especially with regards to security, makes me wonder if we will see state/provincial/national engineering regulatory authorities move in on the field.
You can't, for instance, call yourself a structural engineer unless you are registered with the regulatory authority as such. Nor can you offer engineering services to the public without registration. And you are bound by a code of ethics, subject to a formal complaint process, undergo somewhat regular practice reviews, and can face disciplinary actions when you fail to comply.
Right now, it seems like software engineering is the wild west, complete with tales of fortune to be had attracting code-slingin' cowboys without regard for the public's safety. I predict the lawman is coming for you.
I worry that said lawmen are going to be more bureaucratic and deliver even less safety. Moving slowly isn't going to cut it the way it does in structural engineering. The metaphor breaks down because the architect isn't responsible if people take sound-dampened sledgehammers and saws to vital support beams in the middle of the night.
The software engineer is.
Especially given that the regulatory cultures these lawmen come from think that just banning cryptography is a remotely reasonable idea rather than barking mad.
Standards may make some sense, but they should be deliberately open-ended, like "encrypt customer data sufficiently or don't gather it", not "use single DES to encrypt - if you use large-key RSA you will be in deep shit in spite of it being better".
yeah, it's going to be an interesting argument between the bits of government that want to read your mail, and the bits of government that want to ensure that you protect your customers' mail from being read.
The last time we listened to the lawman he was an idiot who compromised user safety. I can just imagine a world where I have to use Dual_EC_DRBG or follow requirements to rotate passwords. Obviously I, and most software engineers, were smart enough not to do these things, but when you'll send us to jail for not doing the mandated thing, people will do the mandated thing even if they know better.
This has potentially far more far-reaching implications than the BA fine.
Yes, Marriott failed to conduct proper due diligence. Yes, they should have been able to detect the breach earlier and block the attackers' access. And yes, the attackers managed to stay in their system for a very, very long time.
But this breach was conducted by a nation state adversary. An attacker with unlimited resources and the best technical knowledge on the planet. If inability to protect yourself from such a threat becomes an offense, I am not sure the net effect is positive.
If Marriott can prove that they anonymised the data wherever possible, deleted all the data that they didn't absolutely need, took all possible steps to prevent the attack, and took all the required steps once they detected the breach, then they won't be facing huge fines. The fines aren't for "having a breach"; the fines are for mismanagement of their customers' data and negligence around the breach.
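To make the "anonymised wherever possible" point concrete, here's a minimal sketch of keyed pseudonymisation using only Python's standard library. The key handling and field names are purely illustrative, not anything Marriott actually did:

```python
import hmac
import hashlib

# Hypothetical secret, kept outside the database (e.g. in a secrets
# manager); with keyed hashing, a leaked table alone can't be brute-forced
# back to email addresses the way a plain unsalted hash could be.
PSEUDONYM_KEY = b"example-secret-key"

def pseudonymise(email: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym."""
    digest = hmac.new(PSEUDONYM_KEY, email.lower().encode(), hashlib.sha256)
    return digest.hexdigest()

# Analytics can still join records on the pseudonym without ever
# seeing the underlying email address.
record = {"guest": pseudonymise("guest@example.com"), "nights": 3}
```

The same input always maps to the same pseudonym, so joins and deduplication still work; but a breach of the table alone discloses no direct identifiers, assuming the key is stored separately from the data.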
It's not the inability to protect yourself that's the offence, it's not doing the right thing in the event of a breach that's the offence.
Where are you seeing a nation state adversary? I wasn't able to find any mention of this.
It's _annoyingly_ common for those who are subject to fairly ordinary attacks to blame a powerful adversary based on very thin evidence, because "The state of Russia attacked my business" sounds like you couldn't be expected to resist whereas "A bored 14 year old attacked my business" sounds like you're useless.
This is one of those where the nature of the attack and inference together make the case.
The attackers were inside the system for several years. Marriott is a high-end hotel chain, whose establishments are used by state level travelers. Having ongoing access to politicians' and high-ranking corporate executives' itineraries, and especially their hotel room bookings, is an incredible avenue for espionage.
A financially motivated attacker would have tried to exfiltrate otherwise valuable data. But if the main target is the travel information data itself, and if the scope does not particularly expand over time, I am going to call it advanced espionage.
In addition to that, while we know data was taken, it hasn't shown up in any of the customary haunts for stolen information. Someone got access, and squirreled away the data. Just some hacker looking to make some money from identity theft would be selling that left and right.
Marriott has time travel technology? The breach happened before the acquisition by Marriott. Their failure to discover it beforehand and take sufficient action subsequently is what got them in trouble.
Put this on its head: nation states are attacking our businesses. That makes it absolutely essential to make sure our country's businesses are properly prepared for the threat. Think of it like building an apartment block in San Francisco without adequate earthquake protection - not only is it bad engineering, it's illegal.
The requirement of GDPR is not "don't get hacked", it's "take reasonable steps not to get hacked". If you take reasonable security measures and still get breached, you aren't liable.
Even worse when it's a nation state getting hold of records where their political enemies are spending their hotel nights, and who else was there same night.
Ever since I stayed at a Marriott hotel (over five years ago), I have received (and still do) telemarketing calls offering me a new deal on Marriott or another of their subsidiaries. It's always a new agency with a new voice and a different pitch. I wonder how many times Marriott has sold my data to third-party agencies over the years?
Perhaps enough times that this fine from the EU is still considered insignificant (apparently about 30p per record). If you consider your details to be worth 5-10p per agency, then if they've sold your details at least three times they've already made their money back (including this fine from the EU).
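For what it's worth, the back-of-the-envelope maths checks out, using the figures as stated in the thread (a roughly £99m fine over 339 million records):

```python
fine_gbp = 99_000_000   # approximate ICO fine, per the thread
records = 339_000_000   # guest records involved in the breach

fine_per_record_pence = fine_gbp / records * 100
print(f"{fine_per_record_pence:.0f}p per record")  # prints "29p per record"

# If a record resells for 5-10p per agency, three sales recoup 15-30p;
# at the top of that range the fine is roughly a wash.
resale_range_pence = (3 * 5, 3 * 10)
```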
Security is tricky for many companies since security is still somewhat complicated compared to the level of talent you can hire, and the amount of software needed to run an enterprise.
The solution is to have security controls that cross cut entire enterprises and give operators a place to control them, however what we have today is just a jumble of different solutions that consist more of blocking access rather than allowing the business to run securely.
It seems reasonable not to operate a business that you can't operate according to minimum standards. For example, you wouldn't run a construction company without a properly trained builder on staff.
Does "properly trained" include training to build buildings that cannot be brought down or otherwise compromised by sustained targeted attacks using the latest tools available? Most homes can be burnt down with $20 of gas and a lighter; should we consider the builders of those homes to be improperly trained?
Of course not, because that's the company's core competency. A better analogy is running a construction company without quarterly software security audits. Because if that list of clients along with contact info gets leaked, that could be a GDPR violation.
>Security is tricky for many companies since security is still somewhat complicated compared to the level of talent you can hire, and the amount of software needed to run an enterprise.
Still, we see databases with no password exposed on the internet. Maybe it's time not to employ someone with no training at all, or to offer a training program: if your developer needs to use TodaysCoolDb, have them trained on how to use it instead of copy-pasting the hello world from a webpage.
The amount of money you invest in your data security should be proportional to the data you collect, so collecting less will help you, as will investing more in security training and auditing your own systems.
At a higher level, the hard part is for companies to realise data is a liability - one they have been ignoring for too long while reaping the benefits and letting users suffer breach after breach.
> Security is tricky for many companies ... The solution is to have security controls that cross cut entire enterprises and give operators a place to control them
This is definitely an area worth tackling, and one where multiple companies are recently growing. That's not the only issue though.
Security has many levels and the landscape is historically filled with opaque practices and prices. That does not entice people to go forward with security audits or solutions.
We've seen improvements on tooling with SAST but active security is largely pattern-based WAF or at the network level. This has poor signal/noise ratio and can't protect against more advanced attacks that target above the network layer (including HTTP).
Recent developments target more knowledge of the application and the business logic itself. Facebook, for example, has internal tools to detect data leaks. Being inside the application is much more useful because these tools don't just see data flying by; they have knowledge of context and call sites, which makes it possible to flag malicious calls on the spot, protect just in time (even against zero-days, because you key on behaviour), and show the exact line of code (including the call stack) where the vulnerability lies, allowing you to surface and fix it, or even virtually patch it live.
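To illustrate the "inside the application" idea, here's a toy sketch (not Facebook's or any vendor's actual tooling): wrap data-access calls so that each one knows its own call site and can flag anomalous bulk reads with the exact line of application code that triggered them:

```python
import traceback
from functools import wraps

BULK_THRESHOLD = 1000  # illustrative: flag any read larger than this
alerts = []

def monitored(query_fn):
    """Wrap a data-access function so each call records its call site."""
    @wraps(query_fn)
    def wrapper(*args, **kwargs):
        rows = query_fn(*args, **kwargs)
        if len(rows) > BULK_THRESHOLD:
            # In-app vantage point: we know exactly which file and line
            # of application code triggered the suspicious bulk read.
            caller = traceback.extract_stack()[-2]
            alerts.append((caller.filename, caller.lineno, len(rows)))
        return rows
    return wrapper

@monitored
def fetch_guests(limit):
    return [{"id": i} for i in range(limit)]  # stand-in for a real query

fetch_guests(10)       # normal usage, no alert
fetch_guests(50_000)   # bulk read, recorded with its call site
```

A network-level WAF would only see opaque traffic here; the in-app hook sees the call stack and the size of the result set, which is exactly the context the comment above is describing.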
> however what we have today is just a jumble of different solutions that consist more of blocking access rather than allowing the business to run securely.
The goal of ASMs is precisely to solve that: those tools are a kind of APM, like New Relic or Datadog, only geared towards security. Big names like Facebook or Google have their own internal tools, but a couple of independent solutions have emerged already, and I think that having those companies around is going to be a shift that will benefit everyone's security in the long run, due to their accessibility and ease of use compared to previously existing solutions.
Of course, here are some Application Security Management platforms: Signal Sciences, Contrast Security, and as you may have guessed, Sqreen, where I work.
(We have a culture of transparency, faith in our product and our vision, and are hell-bent on improving security for everyone because it's desperately needed, so no, I'm not afraid to name competitors.)
Feel free to ask me anything here, on our Intercom support channel, or via email if you're so inclined!
Deliberately not disclosing the breach would likely result in much larger fines.
>If you experience a personal data breach you need to consider whether this poses a risk to people. You need to consider the likelihood and severity of the risk to people’s rights and freedoms, following the breach. When you’ve made this assessment, if it’s likely there will be a risk then you must notify the ICO;
> The GDPR introduces a duty on all organisations to report certain types of personal data breach to the relevant supervisory authority. You must do this within 72 hours of becoming aware of the breach, where feasible.
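That 72-hour clock is simple but easy to fumble across time zones and weekends; a minimal sketch of tracking it, with the timestamps made up purely for illustration:

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(became_aware: datetime) -> datetime:
    """GDPR Art. 33: notify the supervisory authority within 72 hours
    of becoming aware of the breach, where feasible. The clock runs in
    wall time, weekends included."""
    return became_aware + timedelta(hours=72)

# Track awareness in UTC so the deadline is unambiguous across offices.
aware = datetime(2018, 11, 19, 9, 30, tzinfo=timezone.utc)  # made-up time
deadline = notification_deadline(aware)  # 2018-11-22 09:30 UTC
```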
There is no definite scale for the fines though; I'm pretty sure negotiating with the hackers will be cheaper 100% of the time. This is interesting from a free-market perspective: the undefined cost of the fines would lead to the discovery of the true price of data leaks by negotiating with thieves.
> There is no definite scale for the fines though; I'm pretty sure negotiating with the hackers will be cheaper 100% of the time
This will not be true if any paid-off breach is ever discovered as then you'll have paid the hackers and the fine, which will be larger because you've deliberately kept it from the ICO/similar.
What negotiating though? If your huge customer, orders and/or payments database is exploited and dumped and then used for identity/CC fraud, there is no negotiating with hackers. You will be found out eventually due to the proliferation of sold information and data dumps in the black market, which are then analysed by researchers. Then you will be fined possibly twice instead of once or not at all, since you also failed to report the breach.
I fail to think of relevant common situations where negotiating with the hackers would be an option in breaches relating to GDPR.
Do you mean they should pay off the hackers or ignore that a data breach ever happened? If it's the former, then there are some obvious risks there and it's an expensive gamble. If it's the latter, then it's likely discoverable without their involvement; one of their millions of customers will enter user+mariot@gmail.com as their address and register it with https://haveibeenpwned.com/.
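The plus-addressing trick can be systematised: give every vendor its own tag so a leaked address identifies exactly who leaked it. A quick sketch (Gmail-style `+` tags; support varies by provider, and the names here are just examples):

```python
def tagged_address(mailbox: str, domain: str, vendor: str) -> str:
    """Build a per-vendor address like user+marriott@gmail.com so that
    spam or breach dumps containing it pinpoint which signup leaked."""
    tag = "".join(c for c in vendor.lower() if c.isalnum())
    return f"{mailbox}+{tag}@{domain}"

address = tagged_address("user", "gmail.com", "Marriott")
# If this address later shows up in a breach dump or unsolicited mail,
# the tag tells you exactly which vendor was the source.
```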
The lack of discovery/disclosure also covered an acquisition, companies not disclosing breaches during acquisitions is something I bet the SEC would be interested in.
It's really strange how little detail they're providing here. How are other UK businesses looking at this supposed to know what level of security is expected of them?
For context, Marriott's 2018 revenue was $20 Billion...so this fine is 0.5% - not insignificant, but not as high as the maximum 4% which is possible under GDPR.
I think you mean profit, not revenue. 99 million is enough for them to alter their behavior and for heads to roll without putting them at a strategic disadvantage in the market.
Fines need to be significant relative to revenue, not profit. Otherwise, they become a “cost of doing business.” A punishment isn’t a punishment unless it hurts.
I'm not sure what your question is getting at, but if I owned a business, a fine proportional to the profit that I would otherwise receive would hurt equally for a low margin business and a high margin business.
That’s the point: if you can’t profit off my personal data while simultaneously protecting it as a “low margin,” your business doesn’t need to exist. Fines relative to revenue hurt every business a lot, which is how it should be. A massive data breach should be cause for going out of business, not “oh, we’re taking X% of your profits this year, but we totally trust you to do better next year.”
I don't agree with your point at all. A high margin business should then be able to get away with massive data breaches compared to a low margin business going bust? It's not about fairness, it's about actually achieving your goals.
I am ok with it being below maximum for now.
They should start to increase the fines little by little, and the big corps will catch on and start to treat security with more respect.
If they make no attempt to improve their security I suspect the next incident will cost them $800m. Be interesting to see who will insure them against that.
When the parent comment said "they could set up an insurance fund", I believe they didn't mean a literal contract with an insurance company, but a straight up savings fund set up by Marriott to be used in the future specifically for expenses like that.
On the bright side, the overall regulatory effects of this decision are much larger, as it sets a precedent for future acquisitions (by any company, not just Marriott).
You have to look at it from the perspective of the CEO and C Suite. For them they likely don't really care about harm done to whoever had their data compromised, but what they do care about is their bonus. They likely have detailed bonus tiers and if this fine reduces their bonus and incentives then it is likely that change will occur. In many / most large companies, C-Suite pay and the bonus structure is the major driver of all corporate decisions.
For an example, just look at the recent US corp. tax cut. One time bonus to employees and then repurchase of company stock to boost the share price and in turn boost executive level rewards.
Three weeks' profit is a lot. I'd have a hard time imagining my boss telling me that for the next three weeks, the profit generated by every employee will be dedicated solely to paying off a fine, and that being acceptable.
Well obviously, doing 75 in a 70 means you recklessly endanger the lives of yourself and those around you without proper reason (being an ambulance is a good reason).
Endangering the lives of others for no damn reason other than wanting to be home 3 seconds earlier > losing customer records
I wonder if this increasing influx of fines will eventually become part of how GDP is calculated. It's certainly more and more of a revenue stream for governments these days, with issues left to fester and then some law coming into play with the ability to capitalise (fine) upon the situation.
But when you fine a company, the customers end up paying: the same customers who were the victims of whatever made the fine necessary in the first place. Sadly, I don't see a way of fixing that impasse.
The extortion opportunities are looking very sweet. Breach a company and charge them a keep-quiet fee. Let's call it a consulting fee. If it's cheaper than a GDPR fine, the company will likely pay it.
That brings to mind a hypothetical dysfunctional yet oddly workable system of deputization akin to ADA compliance lawyers: registered hackers able to hack and cite for violations to receive a portion of the fine, perhaps with a bonus for fixing on the way out. Also very cyberpunk, in a satirical "Snow Crash" way.
Not saying that we should adopt such a system, potentially terrible idea but it amusingly is better than other "do something" legislation in that it would actually help the target problem even if there are clear downsides.
I can’t think of a single GDPR fine I couldn’t get 1000% behind. Haven’t companies had ages to adapt anyway?? I mean dear lord it’s literally like they won’t do anything until they see someone in their space get fined. Absolute corporate misconduct
I wonder if there's any chance something like the GDPR could make it in to law in the US. It's long past time for the US government to take serious actions against companies that violate user privacy and security.
Won't happen in this climate. Or any climate. This current clan is too business-friendly.
I could see someone like Elizabeth Warren or Ron Wyden getting behind it, but not really the rest of the pack (it's not a popular enough issue when you weigh it against things like student loan forgiveness, or universal healthcare).
I do wish it would become law here. It would make my professional life a bit harder (mostly on the security front, we already steadfastly refuse to "monetize the data" or even give it to any third party, to the point we've rejected those questions from investors) but it's definitely the right thing to do since the benefit for consumers is much more important.
Key part of the article is that the data breach occurred 2 years PRIOR to the acquisition. How can due diligence possibly discover this? Seems like government overreach to me.
A detailed security audit of their systems should have uncovered areas where their security was lacking and they should have undertaken steps to remedy the defects. It’s something you should do during an acquisition anyways. If their software is crap, the price of acquisition should decrease by some amount in anticipation of the work required to meet data protection laws. Not performing the audit means not only are you likely to pay too much for the acquired company, but it also opens you up to liability as was the case here.
My understanding, and I could be wrong, is that the breach started 2 years prior to acquisition, and continued to be exploited until sometime in 2018 - several years after the acquisition.
And, regardless, if a company violates the GDPR then quickly sells it itself, should the relevant data protection commission just drop it? After all, they sold the company!
It says the vulnerability began two years prior to acquisition but does not say that it was a one time event and the rest of the article would not make a lot of sense if that was the case.
Marriott should be fined that amount just for the shittiness of its website. They are a company whose management is out to sea: on autopilot, on holiday, and it shows after the merger with SPG. The UI and customer experience (and what they say about the brand) are getting terrible.