
Here's Article 35: https://www.privacy-regulation.eu/en/article-35-data-protect...

> Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons...

I find it strange that you didn't quote the important part of the text here (high risk to natural persons).

Like, this doesn't apply to almost anyone. This applies to things like facial recognition or automated systems that track people's health data, not the kind of thing one does in most CRUD/tech applications.

If you are subject to this requirement, then you almost certainly have both a legal team, and a data protection officer. Can you give me an example of what you think would be high risk but should not be?

Secondly, you're missing something here. I'm going to assume that you are US-based, based on your theory of regulation.

The US and EU differ wildly on how regulation works. What you would normally do, if your legal team/DPO says that what you're doing is high risk, is write an assessment (which your legal team has already done) and run it by your regulator. They'll either say it works or tell you to do something different; you'll comply and get on with your life.

In any case, the whole process is really, really slow before any fines are administered. As an example, the ECJ basically said recently that data transfers from the EU to the US are illegal because of the NSA. The Irish regulator has just sent a letter to FB asking them to comply. FB are fighting this in court (with some ludicrous arguments), but as of yet their processing has not been impacted at all.

This legal fight has been going on since 2011, and yet FB is still sending data to the US. I'm not sure that fits the model of crazy regulators.

And let's be clear, as of yet there have been no GDPR cases fining companies for anything less than absurd violations. Ad tech still exists in Europe (and it's arguable that it shouldn't), FB and Google have continued on their merry way.

The second part is pretty much: consult your lawyers/DPO and be able to make a case for whatever you're doing. In that sense, it's very similar to HIPAA, which doesn't seem to cause the same level of upset here.

I still stand by my original statement. If you have consent for your data processing, all will be OK (as long as it's not forced consent). If you don't have consent, or a legitimate interest in the data analysis/processing then maybe you shouldn't be doing the analysis or processing?
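In code terms, "have consent for your data processing" mostly means recording which purposes each user has opted into and checking that record before any processing runs. Here's a minimal sketch of that idea; the class, field, and function names are all hypothetical, not from any particular library:

```python
# A minimal sketch (hypothetical names throughout) of gating data
# processing on recorded, purpose-specific consent.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class User:
    user_id: int
    # Purposes this user has explicitly consented to,
    # e.g. {"analytics", "face_recognition"}
    consented_purposes: set = field(default_factory=set)

def process(user: User, purpose: str, action: Callable[["User"], None]) -> bool:
    """Run `action` only if the user consented to this specific purpose.

    Returns True if the processing actually ran, False if it was skipped
    because no consent for that purpose was recorded.
    """
    if purpose not in user.consented_purposes:
        return False  # no consent recorded: skip the processing entirely
    action(user)
    return True
```

The point of keying consent to a specific purpose (rather than a single global flag) is that GDPR consent must be informed and specific; opting into analytics is not opting into facial recognition.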

As an example, Facebook turned off automatic facial recognition in photos in the EU following an outcry. They re-introduced it when GDPR came in, because they asked for consent. You'll note that there have been no court cases on this, because it's totally fine.

tl;dr - get consent for your data processing, make it easy for people to get their data, and GDPR compliance is pretty easy. If you're in an industry where this is difficult/impossible, then lawyer up and be prepared to spend a bunch of money (and probably lose eventually).


