The ICO's new guidelines need to focus on aggregation, not anonymisation
The Information Commissioner's Office (ICO) is between a rock and several hard places. Long castigated for being a soft touch, over the past year it has become increasingly visible through the fines it imposes. These range from individual confidentiality breaches to large-scale text-message spamming, with penalties commensurate with the scale of the breach.
While nobody would doubt the need for regulation - and enforcement - in matters of data protection, the process followed has come under scrutiny, particularly around reporting of breaches. A recent Computer Weekly article asks whether the approach of requesting breaches to be reported, then punishing the organisations involved, might lead to those in charge choosing to keep quiet, which would be a backward step.
The report cites the example of Brighton and Sussex University Hospitals NHS Trust, which was fined £325,000 after reporting a breach. While this was subsequently reduced to £260,000 on appeal, it still represents a substantial financial disincentive to report any breach. While we don't have plea bargaining in the UK, there is nonetheless a well-established principle of understanding the potential consequences before making a full disclosure.
While this debate will run on, it is about to get more complicated still - with the ICO's decision not to require anonymised data to be 100% watertight in terms of protecting privacy. In other words, if anonymised data can still be used to identify a person, the organisation that released it will not be at fault. To quote the ICO: "This will not amount to a disclosure of personal data."
While the position is understandable - in that it is becoming ever harder to prevent privacy breaches caused through data aggregation (when two or more data sets are combined) - it illustrates the kinds of enforcement challenges the ICO will face in the future. An unfortunate spin-off is that enforcement can fall between the stools of several organisations - for example, a business could release anonymised data in the knowledge that another organisation held, if you will, the other half of the picture.
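The aggregation risk described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a linkage attack: all names, postcodes, and records below are invented, and neither dataset alone identifies anyone - but joining them on shared quasi-identifiers (postcode and birth year) re-identifies an individual.

```python
# Hypothetical "anonymised" health records - no names, so apparently safe.
anonymised_health_records = [
    {"postcode": "BN1 3XY", "birth_year": 1975, "diagnosis": "diabetes"},
    {"postcode": "BN2 5AB", "birth_year": 1982, "diagnosis": "asthma"},
]

# A second, seemingly unrelated public dataset - the "other half of the
# picture" (e.g. an electoral-roll-style extract). Also invented.
public_register = [
    {"name": "A. Smith", "postcode": "BN1 3XY", "birth_year": 1975},
    {"name": "B. Jones", "postcode": "BN4 9ZZ", "birth_year": 1990},
]

def link(records, register):
    """Join the two datasets on their shared quasi-identifiers."""
    matches = []
    for record in records:
        for person in register:
            if (record["postcode"], record["birth_year"]) == (
                person["postcode"], person["birth_year"]
            ):
                matches.append(
                    {"name": person["name"], "diagnosis": record["diagnosis"]}
                )
    return matches

# A unique match re-identifies a named individual's diagnosis
# from data that each holder could claim was anonymous.
print(link(anonymised_health_records, public_register))
```

The point of the sketch is that each organisation, looking only at its own dataset, could claim "reasonable confidence" that anonymity was preserved - the breach only materialises when the two halves are combined.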
In theory this shouldn't happen, as the ICO guidance states that an organisation should have "reasonable confidence" that anonymity is preserved. However, this is a woolly phrase which will be difficult to test in the courts. If I have reasonable confidence and you have reasonable confidence, does that result in collective reasonable confidence? What if a new (or previously unknown) data source undermines that confidence once the anonymised data is already published? And what about international data sources and their jurisdictions - should they have reasonable confidence too?
The difficulty of nailing down answers to such questions leaves the door open for the less scrupulous to do the bare minimum of anonymisation, in the knowledge that other organisations can then exploit the data - even, potentially, with their complicity. A specific example might be the release of anonymised patient data to a commercial third party, which some time later spots the opportunity to sell the information on to another company, which can then market insurance products to a subset of the people the data describes.
As we move further into the era of bigger and better data sources, accessible to a widening pool of organisations, we need higher levels of protection against our personal information being exploited. Guidelines that consider only individual organisations, and whether they have assessed their own risks, leave a gaping hole in UK data protection that is simply waiting to be exploited. If the ICO wants to keep its teeth, it needs to focus on the intent of data aggregators, rather than simply on whether original data owners were reasonably confident that they did the right thing - whether or not they feel obliged to report it.