Privacy: The Cloud Model’s Waterloo?

Part 1 – Privacy Ain’t the Same As Security

Most people consider the word privacy solely in the context of cloud deployment models, where a private cloud is one reserved strictly for the use of a specific group of people who share a common affiliation, such as employment by the same company. But it is quickly becoming evident that the broader context of global legal systems and basic human rights is where cloud computing may meet its privacy Waterloo.

The concept of personal privacy is present in all cultures to varying degrees.  Western cultures have developed the expectation of universal individual privacy as a right.  As such, privacy is a legal construct, not a technical one.  It is founded upon the idea that information not publicly observable about a person belongs to that person, and is subject to their will regarding disclosure and subsequent use.

Most legal systems treat privacy as the default: an individual must actively opt out of privacy rights before their data may be used, rather than opt in to receive protection.  This means that, unless the owner of the data has given specific permission for its use in specific ways, that use is unlawful and a violation of the person's privacy rights. Examples include the United States healthcare privacy laws and the European Union's privacy directive.

There are instances to the contrary, where opt-in is the default.  One is the privacy-related right to not be approached without consent.  An example is the US Federal Trade Commission's National Do Not Call Registry, which one must actively join in order to avoid, at least in principle, unwanted marketing telephone calls. This solution also demonstrates the difficulty of balancing the privacy of the individual against the free-speech rights of others.

The details of privacy law vary across jurisdictions, and historically have been somewhat anemic.  Before the printed word, propagation of personal information could only occur by word of mouth, which was highly suspect as mere gossip.  The printed word resulted in more accurate and authoritative communication, but its cost rarely allowed for transmitting personal details outside the realm of celebrity (in which case disclosure was considered part and parcel of one's celebrated position). Under these limitations the laws were rarely tested, and when they were, cases arose infrequently enough to manage one by one. But, as with so many other legal constructs, computer systems and networking have strained the law to its breaking points.

The modern, democratized Internet has enabled the near-instantaneous propagation of data at little expense, by almost anyone, to broad audiences.  In supposed acts of public service, whistleblowers purposefully disclose private information to call attention to illicit activities or behaviors of the data owners: whether their ends justify their means is hotly debated, though the disclosure itself is a clear violation of privacy. Vast databases of personal information collected by governments and corporations are at much greater risk of being copied by unauthorized agents, which most people agree is data theft.  In these cases, it is fairly easy to spot the transgressor and the violation.

But the free flow of information in the cloud computing era brings ambiguity to what once seemed straightforward.  Individuals who volunteer personal information often do not realize just how far their voluntary disclosure may propagate, or how it might be used, especially when combined with information gleaned from other sources.  The hyper-connected Internet allows data from multiple sources to be correlated, creating a more complete picture of an individual than he or she may know, or would otherwise condone.  The data may come from a seemingly innocuous disclosure such as a marketing questionnaire, or from public records which, until today, were simply too difficult to find, much less match up with other personally identifiable information (PII).  Attempts to ensure data anonymity by “scrubbing” it of obvious PII such as name, address, and phone number are increasingly ineffective, as something as simple as a time stamp can tie two data points together and lead to an eventual PII link in other data sets.
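The timestamp-linkage problem can be illustrated with a toy sketch in Python. All names, records, and field values below are invented for the example; the point is only that two datasets, each harmless on its own, can be re-linked through a shared quasi-identifier that survived the scrubbing.

```python
# Dataset A: an "anonymized" survey -- PII scrubbed, timestamps kept.
scrubbed_survey = [
    {"submitted_at": "2014-03-02T09:14:07", "condition": "diabetes"},
    {"submitted_at": "2014-03-02T11:52:33", "condition": "hypertension"},
]

# Dataset B: a separate access log that does carry identity.
access_log = [
    {"timestamp": "2014-03-02T09:14:07", "user": "alice@example.com"},
    {"timestamp": "2014-03-02T13:01:45", "user": "bob@example.com"},
]

# Correlate the two sets on the timestamp quasi-identifier.
log_by_time = {row["timestamp"]: row["user"] for row in access_log}
relinked = [
    {"user": log_by_time[row["submitted_at"]], "condition": row["condition"]}
    for row in scrubbed_survey
    if row["submitted_at"] in log_by_time
]

# One "anonymous" survey record is now tied back to a named individual.
print(relinked)
```

Real-world attacks are more elaborate, of course, but the mechanism is the same: any field precise enough to be nearly unique, such as a timestamp, effectively re-identifies the record once a second dataset containing both that field and real PII becomes available.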

This particular problem is one of the negative aspects of big data analytics, by which vast sources of data, both structured (like database tables) and unstructured (like tweets or blog posts), can be pulled together to find deeper meaning through inferential analysis.  Certainly, big data analytics can discover important trends and help identify solutions to problems by giving us insight in a way we could never have achieved before. The scope, diversity, size, and accessibility of data, combined with cheap, distributed, open source software, have brought this capability to the masses.  But the fact that we can also infer personal information that the owner believes to be private, and has not given us consent to use, must also be dealt with.

As cloud computing continues on the ascendant, and high-profile data breaches fill the news headlines, governments have been forced to revisit their privacy laws and increase protection, specifically for individuals.  In jurisdictions such as the United States, privacy rules are legislated and enforced by sector.  For example, the Health Insurance Portability and Accountability Act (HIPAA) established strict privacy rules for the healthcare sector, building upon previous acts such as the Privacy Act of 1974.  Although the Payment Card Industry (PCI) standard is not a law, it is motivated by laws in many states designed to protect financial privacy and guard against fraud. In the European Union, the Data Protection Directive of 1995 created strict protections for personal data processing, storage, and transmission that apply in all cases.  This directive is expected to be superseded by a much stronger law in coming years.

In an environment where there are legal sanctions and remedies for those who have suffered violations of their privacy, one is wise to exercise caution in collecting, handling, storing, and using PII, regardless of the source.  Cloud technologies make it all too easy to unintentionally break privacy laws, and ignorance is not an acceptable plea in the emerging legal environment.  Clearly, for cloud to be successful, and for us to be successful in applying it to our business problems, we need systematic controls to prevent such abuses.

But is a failure to guarantee privacy in the cloud enough to kill the cloud model, or hobble it to insignificance?  More on this line of thinking in Part 2.
