I recently wrote a series of posts on LinkedIn exploring privacy and data security harms. I thought I’d share them here, so I am re-posting all four together in one rather long post.
I. PRIVACY AND DATA SECURITY VIOLATIONS: WHAT’S THE HARM?
“It’s just a flesh wound.”
– Monty Python and the Holy Grail
Suppose your personal data is lost, stolen, improperly disclosed, or improperly used. Are you harmed?
Suppose a company violates its privacy policy and improperly shares your data with another company. Does this cause a harm?
In most cases, courts say no. This is the case even when a company is acting negligently or recklessly. No harm, no foul.
Strong Arguments on Both Sides
Some argue that courts are ignoring serious harms caused when data is not properly protected and used.
Yet others view the harm as trivial or non-existent. For example, given the vast number of records compromised in data breaches, the odds that any one instance will result in identity theft or fraud are quite low.
And so much of our data isn’t very embarrassing or sensitive. For example, who really cares what brand of paper towel you prefer?
Most of the time, people don’t even read privacy policies, so what’s the harm if a company violates a privacy policy that a person didn’t even bother to read?
The Need for a Theory of Harm
Courts have struggled greatly with the issue of harms for data violations, and not much progress has been made. We desperately need a better understanding and approach to these harms.
I am going to explore the issue and explain why it is so difficult. Both theoretical and practical considerations are intertwined here, and there is tremendous incoherence in the law as well as fogginess in thinking about the issue of data harms.
I have a lot to say here and will tackle the issue in a series of posts. In this post, I will focus on how courts currently approach privacy/security harm.
The Existing Law of Data Harms
1. Data Breach Harms
Let’s start with data breach harms. There are at least three general bases upon which plaintiffs argue they are injured by a data breach, and courts have generally rejected them.
1. The exposure of their data has caused them emotional distress.
2. The exposure of their data has subjected them to an increased risk of harm from identity theft, fraud, or other injury.
3. The exposure of their data has resulted in their having to expend time and money to prevent future fraud, such as signing up for credit monitoring, contacting credit reporting agencies and placing fraud alerts on their accounts, and so on.
Courts have generally dismissed these arguments. In looking at the law, I see a general theme, which I will refer to as the “visceral and vested approach” to harm. Harms must be visceral – they must involve some dimension of palpable physical injury or financial loss. And harms must be vested – they must have already occurred.
For harms that involve emotional distress, courts are skeptical because people can too easily say they suffered emotional distress. It can be hard to prove or disprove statements that one suffered emotional distress, and these difficulties make courts very uneasy.
For the future risk of harm, courts generally want to see harm that has actually manifested rather than harm that is incubating. Suppose you’re exposed to a virus that silently waits in your bloodstream for 10 years and then suddenly might kill you. Most courts would send you away and tell you to come back after you’ve dropped dead, because then we would know for sure you’re injured. But then, sadly, the statute of limitations will have run out, so it’s too late to sue. Tough luck, the courts will say.
For harms that involve time and money you spend to protect yourself, that’s on your own dime. If you want to collect damages for being harmed, then leave yourself exposed, wait until you’re harmed, and hope that it happens within the statute of limitations. For example, in In re Hannaford Bros. Data Security Breach Litigation (Maine Supreme Court, 2010), the court held that there was “no actual injury” from a data breach even when plaintiffs had to take efforts to protect themselves, because the law “does not recognize the expenditure of time or effort alone as a harm.”
Occasionally, a court recognizes a harm under one of the above theories, but for the most part, the cases are losers. One theory that has gained a small bit of traction is that plaintiffs can prevail if they prove that they paid fees based on promises of security that were broken. But this is in line with the visceral and vested approach because it focuses on money spent. And many people can’t prove that they read the privacy policy or relied on the often vague and general statements made in that policy.
2. Privacy Harms
Privacy harms differ from data breach harms in that privacy harms do not necessarily involve data that was compromised. Instead, they often involve the collection or use of data in ways that plaintiffs didn’t consent to or weren’t notified about.
The law of privacy harms is quite similar to that of data breach harms. Courts also follow the visceral and vested approach. For example, in In re Google, Inc. Cookie Placement Consumer Privacy Litigation (D. Delaware, Oct. 9, 2013), plaintiffs alleged that Google “‘tricked’ their Apple Safari and/or Internet Explorer browsers into accepting cookies, which then allowed defendants to display targeted advertising.” The court held that the plaintiffs couldn’t prove a harm because they couldn’t demonstrate that Google interfered with their ability to “monetize” their personal data.
In another case involving Google, In re Google, Inc. Privacy Policy Litigation (N.D. Cal. Dec. 3, 2013), plaintiffs sued Google for consolidating information from various Google products and services under a single universal privacy policy. The plaintiffs claimed that Google began using and sharing their data in different ways than had been promised in the original privacy policies. The court held that the plaintiffs lacked standing because they failed to allege how Google’s “use of the information deprived the plaintiff of the information’s economic value.”
In Clapper v. Amnesty International, 133 S. Ct. 1138 (2013), the U.S. Supreme Court held that plaintiffs failed to allege a legally cognizable injury when they challenged a provision of the law that permits the government to engage in surveillance of their communications. The plaintiffs claimed that there was an “objectively reasonable likelihood” that their communications would be monitored, and as a result, they had to take “costly and burdensome measures to protect the confidentiality of their international communications.” The Supreme Court concluded that the plaintiffs were speculating and that “allegations of possible future injury are not sufficient” to establish an injury. According to the Court, “fears of hypothetical future harm” cannot justify the countermeasures the plaintiffs took. “Enterprising” litigants could establish an injury “simply by making an expenditure based on a nonparanoid fear.”
There are some cases where courts find privacy harms, but they too are largely consistent with the visceral and vested approach. For example, in In re iPhone Application Litigation (N.D. Cal. Nov. 25, 2013), the plaintiffs alleged that Apple breached promises in its privacy policy to protect their personal data because its operating system readily facilitated the non-consensual collection and use of their data by apps. Judge Koh found that the plaintiffs had made sufficient allegations of harm because of their claim that “the unauthorized transmission of data from their iPhones taxed the phones’ resources by draining the battery and using up storage space and bandwidth.” But the court ultimately concluded that the plaintiffs failed to prove that they read and relied upon the privacy policy.
But Wait . . . Courts Do Sometimes Recognize These Harms
So is it really true that harms must be visceral and vested? Not necessarily. In the most influential privacy law article ever written, Samuel Warren and Louis Brandeis’s The Right to Privacy, 4 Harv. L. Rev. 193 (1890), the authors spent a great deal of time discussing the nature of privacy harms. “[I]n very early times,” they contended, “the law gave a remedy only for physical interference with life and property.” Subsequently, the law expanded to recognize incorporeal injuries; “[f]rom the action of battery grew that of assault. Much later there came a qualified protection of the individual against offensive noises and odors, against dust and smoke, and excessive vibration. The law of nuisance was developed.”
Following this trend, the law extended protection to people’s reputations. Warren and Brandeis pointed out how the law originally protected just physical property but then expanded to cover intellectual property. They were paving the way for the legal recognition of remedies for privacy invasions, which often involve not a physical interference but an “injury to the feelings,” as they described it.
Since the Warren and Brandeis article, the law has come a long way in recognizing emotional distress injuries. Originally, the law didn’t protect emotional harm. But the law later developed an action for intentional infliction of emotional distress as well as for negligent infliction of emotional distress. Courts used to allow emotional distress damages only when accompanied by physical injury, but that rule has eased as the law has developed.
A number of privacy cases succeed, and they often do not follow the visceral and vested approach. The law recognizes harm in defamation cases, for example, and this harm is reputational in nature and in some cases does not involve physical or financial injury.
In many privacy tort cases, plaintiffs win when their nude photos are disseminated or when autopsy or death scene photos of their loved ones are disclosed. Courts don’t seem to question the harm here, even though it isn’t physical or financial. Cases involving embarrassing secrets can also succeed without proof of physical or financial injury.
There are also cases where courts provide plaintiffs with remedies when they are at risk of suffering future harm. For example, in Petriello v. Kalman, 576 A.2d 474 (Conn. 1990), a physician made an error that damaged the plaintiff’s intestines, leaving her with an 8% to 16% chance of suffering a future bowel obstruction. The court concluded that the plaintiff should be compensated for the increased risk of developing the bowel obstruction “to the extent that the future harm is likely to occur.” Courts have also begun allowing people to sue for medical malpractice that results in the loss of an “opportunity to obtain a better degree of recovery.” Lord v. Lovett, 770 A.2d 1103 (N.H. 2001).
Under these risk of future harm cases, damages can include those “directly resulting from the loss of a chance of achieving a more favorable outcome,” as well as damages “for the mental distress from the realization that the patient’s prospects of avoiding adverse past or future harm were tortiously destroyed or reduced,” and damages “for the medical costs of monitoring the condition in order to detect and respond to a recurrence or complications.” Joseph H. King, Jr., “Reduction of Likelihood” Reformulation and Other Retrofitting of the Loss-of-Chance Doctrine, 28 U. Mem. L. Rev. 491, 502 (1998).
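To see how the loss-of-chance arithmetic works, here is a minimal sketch in Python; the $100,000 figure is invented for illustration, and the 12% probability is simply a point within the Petriello range quoted above:

```python
# Illustrative loss-of-chance arithmetic. The dollar figure is hypothetical;
# the probability is a point within the Petriello range discussed above.
full_damages = 100_000   # damages if the bowel obstruction actually occurs
risk_of_harm = 0.12      # e.g., a 12% chance, within the 8%-16% range

# Compensation "to the extent that the future harm is likely to occur":
compensable_now = full_damages * risk_of_harm
print(f"Compensable damages today: ${compensable_now:,.0f}")  # $12,000
```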
In cases involving rights under the First Amendment to the U.S. Constitution, courts have sometimes recognized a harm when people are “chilled” from exercising rights such as free speech or free association. Courts have always been uneasy about recognizing a “chilling effect” and the law wavers here a bit, but the concept is definitely an accepted one in the law.
What Accounts for These Differences?
What accounts for these differences? Why are courts departing from the visceral and vested approach in some circumstances but not others?
With the photos involving nudity and death, or the revelation of deeply embarrassing secrets, judges can readily imagine the harm. It is harder to do so when various bits and pieces of more innocuous data are leaked or disclosed. With the medical cases, the harm is also much easier to understand.
Harms involving non-embarrassing data, however, are quite challenging to understand and also present some difficult practical issues. In my next post, I will explore why.
II. WHY THE LAW OFTEN DOESN’T RECOGNIZE PRIVACY AND DATA SECURITY HARMS
The Collective Harm Problem
One of the challenges with data harms is that they are often created by the aggregation of many dispersed actors over a long period of time. They are akin to a form of pollution where each particular infraction might, in and of itself, not cause much harm, but collectively, the infractions do create harm.
In a recent article, Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013), I likened many privacy harms to bee stings. One bee sting might not do a lot of damage, but thousands of stings can be lethal.
In the movie, Office Space, three friends create a virus to deduct a fraction of a cent on every financial transaction made from their employer’s bank account, with the proceeds being deposited into their own account. The deductions would be so small that nobody would notice them, but over time, they would result in a huge windfall to the schemers. That’s the power of adding up a lot of small things.
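A back-of-the-envelope sketch (all figures invented) shows how quickly those imperceptible slices add up:

```python
# Rough sketch of the "salami slicing" scheme: amounts too small to notice
# per transaction aggregate into a large sum. All figures are invented.
from decimal import Decimal

skim_per_transaction = Decimal("0.004")  # four-tenths of a cent
transactions_per_day = 500_000           # hypothetical daily volume
days = 365

total = skim_per_transaction * transactions_per_day * days
print(f"Unnoticeable individually; ${total:,.2f} in aggregate per year")
# Unnoticeable individually; $730,000.00 in aggregate per year
```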
The problem is that our legal system struggles when it comes to redressing harms caused to one person by a multitude of wrongdoers. A few actors can readily be sued under joint and several liability, but suing thousands is much harder. The law has better mechanisms for when many people are harmed by one wrongdoer, such as class actions, but even here the law has difficulties, as only occasionally do class members get much of a benefit out of these cases.
The Multiplier Problem
The flip side of collective harm is what I call the “multiplier problem,” which affects the companies that cause privacy and data security problems. These days, even a small company can have data on tens of millions of people, so when there’s a leak, millions could be affected, and even a small amount of damages for each person might add up to insanely high liability. Judges are reluctant to recognize harm because it might mean bankrupting a company just to give each person a very tiny amount of compensation.
Generally, we make those who cause wide-scale harm pay for it. If a company builds a dam and it bursts and floods a town, that company must pay. But with a data leak, courts are saying that companies should be off the hook. In essence, they get to use data on millions of people without having to worry about the harm they might cause. This seems quite unfair.
It takes a big entity to build a dam, but a person in a garage can create an app that gathers data on vast numbers of people. Do we want to put a company out of business for a data breach that causes people only a minor harm? When each case is viewed in isolation, it seems quite harsh to annihilate a company for causing tiny harms to many people. Courts say, in the words of the song my 3-year-old son will not stop singing: “Let it go.” But that still leaves the collective harm problem. If we let it go all the time, then we have death by a thousand bee stings (or cuts, whichever you prefer).
The Harm of Leaked or Disclosed Data Depends Upon Context
People often make broad statements that the disclosure of certain data will not be harmful because it is innocuous, but such statements are inaccurate because so much depends upon context.
If you’re on a list of people who prefer Coke to Pepsi, and a company sells that list to another company, are you really harmed by this information? Most people wouldn’t view a preference for Coke versus Pepsi to matter all that much. Suppose the other company starts sending you unsolicited emails based on this information. You don’t like getting these emails, so you unsubscribe from the list. Are you really harmed by this?
But suppose you’re the CEO of Pepsi and the data that you like Coke is leaked to the media. This causes you great embarrassment, and you are forced to resign as CEO. That might really sting (though I’m certain you would have negotiated a great severance package).
Another example: For many people, their home address is innocuous information. But if you’re an abuse victim trying to hide from a dangerous ex-spouse who is stalking you, then the privacy of your home address might be a matter of life or death.
Moreover, the harmfulness of information depends upon the practices of others. Consider the Social Security number (SSN). As I discussed in a previous post, the reason why SSNs are so harmful if disclosed is because organizations use them to authenticate identity – they use them as akin to passwords. It is this misuse of SSNs by organizations that makes SSNs harmful. If SSNs were never misused in this way, leaking or disclosing them wouldn’t cause people harm.
The Uncertain Future: Problems of Proof
Another difficulty with harm is that the harm from privacy and data security violations may occur long after the violation. If data was leaked, an identity theft might occur years later, and a concrete injury might not materialize until after the statute of limitations has run.
Moreover, it is very difficult to trace a particular identity theft or fraud to any one particular data breach. This is because people’s information might be compromised in multiple breaches and in many different ways.
A big complicating factor is that very few identity theft cases result in much of an investigation, trial, or conviction. The facts never get developed sufficiently to figure out where the thief got the data. By one estimate, fewer than 1 in 700 instances of identity theft result in a conviction.
Why are identity theft cases so neglected? Identity theft can occur outside of the locality where a victim lives, and local police aren’t going to fly to some remote island in the Pacific where the identity thief might be living. Police might be less inclined to go after an identity thief if the thief’s victims are not in the police’s jurisdiction. Cases can take a lot of resources, and police have other crimes they want to focus on more.
Without the thief being caught and fessing up about how he or she got the data, it will likely be very hard to link up identity theft or fraud to any one particular data breach.
The Aggregation Effect
With privacy, the full consequences depend not upon isolated pieces of data but upon the aggregation of data and how it is used. This might occur years in the future, and thus it is hard to measure the harm today.
Suppose at Time 1 you visit a website and it gathers some personal data in violation of its privacy policy. You are upset that it gathered data it shouldn’t have, but nothing bad has happened to you yet. At Time 2, ten years from now, that data is combined with a different set of data, and the result of that combination is that you’re denied a loan or placed on the No Fly List. The harm at Time 1 is different from the harm at Time 2. If we knew at Time 1 how the data would eventually be used, we could more appropriately assess the harm from its collection. Without this knowledge at Time 1, it is hard to assess the harm.
Harm is Hard to Handle
Privacy harms are cumulative and collective, making them very difficult to pin down and link to any one particular wrongdoer. They are understandably very hard for our existing legal system to handle.
III. DO PRIVACY VIOLATIONS AND DATA BREACHES CAUSE HARM?
In this post, I want to explore two issues that frequently emerge in privacy and data security cases: (a) the future risk of harm; and (b) individual vs. social harm.
Future Risk of Harm
As I discussed in my first post in this series, the law’s conception of harm focuses on visceral and vested injuries – financial or physical harm that has already occurred. Courts struggle greatly in handling the future risk of harm.
Is a future risk of harm really a harm? I believe that it is. It might be hard to see, but consider the following analogy: We generally don’t perceive air as having mass or weight – but it does, of course. Experiments to prove this to school children typically involve balancing two balloons, one of which is then popped to show the comparison. Let’s look at the harm from a data breach. There may be no visible identity theft or fraud, but let’s try a similar comparison to the balloon experiment. Imagine I own two identical safes. I want to sell them. I list them on eBay:
SAFE 1 FOR SALE
Made of the thickest iron with the most unbreakable lock
SAFE 2 FOR SALE
Made of the thickest iron with the most unbreakable lock. However, the combination to the safe was improperly disclosed and others may know it. Unfortunately, the safe’s combination cannot be reset.
Which safe would get the higher price?
Now we can see it! Safe 2 is no longer as good as Safe 1. It has been harmed by the improper disclosure, and its value has been reduced.
If I remove the locks from the doors of your house, but no burglar or intruder has appeared yet, is there no harm to you? I think there is: you’re clearly worse off.
Or suppose there’s a new virus. The virus isn’t contagious. It has no side effects. But it makes people more vulnerable to getting a painful disease later on that can take a year or more to recover from. Many people will not get this disease, only some. But those with the virus are at greater risk. Now, imagine I secretly inject you with this virus. Are you harmed?
Now, suppose there’s a remedy – another shot that cures the virus. Would you pay for it?
I provide these analogies to demonstrate that although having one’s risk of future harm increased may not be as easy to see with the naked eye, it does put someone in a worse position. People are made more vulnerable; they are put in a weakened and more precarious position. Their risk level is increased. In the immediate present, this situation is undesirable, anxiety-producing, and frustrating.
And how can there be no harm when so many laws mandate the protection of privacy and data security? If violations don’t create harms, then why have all these laws? Why mandate costly compliance measures? In short, if data violations don’t cause harm, then why spend so much money and time in protecting against them?
Individual vs. Social Harm
The law often is fixated on individual harm, but many privacy and data security issues involve not just harm to individuals, but a larger social harm.
What if a company secretly sends your data over to the NSA, and you never find out about it? Nothing bad ever happens to you. The data just goes into some supercomputer at the NSA, where it is stored secretly forever. Are you harmed? Or is it akin to the proverbial tree that falls in the forest that nobody hears?
The fact that the NSA can gather data in secret, virtually unchecked, and can do so without accountability to the public is a threat to democracy. It is certainly a problem. It is harmful to society and to democracy, but it might be hard to prove that any one individual was harmed.
Is Harm the Right Issue?
So what should be done? In this series of posts, I have shown how the law often fails to recognize privacy/security harms and why it is so difficult for the law to do so. In this post, I have shown that there really are problems caused by privacy and security violations, ones that are harmful, but just in ways that are very difficult to establish in the law’s current framework.
One way to deal with the problem is to push the law to better recognize privacy and data security harms. I think that this could help, though it will be quite challenging. Even if successful, I am unsure whether a recognition of harm would best solve the problems. Class action lawyers would surely benefit, but would it achieve the goals we want to achieve? For me, those goals broadly are (1) a robust use of data; (2) robust protections on that data; (3) widespread compliance with these protections and strong deterrence for violations; and (4) redress when individuals are harmed in a significant manner.
Maybe the best method is to shift the focus away from harms. But if we do that, what should the focus be? How should privacy and security violations be dealt with? I will explore this issue in the next installment.
IV. HOW SHOULD THE LAW HANDLE PRIVACY AND DATA SECURITY HARMS?
In three earlier posts, I’ve been exploring the nature of privacy and data security harms.
In the first post, Privacy and Data Security Violations: What’s The Harm?, I explored how the law often fails to recognize harm for privacy violations and data breaches.
In the second post, Why the Law Often Doesn’t Recognize Privacy and Data Security Harms, I examined why the law has struggled in recognizing harm for privacy violations and data breaches.
In particular, I pointed out the “collective harm problem” — that data harms are often caused by the combination of many actions by different actors over a long period of time, which makes it hard to pin the harm to a single wrongdoer.
I also discussed the “multiplier problem” – that companies have data on so many people these days that an incident can affect millions of people yet cause each one only a small amount of harm. Adding it all up, however, could lead to catastrophic damages for a company.
In the third post, Do Privacy Violations and Data Breaches Cause Harm?, I examined why the future risk of harm, often ignored by courts, really is harmful. I also pointed out that privacy violations and data breaches often cause harm not just to individuals, but also to society.
In this post, I will discuss how the law should handle privacy and security harms.
Statutory Damages
One potential solution is for the law to have statutory damages – a set minimum amount of damages for privacy/security violations. A few privacy statutes have them, such as the Electronic Communications Privacy Act (ECPA).
The nice thing about statutory damage provisions is that they obviate the need to prove harm. Victims who can prove harm above the fixed amount may recover more, but those who can’t can still recover the fixed amount.
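The logic of such a provision is simple; here is a minimal sketch, with a hypothetical $10,000 floor standing in for whatever amount a given statute specifies:

```python
# Sketch of a statutory-damages floor. The $10,000 figure is hypothetical;
# each statute sets its own amounts and formulas.
def recoverable(proven_harm: float, statutory_floor: float = 10_000) -> float:
    """Victims recover their proven harm, but never less than the floor."""
    return max(proven_harm, statutory_floor)

print(recoverable(0))        # 10000 -- no provable harm, floor still applies
print(recoverable(25_000))   # 25000 -- proven harm exceeds the floor
```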
Are statutory damages the answer? Yes and no. In cases where we want the law to recognize harm and where it can be very difficult to prove harm, then statutory damages do the trick. But there are many circumstances, as I discuss below, when I’m not sure we would be better off if the law compensated for harm.
Should the Law Start Compensating for Data Harms?
One answer is to push the law to start compensating for data harms. On the pro side, I believe that there really are harms caused by privacy violations and data breaches. But would things be better if the law always compensated for harm? Not necessarily. There are at least two reasons why not.
Our Clunky and Costly Legal System
In many cases, the harm to each individual might be small. It would not be worth that person’s time to sue. Nor would it be worth the time and expense to have the legal system involved in millions of cases involving small harms.
There is a way our legal system gets around these difficulties – class actions. But class actions also have their pathologies. The members of the class in data harm cases hardly get anything; the lawyers make out like bandits.
Class actions do serve an important function, though. They serve as a kind of private enforcement mechanism. Damages in class actions can act as the functional equivalent of a fine that deters violations. But many cases settle simply because the cost of litigating them is too high. In an ideal system, cases would settle based on their merits, not on the torturous expenses of the legal system.
The Multiplier Problem and the Collective Action Problem
The multiplier problem would not be addressed by the law recognizing harm.
When an organization causes a small amount of harm to many people, do we want to devastate that company with damages?
Causing a few people a lot of harm is generally worse than causing a lot of people a little harm. Generally, society will frown more on stabbing one person to death with a sword than poking 100 people in the arm with an acupuncture pin.
SCENARIO 1: Suppose X Corp says to you: “We have this really cool service, but there is a risk that we will cause $1 of harm to you. Do you want the service?” You say: “Sure, I’ll accept that risk because the service seems cool and the risk of harm is low.” One billion other people have the same answer. If X Corp has an incident and causes $1 of harm to one billion people, we might not want X Corp to be bankrupted by $1 billion in damages.
SCENARIO 2: Now suppose X Corp. came to you and said: “There is a risk that we will cause you $10,000 worth of harm.” You say: “Hey, wait a moment, that’s quite a lot.” Even if only you might be harmed for $10,000, we generally might have a problem throwing you under the bus for the collective good – even if X Corp’s service benefits everyone else.
But now imagine 10,000 X Corps each come to you with the deal in Scenario 1 – all together. That’s a potential $10,000 in harm and it makes the whole deal seem much less attractive. More like Scenario 2.
That’s the difficulty. So I don’t think the solution is as simple as the law just recognizing harm.
Moving Beyond Harm
Although privacy/security violations cause harm, the legal system should move beyond its fixation with harm. There are many circumstances where it is preferable to society for people or entities to comply with the law even if there is no harm. Harm is still relevant because the laws are passed to address problems that can cause harm, but the laws are designed to deter the conduct regardless of whether it does or doesn’t cause harm in any particular case.
For example, suppose you drive through a red light in the middle of the night with nobody else around. You get caught on a traffic camera and fined. There is no harm to others. Should the law be changed to fine you only if you caused harm? Imagine if that were the law. You would then run red lights whenever in your discretion you felt there was not a risk of your causing harm. You might trust your own judgment here, but do you really trust everyone else’s?
The reason for enforcing the law here is to deter, and for this purpose, harm really isn’t important in any one individual case. There is general harm from running red lights in a lot of cases, and that’s why the law forbids it. The law focuses on harm by looking at the big picture, at the collective cases, not each particular case.
Governmental Agency Enforcement
Maybe governmental agency enforcement is the answer. For example, the FTC has been bringing actions against companies that have privacy incidents and data security violations under its authority to regulate “unfair or deceptive acts or practices.” The FTC has brought cases for more than 15 years, and it has a broader view of harm. It is not tethered simply to monetary or physical harm. (For more background about FTC enforcement, see Daniel J. Solove & Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114 Columbia Law Review 583 (2014)).
Agencies can get around the multiplier problem because they are not tethered to the traditional harm model that forces a particular amount of damages for each person affected. Agencies can impose an appropriate fine by taking into account all the circumstances (though the FTC, unfortunately, is limited in its ability to fine).
The FTC can address data security issues earlier on, even before they cause harm. In a few cases, the FTC brought actions against companies for inadequate security even though the companies had not yet had a data breach.
However, we shouldn’t rely solely on agencies, as there are problems of agency capture plus the various efforts by presidential administrations to undermine agencies they don’t like. When agencies don’t stand up for people, people need a way to stand up for themselves, and one of the great virtues of our legal system is that it often provides individuals with a means to seek redress on their own. I think we need a mechanism that allows for individuals not to be solely at the mercy of agencies to protect them.
Back to Basics: Focusing On Goals
The best way to approach the issue is to go back to basics. Let’s focus on our goals. I think I can set forth goals that will command broad consensus:
(1) We want a system that permits a robust use of personal data when it provides social benefits.
(2) We want robust protections for personal data.
(3) We want widespread compliance with these protections and strong deterrence of violations.
(4) We want compensation for individuals who are harmed in a significant manner.
I will focus on #3 and #4 below.
The Need for Compliance and Deterrence
The law needs to create an incentive to comply. Too often, we rely merely on good will and kindness to motivate compliance, but experience shows that this doesn’t work. Only by creating the right incentives will the law make companies behave appropriately.
The law should focus primarily on deterrence. The ideal penalty, in my view, is one that will make the company worse off for the violation. Too often, agency penalties – including those imposed by the FTC and HHS – amount to only a small fraction of what was gained by the violation.
Moreover, there must be a reasonable likelihood of getting caught. The FTC and HHS don’t bring a lot of actions, so many entities – especially smaller ones – will very rarely be targeted. Occasionally one is, but an entity probably has better odds of being hit by lightning.
Ultimately, penalties should be designed to create adequate incentives to deter violations.
We also need a mechanism for individuals to be protected when agencies fall into periods of derelict enforcement or are weakened by a presidential administration that is antagonistic to the agency’s mission.
One possible solution is for people to be able to sue only if a court determines that no regulatory agency has taken adequate actions. The court would first review how the agency handled the matter. If the agency didn’t handle it adequately, then a case could proceed in court.
Compensation
We still would need a compensation system for individuals who are harmed in a significant way. Perhaps this could be established through a fund that comes out of the monetary penalties agencies exact from non-complying entities.
Or maybe we should require companies that collect data to pay into a general fund, administered by the government, which would compensate people (something like workers’ compensation). The payment would be like an insurance premium, which could be higher or lower based on whether a company followed industry standards, how much data it held, how sensitive that data was, and whether the company had a breach in the past.
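To make the idea concrete, here is one way such a premium might be computed; every rate and multiplier below is invented purely for illustration:

```python
# Hypothetical premium formula for the proposed data-protection fund.
# Every rate and multiplier below is invented purely for illustration.
def annual_premium(records_held: int, sensitive_data: bool,
                   meets_industry_standards: bool, prior_breach: bool) -> float:
    premium = 1_000 + 0.01 * records_held    # base fee plus a per-record rate
    if sensitive_data:
        premium *= 1.5                        # surcharge for sensitive data
    if not meets_industry_standards:
        premium *= 2.0                        # penalty for weak security practices
    if prior_breach:
        premium *= 1.25                       # experience rating, as in insurance
    return premium

# A company holding 100,000 sensitive records, following industry standards,
# with no prior breach:
print(annual_premium(100_000, True, True, False))  # 3000.0
```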
Conclusion
The above proposals are just half-baked ideas at this point. The important thing, though, is that we clearly identify our goals and recognize what we want the legal system to do. We must not lose sight of these goals in debates about harm. The goals are what will guide us and help us avoid all the confusion and problems caused by the struggle over conceptualizing data harm.
I hope that this series of posts is a helpful first step in the process of bringing more light than heat into the debate about privacy and data security harms.
* * * *
Here are the links to the original LinkedIn posts in this series:
Post 1: Privacy and Data Security Violations: What’s The Harm?
Post 2: Why the Law Often Doesn’t Recognize Privacy and Data Security Harms
Post 3: Do Privacy Violations and Data Breaches Cause Harm?
Post 4: How Should the Law Handle Privacy and Data Security Harms?