[disinfo ed.'s note: the following is a chapter excerpt from Technocreep: The Surrender of Privacy and the Capitalization of Intimacy by Thomas P. Keenan]
Things were both brutal and creepy in the Paleolithic era as our ancestors struggled to survive. Homo erectus, Homo habilis, and Homo neanderthalensis all had the technologies appropriate to their time: stone tools, clothing, and most especially fire. Recent plant ash and charred-bone evidence from the Wonderwerk Cave in South Africa shows that, even a million years ago, early hominids harnessed the power of fire on a routine basis.
We can only imagine how bizarre the astounding transformation of matter by fire would have appeared to these people. They would have been as unsettled by this mystery as we are when we walk by a billboard and it displays something we just mentioned in a tweet. They figured it out, and so will we, but not without some burned fingers.
In their article on the Wonderwerk Cave discovery, anthropologist Michael Chazan and colleagues call the ability to control fire “a crucial turning point in human evolution.” In a very real way, we have reached a similar juncture. Information, and the technologies that handle it, are transforming our lives in ways as fundamental as the changes brought by fire.
Since we’ve had information processing for over sixty years, one might think we’ve moved beyond the “Ugh. Look. Fire!” stage. Actually, and I can say this with confidence because I’ve been involved with computers since 1965, the first four or five decades of information technology, for all but the most advanced thinkers among us, were spent just rubbing the sticks together:
First we automated things that we understood, like payroll processing, airline reservation systems, and searching for stuff in the library. A few bright lights like Joseph Weizenbaum and Ray Kurzweil pushed us to think about using technology to do things differently, instead of just doing the same things billions of times faster and more efficiently.
The way we applied technology in the past made eminently good sense because that’s exactly what the times called for. Just as Henry Ford’s assembly line made car making more efficient, the IBM 360/50 computer ensured that I got my paycheck on time and that the calculations were done right, as long as humans entered the data correctly.
Now, however, as biomedical and information technologies merge in seamless ways, we don’t really know where we are going. Information will still be the spark, but our bodies and our entire lives are becoming the fuel.
It is clear that we should be thinking about the moral, ethical, and even spiritual dimensions of technology before it is too late. We know we will not get it 100% right, because some entrepreneur or hacker will always come up with something clever that we never anticipated. That’s why having a framework based on past experience can help. It’s time to consider why some technologies strike us as being technocreepy. Helen Nissenbaum has written extensively about “Privacy as Contextual Integrity,” suggesting that a shared understanding of the norms of information use is key to protecting privacy. As she writes, “demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it” will provide a benchmark of privacy protection. Nissenbaum’s ideas are explored in an article by Alexis Madrigal, which includes an excellent example of contextual privacy.
Madrigal notes that some people are offended by Google’s Street View car even though they are standing in a public street and can be seen by their neighbors. “If I’m out in the street,” he writes, “I can see who can see me, and know what’s happening. If Google’s car buzzes by, I haven’t agreed to that encounter. Ergo, privacy violation.” The key criticism of Nissenbaum’s framework, Madrigal writes, is that “it rests on the ‘norms’ that people expect.” To explore what contextualized privacy really means to us, here is a model that illustrates some aspects that have emerged as common threads in the examples we have considered:
In 2012, the New York Times described a controversial smartphone app called “Girls Around Me” (GAM) as “Taking Creepy to a New Level.” While it’s not quite true that GAM demonstrates every aspect of technocreepiness, it does come pretty close.
GAM allowed a smartphone user to snoop on strangers in the vicinity who had checked into the location-based Foursquare service. On the surface, that seems totally reasonable. If someone discloses their location on Foursquare, presumably they would like to be found, at least by some people. However, GAM also silently links back to the Facebook profiles of the subjects, which often contain a great deal of personal information. So, the scenario goes, Bob, possibly encouraged by the real-time gender ratio in a bar presented on another app, such as SceneTap, sits down on a stool and orders a beer. He stealthily checks out all the women in the bar (the app was gender-specific, though others like Gays Around Me soon followed) and chooses Alice as an attractive possible companion.
Her Facebook profile, which she has not kept sufficiently private, discloses that she is not in a relationship, that she likes Italian cooking, and that her favorite band is The Barenaked Ladies. Her photos reveal even more details about her likes and dislikes. Armed with this conversation fodder, and with Alice totally unaware, Bob goes over for a chat …
Let’s run GAM against the Dimensions of Creepiness to see how it stacks up.
1. Known vs. Mysterious. Since a person may not even know of the existence of GAM, let alone that people around them are using it while pecking at their smartphones, the odds are good that this falls into the mysterious category. Also, in 2012 at least, few people had done much thinking about how different technology platforms, in this case Facebook, Foursquare, and even Google Maps, could be melded together.
2. Random vs. Certain. It is merely the luck of the draw that Bob and Alice are in the same place tonight. Whether this technology will even affect them depends on many factors, engendering an aspect of randomness here. Humans are, by and large, creatures of habit who find random incursions into their lives somewhat uncanny.
3. High vs. Low Control. This is an interesting one. If Alice is a high-tech wizard or a privacy expert, she may well be able to manage her online presence well enough to stay in control of the situation. However, for most people in most bars on most nights, the likelihood is that there is a definite imbalance of power in favor of Bob, especially since he is initiating the use of the technology.
4. High vs. Low Impact. Another situational call. Even if Bob is so aroused that he approaches Alice and says something suggestive, she might just walk away or slap him. At the other extreme, swayed by his apparent clairvoyance, she might be persuaded to go home with him. That could end well or very badly. So we come back to the important factor of intention: is someone using GAM purely as a curiosity, as a tool to build up courage, or as a means to stalk a person?
5. Human vs. Mechanical. In his landmark essay “On the Psychology of the Uncanny,” German psychiatrist Ernst Jentsch explained that “In storytelling, one of the most reliable artistic devices for producing uncanny effects easily is to leave the reader in uncertainty as to whether he has a human person or rather an automaton before him in the case of a particular character.”
While there was certainly human activity and input at the time of entering details on Facebook and checking in on Foursquare, that may be less true in the future. Automated data gathering along the lines of Zoominfo, combined with DeepFace-type facial recognition, may put the data in there on your behalf. Speaking at the Gigaom Structure Data 2014 conference, Foursquare CEO Dennis Crowley indicated that automating check-ins is part of his company’s future plans. There are some Mechanical aspects at work here too. Apps can sometimes post information automatically to your social media sites. And, of course, the very process of linking up Foursquare and Facebook through a common field like email address is a purely mechanical activity, even though the result feels creepily human.
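To make that linkage concrete, here is a minimal sketch in Python showing how two services’ records can be joined on a shared email address. The field names, sample records, and output are invented for illustration; this is not any real app’s code.

```python
# Hypothetical illustration of record linkage on a common field.
# All field names and data below are invented.

foursquare_checkins = [
    {"email": "alice@example.com", "venue": "The Local Pub", "time": "21:14"},
]

facebook_profiles = [
    {"email": "alice@example.com", "name": "Alice", "relationship": "Single",
     "likes": ["Italian cooking", "The Barenaked Ladies"]},
]

# Index one data set by the shared key ...
profiles_by_email = {p["email"]: p for p in facebook_profiles}

# ... then enrich the other with a simple lookup.
for checkin in foursquare_checkins:
    profile = profiles_by_email.get(checkin["email"])
    if profile:
        print(f"{profile['name']} is at {checkin['venue']}: "
              f"{profile['relationship']}, likes {', '.join(profile['likes'])}")
```

A real aggregator would run this join across millions of records and many more fields, but the underlying mechanism is exactly this simple.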
6. Good vs. Bad Reputation. Even the name “Girls Around Me” raised the hairs on the necks of many people. After being pilloried in the press, the reputation of this app went downhill fast. Foursquare pulled the key data feed that it was using and GAM disappeared, though not before inspiring various copycats. By contrast, products with benign, even cute names like Facebook and Twitter appear to stand the test of time and work their way into our daily lives.
Some of GAM’s successors may even be creepier. Jetpac City Guides sweeps up photos posted on Instagram and “looks for faces in the photos, determines if they’re happy or sad,” writes one reviewer. It also “makes style judgments (mustache? could be a hipster! lipstick? people get dressed up to visit here!).”
7. Surprise vs. Predictable. In the early days of any technology, its capabilities are often unknown and even startling to non-users. GAM undoubtedly took many people by surprise. Fortunately, it did not last long enough for people to view it as a routine part of life in the bar scene. Other looming technologies, such as Google Glass and Oculus Rift, may have enough staying power that we will just look at Glassholes and Rifters and take them in our stride.
These Dimensions of Creepiness also provide insights into other technocreepy situations we’ve considered, and how the technocreepiness level can change based on various factors.
1. Known vs. Mysterious. Clearly revelations about government snooping programs fall in the mysterious category. We are told that, for reasons of national security, we are simply not allowed to know how we are being watched.
On the other hand, some technologies are only mysterious until you understand how they work. Disney’s Ishin-Den-Shin communication system falls into this category. If you see people sending each other messages by touching ears with fingers, as happens in the demonstration video, it looks like magic. Once you’re shown how they do it, it is still impressive but no longer mysterious.
2. Random vs. Certain. When they first started randomly arriving, those “Nigerian 419” letters were novelties and many people fell for them. When they started filling up our mailboxes, and we knew we’d get some every day, they went from being creepy to simply being annoying. Eventually we will all understand that scammers use email; that a streetlamp may start talking to us at random; and that the seemingly psychic coupon we just received on our smartphone is the result of walking past a particular trash can. There is some transfer of learning: once you recognize one kind of email fraud you are more likely to spot others. But it will be a never-ending process as the bad guys get sneakier and new technologies emerge.
3. High vs. Low Control. You could, perhaps, choose not to walk down a particular street in the City of London because the rubbish bins there are monitoring smartphone pings. But why should you have to? Should hidden technology force you to alter your real world behavior? Would you even know which rubbish bin is doing this? Based on the examples studied, it seems that technology control is often more illusory than real. You can decline on principle to give your Social Security Number on a credit card application. They will simply get it from a credit bureau or data broker. The work of the Dark Patterns researchers also shows that systems sometimes make it so hard to take control that we give up. They have even coined a phrase for making a website’s privacy settings ultra-complex: “Privacy Zuckering.”
4. High vs. Low Impact. Potentially your health, wealth, and the most intimate details of your life are at risk here. Even little things like falsely telling a survey site that you have hemorrhoids just for the heck of it could, in theory, come back to bite you somehow in the future. We cannot possibly foresee all the impacts of the subterranean linkages between our technological contacts, so the best policy is to treat all personal information as sensitive, and put in a liberal dose of misinformation and even deliberately misleading “facts.”
5. Human vs. Mechanical. If there were (as was once the case in China) a building full of humans sorting through our Google searches to analyze them, that would be more disturbing than knowing it’s done by bots. Then again, humans do have access, albeit in “anonymized” form, to the results of those bot searches, and who knows what they are doing with that data. As we move into a world where human and machine intelligences merge and interact seamlessly, we will all be traveling into the Uncanny Valley. Whistleblower revelations have also shown us that data that is supposed to be analyzed only by machines can come under human scrutiny. In the case of Optic Nerve, the GCHQ operation in the U.K. to intercept millions of Yahoo webcam images, it was reported that “The documents also chronicle GCHQ’s sustained struggle to keep the large store of sexually explicit imagery collected by Optic Nerve away from the eyes of its staff.”
6. Good vs. Bad Reputation. If the “Boyfriend Tracker” folks had planned ahead, they could have named their app “Find My Phone That I Left in a Taxi” and it might not have been banned from the Google Play store. “Girls Around Me” suffered mightily from the sexist connotations of its name, whereas Facebook has a friendly feel to it. Technology creators and marketers need to do some deeper thinking about what they call their creations. Of course they shouldn’t lie, but, as I noted in the MIT lab project, there’s a world of difference between “Kinect of the Future” and “the Anne Frank Finder.”
7. Surprise vs. Predictable. A stranger calling you by name is surprising, but not if you happen to be wearing a nametag. Especially in its early days, people armed with apps like “Girls Around Me” had a secret weapon that gave them an unfair advantage. The same will be true of Google Glass and whatever comes next. The antidote to surprise is usually education, though, to be fair, it’s rather hard to educate yourself on the impact of government programs like PRISM, Optic Nerve, and XKeyscore on your personal life, and even the inner workings of software and algorithms elude most people.
Like a lawyer preparing a witness for cross-examination, it seems appropriate to consider some counterarguments against the major premise of this book—that our lives are infected with an increasing amount of technocreepiness.
Creepy technology can be beneficial.
It would be negligent to fail to acknowledge that even the most troubling technologies can sometimes be beneficial to us. People have been located in remote areas because technology was tracking them, even if they didn’t know it. Car accident victims have survived because an OnStar operator somewhere dispatched emergency aid even if they didn’t ask for it. The Ontario woman who was reunited with her lost fifty-million-dollar lottery ticket probably has no problems at all with video surveillance cameras in her favorite shopping haunts. Indeed, General Keith Alexander in his Black Hat 2013 speech tried to argue that the daily lives and activities of Americans would be much less free and more restricted if the NSA were not doing what it does, as the government would have to use other measures to counteract the terrorist threat. The fact that we derive ample benefits from so many technologies should not obscure the real dangers behind them. With the rare exception of institutions that collect no data about us, we are almost always giving up some part of ourselves when we interact with technology. If we have learned anything from the advance of computer science in the past five decades, it is that smart people will find ways to use information, often in ways that were never anticipated. Gurus from Raymond Kurzweil to the proponents of Watson at IBM assure us that the capability of machine intelligence is about to accelerate greatly.
Our lives are too fragmented for any system to put it all together and form a detailed and useful dossier on us.
Some people note that their digital photos are in their camera, or burned on a CD in a desk drawer; their professional lives are conducted under one email address, and personal business under another; and if they partake in dating sites like Match.com or use Christian Mingle, they create yet another identity, probably a pseudonym. Their Amazon purchases are separated from their banking details through PayPal, and they never enter their credit card numbers online. They infer that this gives them some degree of immunity from the creepiest aspects of technology. That may have been an accurate picture a decade ago. Today, our photos are more likely in cloud storage or on Facebook, and our friends are busy posting photos of us. As shown by Acquisti’s work, as well as studies on medical privacy by Khaled El Emam at the University of Ottawa, supposedly anonymous photos and records can sometimes be manipulated to divulge information. If we learned anything from the Manning and Snowden disclosures, it is that information on us is continually being pulled together and shared without our knowledge or consent. Some people believe that even if it is out there, extracting information about them would be like finding a specific needle in a haystack-sized pile of needles. However, this line of reasoning ignores the tremendous increase in computing power, the decrease in storage costs, and the rapid improvement in data mining and analysis algorithms that have taken place over the past two decades. Another important factor is the development of keys to cross-reference us. The gold standard used to be a government-issued identity number such as a Social Security Number. Today, stores like Target are creating their own numbers for us; our email address allows us to be tied together on various sites; our browsers can be fingerprinted to uniquely identify our computers. Very significantly, the face is becoming a universal identifier, and one that we cannot do a lot to change.
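As a rough illustration of the fingerprinting idea, here is a minimal sketch in Python that hashes a handful of browser attributes into a stable identifier. The attribute names and values are invented; real fingerprinting scripts examine many more signals, such as installed fonts and canvas rendering quirks.

```python
import hashlib

# Hypothetical browser attributes of the kind a fingerprinting script collects.
browser_attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:26.0) Gecko/20100101",
    "screen": "1920x1080x24",
    "timezone": "UTC-07:00",
    "language": "en-CA",
    "plugins": "Flash 12.0; QuickTime 7.7",
}

# Concatenate the attributes in a fixed order and hash the result.
# No single attribute identifies you, but the combination often does.
canonical = "|".join(f"{k}={v}" for k, v in sorted(browser_attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()

print(fingerprint[:16])  # a stable ID that follows this browser across sites
```

Switch machines or browsers and the hash changes; keep using the same setup and it quietly follows you from site to site, no Social Security Number required.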
We are just not that interesting, or wealthy, or scary for anyone to care about us.
Organizations are definitely willing to make the effort to track us, and the more they do this, the easier it becomes for them. In fact, we often enter the data for them ourselves. We have also learned that everyone’s Internet traffic is of interest to the security establishment and, for that matter, to some corporations. Every time you buy a book, “Like” a Facebook post, “Friend” someone, send an email, or even log on to the Internet from a different place, you are leaving a digital trail that’s being scrutinized to learn more about you. And while human relationships may come and go, your online presence is forever and can be monetized in ways you may not have considered.
I’m not doing anything wrong, so I don’t have anything to worry about.
Just because you are innocent does not mean you are going to appear that way to authorities. A police officer once told me about a man who routinely parked his car in stall #11 of a parking lot. Unbeknownst to him, the guy who parked in stall #12 was a major organized crime figure. They often exchanged “good mornings” and that was enough to get the occupant of #11 placed in a police computer as a “known associate” of the Mafioso under investigation. Information about you can be incorrect, incomplete, or misleading. In 2006, a Canadian woman was denied entry to the U.S. because she had once attempted suicide, raising a furor that confidential Canadian medical records were being shared with U.S. agencies. Investigation showed that the linkage was probably made through law enforcement records. Another woman had difficulty obtaining credit because her file stated she had appeared in small claims court. She argued that she was the plaintiff, and had won her case. However, the company holding the record refused to add that notation, so she continued to suffer. These stories serve as a chilling reminder that any skeletons in your closet, no matter how ancient, may be dragged up at any time by databases that never forget.
Computers don’t really understand me so I will always be able to stay a step ahead of them.
It doesn’t matter if machines truly “understand” us; sometimes humans don’t understand us either. Google Search and Gmail probably don’t comprehend your whole personality in the same way as your spouse, but they know you in a different way. Your search engine may have a much better idea of precisely what you are interested in at this very moment. Since commerce ultimately happens in the present, that information is probably more useful from a business viewpoint than whether you’re a Buddhist, a Baptist, a Jew, or a Jain.
Though that would not be hard to figure out either. Your metadata might show you calling or visiting a certain church, or your checking account might show a lot of $18, $36, and $180 donations, the favorite Chai numbers for Jewish donors (chai, the Hebrew word for “life,” has a numerical value of eighteen, so gifts in multiples of eighteen are traditional). The other factor is that tremendous progress is being made in machine learning and artificial intelligence. As this privacy singularity meets the Vinge/Kurzweil artificial intelligence singularity … the outcome is, almost by definition, unknowable.
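As a toy illustration of this kind of metadata inference, here is a minimal Python sketch that flags chai-multiple donations in a list of transactions. The amounts and the majority threshold are invented; a real profiling system would weigh many signals, not just one.

```python
# Hypothetical donation amounts pulled from a checking account statement.
donations = [18.00, 50.00, 36.00, 180.00, 25.00]

# Chai (Hebrew for "life") has a numerical value of 18, so charitable
# gifts in multiples of 18 are traditional among Jewish donors.
chai_gifts = [amount for amount in donations if amount % 18 == 0]

# If most of the gifts fit the pattern, an analyst might infer religion.
if len(chai_gifts) / len(donations) > 0.5:
    print("Giving pattern suggests chai-multiple donations:", chai_gifts)
```

The point is not that any one transaction is revealing, but that a cheap arithmetic test over metadata can surface something as personal as religious affiliation.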
In 1984, when Dr. Duncan Chappell and I were writing our CBC Radio IDEAS Series Crimes of the Future, we picked up on some emerging crimes like identity theft and traffic in human organs that have since become household words. On the other hand, we predicted that by the early 21st century, people would be routinely using electricity to stimulate the pleasure centers of the brain for recreation. Many new drugs have emerged since 1984, but “wireheading” is still a fringe technology. And, of course, in writing the programs, we completely missed the importance of a little something called “The Internet,” which was then in its infancy.
In the inevitable gap between my writing these words and you reading them, new and even more disturbing facts about technology will have emerged. We will know more about what the NSA can do. We will learn more about how companies are burrowing into our psyches in the interest of competitive advantage. More smartphone stalking applications will appear. The whole area of biological manipulation will probably grow at an exponential pace.
Some of tomorrow’s headlines will be more extreme examples of creepy technologies described in this book. Others may take us in entirely new directions. Only in retrospect can we know which ones are so creepy that people simply refuse to use them. And, of course, some will be well beyond our control as individuals, and require a higher level of intervention. Many of the creepiest aspects of new technologies will be hidden from our view, and we will only catch the occasional glimpse of them.
Perhaps the creepiest aspect of our relationship with technology is the misguided belief that we can have the benefits of new technologies without the risks. Just as there is no pleasure without pain, and no peace without war, we will always need to question the cost, the risk, and the motivation of those who may benefit from changing our lives through technology.
It will not be easy. One thing is certain: we will need to continuously make decisions on both the individual and societal levels. Long before there were viral cat videos, there was a newspaper ad offering “Free to Good Home” your choice of a playful kitten or a “handsome husband, good job, but says he doesn’t like cats and the cat goes or he goes.” The ad then suggested “come and see both and decide which you’d like.” It is almost certainly a good-natured joke between spouses who really care for each other. Yet taken literally, it nicely sums up the intensity of the hard choices we will need to make:
• Every time we post a photo on Facebook, retweet on Twitter, buy a plane ticket online, or even search for one, we are making personal privacy choices that have consequences.
• When we vote for political candidates with particular views on the issues raised in Technocreep, we are making societal decisions that have consequences. We can also vote with our wallets, choosing the most privacy-friendly technologies.
• As we talk to our friends, our co-workers, and our children about technocreepiness, we are taking a stand about the kind of future we want to see.
If you are left feeling that the world is spinning out of control, you’ve been paying attention. There is, however, some very good news: there are concrete ways to minimize the impact of invasive technologies on your life and the lives of those you love. But you have to start now. Let’s do exactly that.
Excerpted from Technocreep: The Surrender of Privacy and the Capitalization of Intimacy with kind permission of the publishers, Greystone Books.
About the Author: Thomas P. Keenan learned to program in FORTRAN and assembly language in the 1960s at a secure computer facility in New York City, presumably to help America fight the post-Sputnik Russian menace. Instead, he was often seen around the early 2600 meetings and also playing with pay telephones. He went on to a respectable career as a computer science professor and technology journalist in Canada. He has interviewed interesting people from John Draper (Cap’n Crunch) to Arthur C. Clarke to FBI, NSA, and State Department officials. In addition, he co-wrote the award-winning CBC Radio series Crimes of the Future. He also helped design Canada’s first computer crime laws and is a fellow of several prestigious societies. He is currently a professor of environmental design at the University of Calgary.