Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by category Privacy and Security.
  •  Content versus Conversation

    Tuesday, Dec. 16, 2014
     
    Last month, at the pii2014 conference held in Silicon Valley (where “pii” stands for “privacy, identity, innovation”), one interesting session was a conversation between journalist Kara Swisher and the co-founders of Secret—one of a number of apps that allow users to communicate anonymously.  Such apps have been criticized by some as enabling cruel comments and cyberbullying; other commentators, however, like Rachel Metz in the MIT Tech Review, have argued that “[s]peaking up in these digital spaces can bring out the trolls, but it’s often followed by compassion from others, and a sense of freedom and relief.”
     
    During the conversation with David Byttow and Chrys Bader-Wechseler, Swisher noted that Secret says it is not a media company—but, she argued, it does generate content through its users. Secret’s co-founders pushed back. They claimed that what happens on their platform is conversation, not “content.” Secret messages are ephemeral, they noted; they disappear soon after being posted (how soon is not clear). We’ve always had great, passionate conversations with people, they said, without having those conversations recorded forever; Secret, they argued, is just a new way to do that.
     
    Those comments left me thinking about the term “social media” itself. What does “media” mean in this context? I’m pretty sure that most Facebook or Twitter users don’t see themselves as content-creators for media companies. They see themselves, I would guess, as individuals engaged in conversations with other individuals. But those conversations do get treated like media content in many ways. We keep hearing about social media platforms collecting the “data” or “content” created by their users, analyzing that content, tweaking it to “maximize engagement,” using it as fodder for behavioral research, etc.
     
    There are other alternatives for online conversations, of course. No one claims that texting and emailing constitute “content creation” for media companies. But texts and email conversations are targeted, directed. They have an address line, which has to be filled in.
     
    Tools like Secret, however, enable a different kind of interaction. If I understand it correctly, this is more like shouting out a window and—more often than not—getting some response (from people you know, or people in your area).  It’s hoping to be heard, and maybe acknowledged, but not seen, not known.
     
    A reporter for Re/Code, Nellie Bowles, once wrote about a “real-life” party organized through Secret. Some of the conversations that took place at that party were pretty odd; some were interesting; but none of them became “content” until Bowles wrote about them.
     
    Calling social media posts “content” turns them into a commodity, and makes them sound less personal. Calling them parts of a conversation is closer, I think, to what most people perceive them to be, and reminds us of social norms that we have around other people’s conversations—even if they’re out loud, and in public.
     
    It’s a distinction worth keeping in mind. 
     
    Photo by Storebukkebruse, used without modification under a Creative Commons license.
  •  Movie Review: Citizenfour

    Tuesday, Nov. 25, 2014
     
    Sona Makker is a second-year law student at Santa Clara University’s School of Law, in the process of earning a Privacy Law certificate. This piece first appeared in The Advocate, the law school’s student-run newspaper.
     
    When The Guardian first leaked the story about the National Security Agency’s surveillance programs, I was sitting in a conference room at one of the largest privacy conferences in the world. I couldn’t help but laugh at the irony. I was surrounded by some of the world’s leading experts in this field, who have written texts and treatises on the current state of privacy law in this country. Surveillance wasn’t on the agenda for this conference, but of course, since that day, government surveillance has remained at the top of the public’s agenda.
     
    To some, the man behind the NSA revelations, Edward Snowden, is a hero; to others he is a traitor. Whatever you may believe, I recommend seeing Laura Poitras’ latest documentary, Citizenfour, which follows Snowden during the moments leading up to the Guardian story that exposed the U.S. government’s secret collection of Verizon cellphone data.
     
    The majority of the film takes place in a hotel room in Hong Kong. Snowden contacted Poitras through encrypted channels. Only after a series of anonymous e-mail exchanges did the two finally trust that the other was really who they said they were: “assume your adversary is capable of 3 billion guesses per second,” he wrote her. Poitras and Snowden were eventually joined by Guardian reporter Glenn Greenwald, whom Snowden contacted under the pseudonym “Citizenfour.”
     
    Snowden guides the journalists through the piles and piles of NSA documents as they strategize how to publish them and inform the American public about the government snooping programs: Verizon, AT&T, and other telecom companies sharing phone records with the NSA; FBI access to data from private web companies like Yahoo and Google; and the PRISM program that authorized the collection of e-mail, text messages, and voicemail from both foreigners and U.S. citizens. Snowden appears calm and quiet as he unveils all of this.
     
    Snowden worried that “personality journalism” would end up making the story about him, rather than the substance of his revelations. When Greenwald’s stories were published in the Guardian, the three sat together and watched as the media reacted and the story unfolded on TV. “We are building the biggest weapon for oppression in the history of mankind,” said Snowden.
     
    The film also contextualizes the leaks, providing background on the extent of government surveillance. Poitras interviews William Binney, a former NSA employee who also blew the whistle: “a week after 9/11, they began actively spying on everyone in this country,” he says. She also includes C-SPAN footage of former NSA chief Keith Alexander flatly denying any such snooping programs before Congress.
     
    There is a perfect scene (almost too perfect) in which Poitras films Snowden’s reaction to a fire alarm that goes off during one of their meetings in the hotel. It was a routine test, but Snowden questions whether someone staged it. The timing “seems fishy,” he says. Is the room bugged? As a viewer, you start to question whether it really was a test, and then you ask yourself, “Is that even possible?” It seems outlandish, straight out of an episode of 24. With that scene, Poitras effectively prompts the viewer to see that the whole thing, the snooping and the surveillance, seems outlandish, and yet the evidence proves it is real.
     
    I am optimistic that the law can serve as a powerful counterweight to mass surveillance, but this cannot happen without continued public pressure. The Internet is changing how we live and how we interact with our social institutions. Institutions—how we structure our everyday lives and how we produce social order—are not written in stone, but are mutable and capable of evolving alongside our own evolution as social beings. That evolution depends upon the will and foresight of those who are willing to speak up. Citizenfour puts a human face on Snowden, and Poitras does so without painting him as a hero or a villain, but simply as a twenty-something concerned citizen to whom many can relate. “This is the first time people can see who Snowden really is,” said Glenn Greenwald after the film’s premiere. “You can decide what you think about him.”
     
    Photo by Mike Mozart, used without modification under a Creative Commons license.
  •  Cookies and Privacy: A Delicious Counter-Experiment

    Monday, Nov. 17, 2014

     

    Last month, a number of stories in publications such as ProPublica, Mashable, Slate, and The Smithsonian Magazine covered an “experiment” by artist Risa Puno, who asked attendees at an art festival to disclose bits of personal information about themselves in exchange for cookies.  ProPublica described the event as a “highly unscientific but delicious experiment” in which “380 New Yorkers gave up sensitive personal information—from fingerprints to partial Social Security numbers—for a cookie.” Of course, we are given no count of the number of people who refused the offer, and the article notes that “[j]ust under half—or 162 people—gave what they said were the last four digits of their Social Security numbers”—with that rather important “what they said” caveat casually buried mid-sentence.

    “To get a cookie,” according to the ProPublica story, “people had to turn over personal data that could include their address, driver's license number, phone number and mother's maiden name”—the accuracy of most of which, of course, Puno could also not confirm.
     
    All of this is shocking only if one assumes that people are not capable of lying (especially to artists offering cookies). But the artist declared herself shocked, and ProPublica somberly concluded that “Puno's performance art experiment highlights what privacy experts already know: Many Americans are not sure how much their personal data is worth, and that consumer judgments about what price to put on privacy can be swayed by all kinds of factors.”
     
    In this case, I am at least thankful for the claim that the non-experiment “highlights,” rather than “proves,” something. Other stories, however, argued that the people convinced to give up information “demonstrated just how much their personal information was worth.” The Smithsonian argued that the “artistic experiment is confirmation of the idea that people really just have no sense of what information and privacy is worth other than, variably, a whole lot, or, apparently, a cookie.” The headline in The Consumerist blared, “Forget Computer Cookies: People Happily Give Up Personal Data For The Baked Kind” (though, in all fairness, The Consumerist article did highlight the “what they said” bit, and noted that the “finely-honed Brooklynite sense of modern irony may have played a role, too. Plenty of purchasers didn’t even eat their cookies…. They ‘bought’ them so they could post photos on Twitter and Instagram saying things like, ‘Traded all my personal data for a social media cookie’…”—which suggests rather more awareness than Puno gives people credit for).
     
    In any case, prompted by those stories, I decided that a flip-side “artistic experiment” was in order. Last week, together with my partner in privacy-protective performance art—Robert Henry, who is Santa Clara University’s Chief Information Security Officer—I set up a table in between the campus bookstore and the dining area.  Bob had recently sent out a campus-wide email reminding people to change their passwords, and we decided that we would offer folks cookies in return for password changes. We printed out a sign that read “Treats for Password Changes,” and we set out two types of treats: cookies and free USB drives. The USB drives all came pre-loaded with a file explaining the security dangers associated with picking up free USB drives. The cookies came pre-loaded with chocolate chips.
     
    We are now happy to report our results. First, a lot of people don’t trust any offers of free cookies. We got a lot of very suspicious looks. Second, within the space of about an hour and a half, about 110 people were easily convinced to change one of their passwords—something that is a good privacy/security practice in itself—in exchange for a cookie. Does this mean people do care about privacy? (To anticipate your question: some people pulled out their phones or computers and appeared to be changing a password right there; others promised to change a password when they got to their computer; we have no way of knowing if they did—just like Puno had no way of knowing whether much of the “information” she got was true. Collected fingerprints aside…) Third, cookies were much, much more popular than the free USB drives. Of course, the cookies were cheaper than the USB drives. Does this mean that people are aware of the security dangers posed by USB drives and are willing to “pay” for privacy?
     
    Responses from the students, parents, and others who stopped to talk with us and enjoy the soft warm chocolate-chip cookies ranged from “I’m a cryptography student and I change my passwords every three months” to “I only have one password—should I change that?” to “I didn’t know you were supposed to change passwords” to “But I just changed my password in response to your email” (which made Bob really happy). It was, if nothing else, an educational experience—in some cases for us, in others for them.
     
    So what does our “artistic experiment” prove? Absolutely nothing, of course—just like Puno’s “experiment,” which prompted so much coverage. (Or maybe they both prove that people like free cookies.)
     
    The danger with projects like hers, though, is that their “conclusions” are often echoed in discussions about business, regulation, or public policy in general: If people give up personal information for a cookie, the argument goes, why should we protect privacy? That is the argument that needs to be refuted—again and again. Poll after poll finds that people say they do value their privacy, are deeply concerned by its erosion, and want more laws to protect it; but some refuse to believe them and turn, instead, to “evidence” from silly “experiments.” In that case, we need more flip-side “experiments”—complete, of course, with baked goods.
     
  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014


    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy.  (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc.  Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by Santa Clara Magazine. One of the things I had asked was whether the newly revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, “[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?”

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, “[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?”

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church Committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford

  •  Are You A Hysteric, Or A Sociopath? Welcome to the Privacy Debate

    Tuesday, Oct. 7, 2014

     

    Whether you’re reading about the latest data-mining class action lawsuit through your Google Glass or relaxing on your front porch waving at your neighbors, you probably know that there’s a big debate in this country about privacy.  Some say privacy is important. Some say it’s dead.  Some say kids want it, or not. Some say it’s a relatively recent phenomenon whose time, by the way, has passed—a slightly opaque blip in our history as social animals. Others say it’s a human right without which many other rights would be impossible to maintain.

    It’s a much-needed discussion—but one in which the tone is often not conducive to persuasion, and therefore progress.  If you think concerns about information privacy are overrated and might become an obstacle to the development of useful tools and services, you may hear yourself described as a [Silicon Valley] sociopath or a heartless profiteer.  If you believe that privacy is important and deserves protection, you may be called a “privacy hysteric.”
     
    It’s telling that privacy advocates are so often called “hysterics”—a term associated more commonly with women, and with a surfeit of emotion and lack of reason.  (Privacy advocates are also called “fundamentalists” or “paranoid”—again implying belief not based in reason.)  And even when such terms are not directly deployed, the tone often suggests them. In a 2012 Cato Institute policy analysis titled “A Reasonable Response to the Privacy ‘Crisis,’” for example, Larry Downes writes about the “emotional baggage” invoked by the term “privacy,” and advises, “For those who naturally leap first to legislative solutions, it would be better just to fume, debate, attend conferences, blog, and then calm down before it’s too late.”  (Apparently debate, like fuming and attending conferences, is just a harmless way to let off steam—as long as it doesn’t lead to such hysteria as class-action lawsuits or actual attempts at legislation.)
     
    In the year following Edward Snowden’s revelations, the accusations of privacy “hysteria” or “paranoia” seemed to have died down a bit; unfortunately, they might be making a comeback. The summary of a recent GigaOm article, for example, accuses BuzzFeed of “pumping up the hysteria” in its discussion of ad beacons installed—and quickly removed—in New York.
     
    On the other hand, those who oppose privacy-protecting legislation or who argue that other values or rights might trump privacy sometimes find themselves diagnosed, too: if not as sociopaths, then at least as belonging on the “autism spectrum,” disregardful of social norms, unable to empathize with others.
     
    Too often, the terms thrown about by some on both sides in the privacy debate suggest an abdication of the effort to persuade. You can’t reason with hysterics and sociopaths, so there’s no need to try. You just state your truth to those others who think like you do, and who cheer your vehemence.
     
    But even if you’re a privacy advocate, you probably want the benefits derived from collecting and analyzing at least some data sets, under some circumstances; and even if you think concerns about data disclosures are overblown, you still probably don’t disclose everything about yourself to anyone who will listen.
     
    If information is power, privacy is a defensive shell against that power. It is an effort to modulate vulnerability. (The more vulnerable you feel, the more likely you are to understand the value of privacy.) So privacy is an inherent part of all of our lives; the question is how to deploy it best. In light of new technologies that create new privacy challenges, and new methodologies that seek to maximize benefits while minimizing harms (e.g., “differential privacy”), we need to be able to discuss this complicated balancing act—without charged rhetoric making the debate even more difficult.
     
    If you find yourself calling people privacy-related names (or writing headlines or summaries that do that, even when the headlined articles themselves don’t), please rephrase.
     
    Photo by Tom Tolkien, unmodified, used under a Creative Commons license: https://creativecommons.org/licenses/by/2.0/legalcode
     
     
  •  Protecting Privacy and Society

    Monday, Apr. 15, 2013

    Consumer and business data is increasingly moving to the "cloud," and people are clamoring for protection of that data.  However, as Symantec's President, CEO, and Chairman of the Board Steve Bennett points out in this clip, "maximum privacy" is really anonymity, and some people use anonymity as a shield for illegal and unethical behavior.  How should cloud service providers deal with this dilemma?  What is their responsibility to their customers, and to society at large?  How should good corporate citizens respond when they are asked to cooperate with law enforcement? 

    Providers of cloud services are all faced with this dilemma; as Ars Technica recently reported, for example, Verizon took action when it discovered child pornography in one of its users' accounts.