Santa Clara University

Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by category Privacy and Security.
  •  Coverage of the Privacy Crimes Symposium

    Thursday, Oct. 29, 2015

    Note: The author of this blog post, Brent Tuttle, CIPP/US E, is a third-year law student at Santa Clara University’s School of Law; he is pursuing a Privacy Law certificate. This piece first appeared in The Advocate--the law school's student-run newspaper.

    On October 6th, SCU Law’s High Tech Law Institute, the Markkula Center for Applied Ethics, and the Santa Clara District Attorney’s Office hosted the first-ever “Privacy Crimes: Definition and Enforcement” half-day conference. The Electronic Frontier Foundation (EFF), the International Association of Privacy Professionals (IAPP), and the Identity Theft Council (ITC) also sponsored the free event. It brought together practitioners, academics, and students to discuss several important questions that both civil and criminal legal professionals face in the digital age. For example, what is a privacy crime? What is being done to enforce the laws addressing privacy crimes? And how can we balance privacy interests in the criminal justice system?

    After opening remarks from Santa Clara District Attorney Jeffrey Rosen, Daniel Suvor gave the keynote address. Mr. Suvor is the Chief of Policy to the Attorney General of California, Kamala Harris, and former Senior Director of the Office of Cabinet Affairs at the White House. Mr. Suvor discussed his work with the California Attorney General’s Office and elaborated on the AG’s stance regarding the current state of privacy crimes. 

    Mr. Suvor spoke of the California AG’s efforts to combat cyber-crimes.  He noted that California was the first state to have a data breach notification law, implemented in 2003. Mr. Suvor also discussed a recent settlement between the CA Attorney General and Houzz, Inc. that is the first of its kind in the United States. Among other things, the terms of the settlement require Houzz, Inc. to appoint a Chief Privacy Officer who will oversee the company’s compliance with privacy laws and report privacy concerns to the CEO and/or other senior executives. 

    The California Attorney General has also increased privacy enforcement through the creation of an E-Crime Unit in 2011 to prosecute identity theft, data intrusion, and crimes involving the use of technology. To date, the E-Crime Unit has conducted several investigations involving piracy, shutting down illegal streaming websites, and online counterfeit operations. Mr. Suvor noted a recent area of priority to the Unit: the prosecution of cyber exploitation, commonly known as “revenge porn.” 

    Mr. Suvor clarified that the AG’s Office adamantly believes the term “revenge porn” is a misnomer. The Office takes the position that the term “cyber exploitation” is more appropriate for two reasons.  First, porn is generally created for public consumption, whereas “revenge porn” was not created with a public audience in mind. In addition, the Office does not give any credence to the notion that the publisher of non-consensual porn has any legitimate interest in vengeance or revenge in carrying out such heinous acts. He noted that cyber exploitation is a serious nationwide epidemic and that California law expressly prohibits this conduct under California Penal Code, section 647. To tackle this problem, the Office is collaborating with the private sector. Mr. Suvor reported that Google, Facebook, Twitter, Reddit, and others have since adopted policies that will help victims combat cyber exploitation.

    Following Mr. Suvor’s keynote, Irina Raicu, Director of Internet Ethics at the Markkula Center for Applied Ethics, moderated a panel titled “What Is a Privacy Crime?” The well-rounded group of panelists consisted of Hanni Fakhoury, Senior Staff Attorney from the Electronic Frontier Foundation; Tom Flattery, Santa Clara County’s Deputy District Attorney; and Susan Freiwald, a Professor at the University of San Francisco School of Law. 

    Ms. Freiwald opened the panel by acknowledging how hard it is to define a privacy crime. Privacy interests are amorphous: to some, privacy is the right to be left alone; others seek privacy in their communications or in their autonomy; expectations and concerns vary from individual to individual. She drew a sharp distinction, however, between privacy crimes and privacy torts: with crimes, the State itself has an interest in punishing the offender.

    Ms. Freiwald also urged the audience to proceed with caution when defining privacy crimes. For example, she stressed the importance of due process: legislation must specify conduct so that people have notice of what exactly is illegal, what the relevant level of culpability is, whether a privacy crime must be subjectively or objectively harmful, and what defenses may be available to the accused. She further noted that protecting some people from privacy crimes could conflict with the First Amendment, and urged that we find a proper balance between protecting an individual’s privacy and leaving room for freedom of speech and freedom of the press.

    The co-panelists echoed Ms. Freiwald’s concerns and statements. Deputy District Attorney Tom Flattery shed light on how the Penal Code helps protect privacy, but also recognized that there are gaps that it does not address. While the Penal Code combats matters where one individual does something to harm another individual, it does not address matters Mr. Flattery referred to as “commercial surveillance,” where private companies use deceptive terms of service to invasively collect data on their users. 

    Mr. Flattery went into detail about the common use of the California Penal Code to deal with privacy crimes. Specifically, section 502 contains anti-hacking provisions that differentiate criminal activity by what an individual does with the data after gaining unauthorized access. For example, if someone merely gained unauthorized access to a social media or email account and did nothing with the data, that person would be subject to Penal Code § 502(c)(7), though a first offense is only considered an infraction, in the same vein as a speeding or parking ticket. However, if the individual used the information, then Penal Code § 502(c)(2) elevates the charge to a misdemeanor or felony. Mr. Flattery encouraged the audience to think about what the term “use” means in the context of the Code. Does this code section apply only when an individual uses the information for financial gain, or does sharing the data with a group of friends also constitute a “use”? Mr. Flattery stated that these questions don’t really have “good clean answers,” which leaves citizens without a bright-line rule in a context that will become increasingly important over time.
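    The charging scheme Mr. Flattery described can be summarized as a simple decision rule. The sketch below is an illustration of that structure only, not legal advice; the function name and charge labels are invented for this example, and real § 502 analysis involves many more factors.

```python
# Simplified illustration of the § 502 structure described in the panel:
# the charge level turns on what the person did with the data after
# gaining unauthorized access. Labels and logic are illustrative only.

def classify_502(gained_access: bool, used_data: bool, first_offense: bool) -> str:
    """Rough charge level under the scheme described in the talk."""
    if not gained_access:
        return "no § 502 charge"
    if used_data:
        # § 502(c)(2): using the information elevates the charge
        return "misdemeanor or felony (§ 502(c)(2))"
    # § 502(c)(7): mere unauthorized access, nothing done with the data
    if first_offense:
        return "infraction (§ 502(c)(7))"  # akin to a parking ticket
    return "misdemeanor (§ 502(c)(7))"

print(classify_502(True, False, True))  # mere first-time access
print(classify_502(True, True, True))   # access plus "use" of the data
```

As Mr. Flattery noted, the hard question hides inside the `used_data` flag: the statute does not cleanly define what counts as a “use.”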

    Another area of concern Mr. Flattery highlighted was the increasing theft of medical IDs and electronic medical records. In these instances, people go into a hospital or medical treatment facility and assume the identity of someone else to obtain free healthcare services under a stolen alias. As medical records increasingly become electronic, however, when the victim of this crime comes into the hospital with a legitimate medical emergency, his or her electronic medical record is full of inaccurate information. In these cases, the identity theft can be life-threatening: the victim’s record may document that someone under their name received a particular medication two weeks prior, when in fact the actual patient is fatally allergic to that treatment.

    Mr. Fakhoury brought a unique perspective to the debate, though one the other panelists largely agreed with. His takeaway was that when defining and addressing privacy crimes, we “need to chill out a little bit and think these things through.” Rather than adding more legislation, he stressed that we should examine whether the current California Penal Code sections could be used to address the problem. Mr. Fakhoury believes that the current penal code could fix at least some of the new problems society is facing with “privacy crimes.” For example, addressing Mr. Flattery’s earlier remarks about medical ID theft, Mr. Fakhoury noted that the general identity theft statute is an applicable statutory remedy, so he questioned why we would need another law to handle this problem. He also emphasized the potential pitfalls of adding an abundance of new and unnecessary legislation: new bills could be drafted sloppily, with ambiguous language left for courts to interpret, thereby covering more conduct than was originally intended.

    Not entirely against new legislation, Mr. Fakhoury urged support for CalECPA, aka SB-178 (which was signed by the Governor late last week). This new law provides citizens with privacy protections against law enforcement. Mr. Fakhoury distinguished this piece of legislation from others that might be quick to criminalize privacy crimes, as he believes it provides law enforcement with tools to get sensitive digital information, but it also protects the public by requiring law enforcement to get a search warrant beforehand. 

    Santa Clara County’s Supervising District Attorney Christine Garcia-Sen moderated the next panel, “What’s Being Done to Enforce Laws Addressing Privacy Crimes?” Attorney Ingo Brauer, Santa Clara County Deputy District Attorney Vishal Bathija, and Erica Johnstone of Ridder, Costa & Johnstone LLP all participated in an hour-long talk that discussed the obstacles and successes practitioners are facing in enforcing privacy crimes. 

    Mr. Bathija highlighted the fact that victims are frequently so embarrassed by these privacy crimes that they are hesitant to draw further attention to humiliating moments through court proceedings and enforcement. He used the example of a sexual assault case in which an underage female had been exchanging sexually explicit photos with another person. Before the case went to trial, the victim realized that the details of her sexual assault would be heard by the jury. Understandably, she vocally expressed her concern that she didn’t want other people to know that she had been subjected to this sexually deviant conduct by the offender.

    Erica Johnstone was quick to point out that a huge difficulty in litigating “revenge porn” or “cyber exploitation,” is the expense of doing so. Many firms cannot accept clients without a retainer fee of $10,000. If the case goes to court, a plaintiff can easily accrue a bill of $25,000, and if the party wants to litigate to get a judgment, the legal bill can easily exceed $100,000. This creates a barrier whereby most victims of cyber exploitation cannot afford to hire a civil litigator. Ms. Johnstone shared her experience of working for pennies on the dollar in order to help victims of these crimes, but stressed how time- and labor-intensive the work was. 

    Ms. Johnstone also pointed out the flawed rationale in using copyright law to combat revenge porn. Unless the victim is also the person who took the picture, the victim has no copyright in the photo. In addition, the non-consensual content often goes viral so quickly that it is impossible to employ copyright takedown notices to effectively tackle the problem. She described one case where a client and her mother spent 500 hours sending Digital Millennium Copyright Act takedown notices to websites. She also spoke on the issue of search results still displaying content that had been taken down, but was pleased to announce that Google and Bing had altered their practices. These updated policies allow a victim to go straight to the search engines and provide them with all URLs where the revenge porn is located, at which point the search engines will de-list all of those links from their query results. Ms. Johnstone also applauded California prosecutors for their enforcement of revenge porn cases and said they were “setting a high bar” that other states have yet to match.

    As a defense attorney, Ingo Brauer expressed his frustration with the Stored Communications Act, a law that safeguards digital content. He noted that while prosecutors are able to obtain digital content information under the SCA, the law does not provide the same access to all parties, such as defense and civil attorneys. Mr. Brauer stressed that in order for our society to ensure due process, digital content information must be available to both prosecutors and defense attorneys. Failure to provide equal access could result in wrongful prosecutions and miscarriages of justice.

    All three panelists were also adamant about educating others and raising awareness surrounding privacy crimes. In many instances, victims of revenge porn and other similar offenses are not aware of the remedies available to them or are simply too embarrassed to come forward. However, they noted that California offers more legal solutions than most states, both civilly and criminally. Their hope is that as the discussion surrounding privacy crimes becomes more commonplace, the protections afforded to victims will be utilized as well.

    The conference closed out with the panel “Balancing Privacy Interests in the Criminal Justice System.” Santa Clara Superior Court Judge Shelyna V. Brown, SCU Assistant Clinical Professor of Law Seth Flagsberg, and Deputy District Attorney Deborah Hernandez all participated on the panel moderated by SCU Law Professor Ellen Kreitzberg. 

    This area is particularly sensitive because both victims and the accused are entitled to certain privacy rights within the legal system, yet prioritizing or balancing these interests is difficult. For example, Judge Brown stated that in a hypothetical sexual assault case where the defense sought the victim’s psychological records, she would want to know whether the records had any relevance to the actual defense. She stressed that the privacy rights of the victim must be fairly weighed against the defendant’s right to fully cross-examine and confront his or her accusers. And even if the information is relevant, she noted, oftentimes you must decide whether all of it should be released and whether it should be released under seal.

    Overall, the Privacy Crimes conference served as an excellent resource for those interested in this expanding field. EFF Senior Staff Attorney Hanni Fakhoury stated, “This was a really well put together event. You have a real diversity of speakers and diversity of perspectives. I think what’s most encouraging is to have representatives from the District Attorney’s Office and the Attorney General’s Office, not only laying out how they see these issues, but being in an audience to hear civil libertarians and defense attorneys discuss their concerns. Having...very robust pictures, I think it’s great for the University and it’s great for the public interest as a whole to hear the competing viewpoints.”  

    Videos, photos, and resources from the event

  •  Et tu, Barbie?

    Wednesday, Oct. 14, 2015

    In a smart city, in a smart house, a little girl got a new Barbie. Her parents, who had enough money to afford a rather pricey doll, explained to the girl that the new Barbie could talk—could actually have a conversation with the girl. Sometime later, alone in her room with her toys, the little girl, as instructed, pushed on the doll’s belt buckle and started talking. After a few minutes, she wondered what Barbie would answer if she said something mean—so she tried that.

    Later, the girl’s mother accessed the app that came with the new doll and listened to her daughter’s conversation. The mom then went to the girl’s room and asked her why she had been mean to Barbie. The little girl learned something—about talking, about playing, about technology, about her parents.

    Or maybe I should have written all of the above using future tense—because “Hello Barbie,” according to media reports, does not hit the stores until next month.

    After reading several articles about “Hello Barbie,” I decided to ask several folks here at the university for their reactions to this new high-tech toy. (I read, think, and write all the time about privacy, so I wanted some feedback from folks who mostly think about other stuff.)  Mind you, the article I’d sent them as an introduction was titled “Will Barbie Be Hackers’ New Plaything?”—so I realize it wasn’t exactly a neutral way to start the conversation. With that caveat, though, here is a sample of the various concerns that my colleagues expressed.

    The first reaction came via email: “There is a sci-fi thriller in there somewhere…” (Thriller, yes, I thought to myself, though not sci-fi anymore.)

    The other concerns came in person. From a parent of grown kids: the observation that these days parents seem to want to know absolutely everything about their children, and that that couldn’t be healthy for either the parents or the kids. From the dad of a 3-year-old girl: “My daughter already loves Siri; if I gave her this she would stop talking to anybody else!” From a woman thinking back: “I used to have to talk for my doll, too…” The concerns echoed those raised in much of the media coverage of Hello Barbie—that she will stifle the imagination that kids deploy when they have to provide both sides of a conversation with their toys, or that she will violate whatever privacy children still have.

    But I was particularly struck by a paragraph in a Mashable article that described in more detail how the new doll/app combo will work:

    "When a parent goes through the process of setting up Hello Barbie via the app, it's possible to control the settings and manually approve or delete potential conversation topics. For example, if a child doesn’t celebrate certain holidays like Christmas, a parent can choose to remove certain lines from Barbie's repertoire."
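    The parental-control flow the article describes amounts to filtering the doll's repertoire by parent-removed topics. The sketch below is purely hypothetical: the topic names, lines, and function names are invented for illustration and do not reflect ToyTalk's or Mattel's actual design or API.

```python
# Hypothetical sketch of the topic-approval flow described in the
# Mashable quote. All names and topics are invented for illustration.

DEFAULT_REPERTOIRE = {
    "pets": ["Do you have a pet?"],
    "school": ["What's your favorite subject?"],
    "christmas": ["What do you want for Christmas?"],
}

def build_repertoire(removed_topics):
    """Return the doll's conversation lines minus parent-removed topics."""
    return {topic: lines
            for topic, lines in DEFAULT_REPERTOIRE.items()
            if topic not in removed_topics}

# A parent whose child doesn't celebrate Christmas removes that topic:
active = build_repertoire(removed_topics={"christmas"})
print(sorted(active))  # ['pets', 'school']
```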

    Is the question underlying all of this, really, one of control? Who will ultimately control Hello Barbie? Will it be Mattel? Will it be ToyTalk, the San Francisco company providing the “consumer-grade artificial intelligence” that enables Hello Barbie’s conversations? The parents who buy the doll? The hackers who might break in? The courts that might subpoena the recordings of the children’s chats with the doll?

    And when do children get to exercise control? When and how do they get to develop autonomy if even well-intentioned people (hey, corporations are people, too, now) listen in to—and control—even the conversations that the kids are having when they play, thinking they’re alone? (“…Toy Talk says that parents will have ‘full control over all account information and content,’ including sharing recordings on Facebook, YouTube, and Twitter,” notes an ABC News article; “data is sent to and from ToyTalk’s servers, where conversations are stored for two years from the time a child last interacted with the doll or a parent accessed a ToyTalk account,” points out the San Francisco Chronicle.)

    What do kids learn when they realize that the conversations they thought were private were actually being recorded, played back, and shared with business partners or their parents’ friends? All I can hope is that the little girls who will receive Hello Barbie will, as a result, grow up to be privacy activists—or, better yet, tech developers and designers who will understand, deeply, the importance of privacy by design.

    Photo by Mike Licht, used without modification under a Creative Commons license.


  •  Nothing to Hide? Nothing to Protect?

    Wednesday, Aug. 19, 2015

    Despite numerous articles and at least one full-length book debunking the premises and implications of this particular claim, “I have nothing to hide” is still a common reply offered by many Americans when asked whether they care about privacy.

    What does that really mean?

    An article by Conor Friedersdorf, published in The Atlantic, offers one assessment. It is titled “This Man Has Nothing to Hide—Not Even His Email Password.” (I’ll wait while you consider changing your email password right now, and then decide to do it some other time.) The piece details Friedersdorf’s interaction with a man named Noah Dyer, who responded to the writer’s standard challenge—"Would you prove [that you have nothing to hide] by giving me access to your email accounts, … along with your credit card statements and bank records?"—by actually providing all of that information. Friedersdorf then considers the ethical implications of Dyer’s philosophy of privacy-lessness, while carefully navigating the ethical shoals of his own decisions about which of Dyer’s information to look at and which to publish in his own article.

    Admitting to a newfound though limited respect for Dyer’s commitment to drastic self-revelation, Friedersdorf ultimately reaches, however, a different conclusion:

    Since Dyer granted that he was vulnerable to information asymmetries and nevertheless opted for disclosure, I had to admit that, however foolishly, he could legitimately claim he has nothing to hide. What had never occurred to me, until I sat in front of his open email account, is how objectionable I find that attitude. Every one of us is entrusted with information that our family, friends, colleagues, and acquaintances would rather that we kept private, and while there is no absolute obligation for us to comply with their wishes—there are, indeed, times when we have a moral obligation to speak out in order to defend other goods—assigning the privacy of others a value of zero is callous.

    I think it is more than callous, though. It is an abdication of our responsibility to protect others, whose calculations about disclosure and risk might be very different from our own. Saying “I have nothing to hide” is tantamount to saying “I have nothing and no one to protect.” It is either an acknowledgment of a very lonely existence or a devastating failure of empathy and imagination.

    As Friedersdorf describes him, Dyer is not a hermit; he has interactions with many people, at least some of whom (including his children) he appears to care about. And, in his case, his abdication is not complete; it is, rather, a shifting of responsibility. Because while he did disclose much of his personal information (which of course included the personal details of many others who had not been consulted, and whose “value system,” unlike his own, may not include radical transparency), Dyer wrote to Friedersdorf, the reporter, “[a]dditionally, while you may paint whatever picture of me you are inclined to based on the data and our conversations, I would ask you to exercise restraint in embarrassing others whose lives have crossed my path…”

    In other words, “I have nothing to hide; please hide it for me.”

    “I have nothing to hide” misses the fact that no person is an island, and that much of every person’s data is tangled with, interwoven with, and created in conjunction with other people’s.

    The theme of the selfishness or lack of perspective embedded in the “nothing to hide” response is echoed in a recent commentary by lawyer and privacy activist Malavika Jayaram. In an article about India’s Aadhaar ID system, Jayaram quotes Edward Snowden, who in a Reddit AMA session once said that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Jayaram builds on that, writing that the “nothing to hide” argument “locates privacy at an individual (some would say selfish) level and ignores the collective, societal benefits that it engenders and protects, such as the freedom of speech and association.”

    She rightly points out, as well, that the “’nothing to hide’ rhetoric … equates a legitimate desire for space and dignity to something sinister and suspect” and “puts the burden on those under surveillance … , rather than on the system to justify why it is needed and to implement the checks and balances required to make it proportional, fair, just and humane.”

    But there might be something else going on, at the same time, in the rhetorical shift from “privacy” to “something to hide”—a kind of deflection, of finger-pointing elsewhere: There, those are the people who have “something to hide”—not me! Nothing to see here, folks who might be watching. I accept your language, your framing of the issue, and your conclusions about the balancing of values or rights involved. Look elsewhere for troublemakers.

    Viewed this way, the “nothing to hide” response is neither naïve nor simplistically selfish; it is an effort—perhaps unconscious—at camouflage. The opposite of radical transparency.

    The same impetus might present itself in a different, also frequent response to questions about privacy and surveillance: “I’m not that interesting. Nobody would want to look at my information. People could look at information about me and it would all be banal.” Or maybe that is, for some people, a reaction to feelings of helplessness. If every day people read articles advising them about steps to take to protect their online privacy, and every other day they read articles explaining how those defensive measures are defeated by more sophisticated actors, is it surprising that some might try to reassure themselves (if not assure others) that their privacy is not really worth breaching?

    But even if we’re not “interesting,” whatever that means, we all do have information, about ourselves and others, that we need to protect. And our society gives us rights that we need to protect, too--for our sake and others'.

    Photo by Hattie Stroud, used without modification under a Creative Commons license.

  •  BroncoHack 2015 (Guest Post)

    Friday, May. 8, 2015

    Last weekend, Santa Clara University hosted BroncoHack 2015—a hackathon organized by the OMIS Student Network, with the goal of creating “a project that is innovative in the arenas of business and technology” while also reflecting the theme of “social justice.” The Markkula Center for Applied Ethics was proud to be one of the co-sponsors of the event.

    The winning project was “PrivaSee”—a suite of applications that helps prevent the leakage of sensitive and personally identifiable student information from schools’ networks. In the words of its creators, “PrivaSee offers a web dashboard that allows schools to monitor their network activity, as well as a mobile application that allows parents to stay updated about their kids’ digital privacy. A network application that sits behind the router of a school's network continuously monitors the network packets, classifies threat levels, and notifies the school administration (web) and parents (mobile) if it discovers student data being leaked out of the network, or if there are any unauthorized apps or services being used in the classrooms that could potentially syphon private data. For schools, it offers features like single dashboard monitoring of all kids and apps. For parents, it provides the power of on-the-move monitoring of all their kids’ privacy and the ability to chat with school administration in the event of any issues. Planned extensions like 'privacy bots' will crawl the Internet to detect leaked data of students who might have found ways to bypass a school's secure networks. The creators of PrivaSee believe that cybersecurity issues in connected learning environments are a major threat to kids' safety, and they strive to create a safer ecosystem.”
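    The network component the team describes — inspecting outbound packets, classifying threat levels, and notifying the school and parents — can be sketched roughly as follows. This is a reconstruction from their public description, not the team's actual code: the PII patterns, threat levels, and function names are all assumptions for illustration.

```python
import re

# Rough sketch of PrivaSee's described network component: scan outbound
# payloads for patterns that look like personally identifiable student
# information, classify a threat level, and notify the school (web
# dashboard) and parents (mobile app). Patterns, levels, and names here
# are illustrative assumptions, not the team's implementation.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(payload: str) -> str:
    """Return a threat level for one outbound payload."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(payload)]
    if "ssn" in hits:
        return "high"
    if hits:
        return "medium"
    return "none"

def monitor(packets, notify):
    """Check each outbound packet; call notify() on any suspected leak."""
    for payload in packets:
        level = classify(payload)
        if level != "none":
            notify(level, payload)

alerts = []
monitor(["hello world", "contact: kid@example.com"],
        lambda level, payload: alerts.append(level))
print(alerts)  # ['medium']
```

A real deployment would sit behind the school's router and inspect live traffic; the hard parts the sketch omits (TLS, reassembly, false-positive rates) are exactly where a project like this gets difficult.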

    From the winning team:

    "Hackathons are always fun and engaging. Personally, I put this one at the top of my list. I feel lucky to have been part of this energetic, multi-talented team, and I will never forget the fun we had. Our preparations started a week ago, brainstorming various ideas. We kick-started the event with analysis of our final idea and the impact it can create, rather than worrying about any technical challenges that might hit us. We divided our work, planned our approach, and enjoyed every moment while shaping our idea to a product. Looking back, I am proud to attribute our success to my highly motivated and fearless team with an unending thirst to bring a vision to reality. We are looking forward to testing our idea in real life and helping to create a safer community." - Venkata Sai Kishore Modalavalasa, Computer Science & Engineering Graduate Student, Santa Clara University

    "My very first hackathon, and an amazing experience indeed! The intellectually charged atmosphere, the intense coding, and the serious competition kept us on our toes throughout the 24 hours. Kudos to ‘Cap'n Sai,’ who guided us and helped take the product to near perfection. Kudos to the rest of my teammates, who coded diligently through the night. And finally, thank you to the organizers and sponsors of BroncoHack 2015, for having provided us with a platform to turn an idea into a functional security solution that can help us make a difference." - Ashish Nair, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco-hack was the first hackathon I ever attended, and it turned out to be an amazing experience. After pondering over many ideas, we finally decided to stick with our app: 'PrivaSee'. The idea was to come up with a way to protect kids from sending sensitive digital information that can potentially be compromised over the school’s network. Our objective was to build a basic working model (minimum viable product) of the app. It was a challenge to me because I was not experienced in the particular technical skill-set that was required to build my part of the app. This experience has most definitely strengthened my ability to perform and learn in high-pressure situations. I would definitely like to thank the organizers for supporting us throughout the event. They provided us with whatever our team needed and were very friendly about it. I plan to focus on resolving more complicated issues that still plague our society and carry forward and use what I learnt from this event." - Manish Kaushik, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco Hack 2015 was my first Hackathon experience. I picked up working with Android App development. Something that I found challenging and fun to do was working with parse cloud and Android Interaction. I am really happy that I was able to learn and complete the hackathon. I also find that I'm learning how to work and communicate effectively in teams and within time bounds. Everyone in the team comes in with different skill levels and you really have to adapt quickly in order to be productive as a team and make your idea successful within 24hrs." - Prajakta Patil, Computer Science & Engineering Graduate Student, Santa Clara University

    "I am extremely glad I had this opportunity to participate in Bronco Hack 2015. It was my first ever hackathon, and an eye-opening event for me. It is simply amazing how groups of individuals can come up with such unique and extremely effective solutions for current issues in a matter of just 24 hours. This event helped me realize that I am capable of much more than I expected. It was great working with the team we had, and special thanks to Captain Sai for leading the team to victory. " - Tanmay Kuruvilla, Computer Science & Engineering Graduate Student, Santa Clara University

    Congratulations to all of the BroncoHack participants—and yes, BroncoHack will return next Spring!

  •  Grant from Intel's Privacy Curriculum Initiative Will Fund New SCU Course

    Friday, Mar. 27, 2015

    Exciting news! A new course now being developed at Santa Clara University, funded by a $25,000 grant from Intel Corporation's Privacy Curriculum Initiative, will bring together engineering, business, and law students to address topics such as privacy by design, effective and accurate privacy policies, best‐practice cybersecurity procedures, and more. Ethics will be an important part of the discussion, and the curriculum will be developed by the High Tech Law Institute in conjunction with Santa Clara University’s School of Engineering, the Leavey School of Business, and the Markkula Center for Applied Ethics.

    More details here!


  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015
    President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    Many commentators have observed that if the goal is increased collaboration between the public and private sectors on issues related to cybersecurity, the biggest obstacle is a lack of trust. Things are not likely to get better as long as the anger and lack of trust are left unaddressed.  If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.


  •  Content versus Conversation

    Tuesday, Dec. 16, 2014
    Last month, at the pii2014 conference held in Silicon Valley (where “pii” stands for “privacy, identity, innovation”), one interesting session was a conversation between journalist Kara Swisher and the co-founders of Secret—one of a number of apps that allow users to communicate anonymously.  Such apps have been criticized by some as enabling cruel comments and cyberbullying; other commentators, however, like Rachel Metz in the MIT Tech Review, have argued that “[s]peaking up in these digital spaces can bring out the trolls, but it’s often followed by compassion from others, and a sense of freedom and relief.”
    During the conversation with David Byttow and Chrys Bader-Wechseler, Swisher noted that Secret says it is not a media company—but, she argued, it does generate content through its users. Secret’s co-founders pushed back. They claimed that what happens on their platform are conversations, not “content.”  Secret messages are ephemeral, they noted; they disappear soon after being posted (how soon is not clear). We’ve always had great, passionate conversations with people, they said, without having those conversations recorded forever; Secret, they argued, is just a new way to do that.
    Those comments left me thinking about the term “social media” itself. What does “media” mean in this context? I’m pretty sure that most Facebook or Twitter users don’t see themselves as content-creators for media companies. They see themselves, I would guess, as individuals engaged in conversations with other individuals. But those conversations do get treated like media content in many ways. We keep hearing about social media platforms collecting the “data” or “content” created by their users, analyzing that content, tweaking it to “maximize engagement,” using it as fodder for behavioral research, etc.
    There are other alternatives for online conversations, of course. Texting and emailing are never claimed to constitute “content creation” for media companies. But texts and email conversations are targeted, directed. They have an address line, which has to be filled in.
    Tools like Secret, however, enable a different kind of interaction. If I understand it correctly, this is more like shouting out a window and—more often than not—getting some response (from people you know, or people in your area).  It’s hoping to be heard, and maybe acknowledged, but not seen, not known.
    A reporter for Re/Code, Nellie Bowles, once wrote about a “real-life” party organized through Secret. Some of the conversations that took place at that party were pretty odd; some were interesting; but none of them became “content” until Bowles wrote about them.
    Calling social media posts “content” turns them into a commodity, and makes them sound less personal. Calling them parts of a conversation is closer, I think, to what most people perceive them to be, and reminds us of social norms that we have around other people’s conversations—even if they’re out loud, and in public.
    It’s a distinction worth keeping in mind. 
    Photo by Storebukkebruse, used without modification under a Creative Commons license.
  •  Movie Review: Citizenfour

    Tuesday, Nov. 25, 2014
    Sona Makker is a second-year law student at Santa Clara University’s School of Law, in the process of earning a Privacy Law certificate. This piece first appeared in The Advocate--the law school's student-run newspaper.
    When The Guardian first leaked the story about the National Security Agency’s surveillance programs, I was sitting in a conference room at one of the largest privacy conferences in the world. I couldn’t help but laugh at the irony. I was surrounded by some of the world’s leading experts in this field, who have written texts and treatises on the current state of privacy law in this country. Surveillance wasn’t on the agenda for this conference, but of course, since that day, government surveillance has remained at the top of the public’s agenda.
    To some, the man behind the NSA revelations, Edward Snowden, is a hero; to others he is a traitor. Whatever you may believe, I recommend seeing Laura Poitras’ latest documentary-- Citizenfour-- which follows the story of NSA whistleblower Edward Snowden during the moments leading up to the Guardian story that exposed the U.S. government’s secret collection of Verizon cellphone data.
    The majority of the film takes place in a hotel room in Hong Kong. Snowden contacted Poitras through encrypted channels. Only after a series of anonymous e-mail exchanges did the two finally trust that the other was really who they said they were-- “assume your adversary is capable of 3 billion guesses per second,” he wrote her. Poitras and Snowden were eventually joined by Guardian reporter Glenn Greenwald, whom Snowden contacted under the pseudonym “Citizenfour.”
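    Snowden’s warning, “assume your adversary is capable of 3 billion guesses per second,” invites a quick back-of-the-envelope calculation (a hypothetical sketch, not anything shown in the film): at that rate, how long would it take to exhaust a password’s entire search space?

```python
# Hypothetical back-of-the-envelope sketch: worst-case time to try every
# password in a search space, at an assumed 3 billion guesses per second.

GUESSES_PER_SECOND = 3_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to exhaust all passwords of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters fall in about a minute:
print(f"{worst_case_seconds(26, 8):.0f} seconds")                     # ≈ 70 seconds

# 12 characters from the 95 printable ASCII symbols hold out far longer:
print(f"{worst_case_seconds(95, 12) / SECONDS_PER_YEAR:,.0f} years")  # ≈ 5.7 million years
```

    The exact guess rate is unknowable, but the arithmetic shows why the search space grows exponentially with password length—and why Snowden insisted on long passphrases and encrypted channels before trusting anyone.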
    Snowden guides the journalists through the piles and piles of NSA documents as they strategize how to publish and inform the American public about the government snooping programs, including Verizon, AT&T, and other telecom companies sharing phone records with the NSA, FBI access to data from private web companies like Yahoo and Google, and the PRISM program that authorized the collection of e-mail, text messages, and voicemails of both foreigners and US citizens. Snowden appears to be very calm and quiet as he unveils all of this.
    Snowden worried that “personality journalism” would end up making the story about him, rather than the substance of his revelations. When Greenwald’s stories were published in the Guardian, the three sat together and watched as the media reacted and the story unfolded on TV. “We are building the biggest weapon for oppression in the history of mankind,” said Snowden.
    The film also contextualizes the leaks, providing background on the extent of government surveillance. Poitras interviewed William Binney, a former NSA employee who also blew the whistle -- “a week after 9/11, they began actively spying on everyone in this country,” he says. She also includes C-SPAN footage of former NSA chief Keith Alexander, who flatly denied the existence of any such snooping programs to Congress.
    There is a perfect scene (almost too perfect) where Poitras films Snowden’s reaction to a fire alarm that went off during one of their meetings in the hotel. It was a routine test, but Snowden questions whether or not someone staged it. The timing “seems fishy,” he says. Is the room bugged? As the viewer you start to question whether it was actually a test too, but then you ask yourself “is that even possible?” It seems so outlandish, straight out of a scene from 24 or something. With that, Poitras effectively prompts the viewer to think that the whole thing, the snooping, the surveillance, it all seems outlandish, but clearly, the evidence proves otherwise.
    I am optimistic that the law can serve as a powerful counterweight capable of curbing mass surveillance, but this cannot happen without continued public pressure. The Internet is changing how we live and how we interact with our social institutions. Institutions—how we structure our everyday lives and how we produce social order—are not written in stone, but are mutable and capable of evolving alongside our own evolution as social beings. This evolution is dependent upon the will and foresight of those who are willing to speak up. Citizenfour puts a human face to Snowden, and Poitras does so without painting him as a hero or a villain, but just as a twenty-something concerned citizen whom many can relate to. “This is the first time people can see who Snowden really is,” said Glenn Greenwald after the film’s premiere. “You can decide what you think about him."
    Photo by Mike Mozart, used without modification under a Creative Commons license.
  •  Cookies and Privacy: A Delicious Counter-Experiment

    Monday, Nov. 17, 2014


    Last month, a number of stories in publications such as ProPublica, Mashable, Slate, and The Smithsonian Magazine covered an “experiment” by artist Risa Puno, who asked attendees at an art festival to disclose bits of personal information about themselves in exchange for cookies.  ProPublica described the event as a “highly unscientific but delicious experiment” in which “380 New Yorkers gave up sensitive personal information—from fingerprints to partial Social Security numbers—for a cookie.” Of course, we are given no count of the number of people who refused the offer, and the article notes that “[j]ust under half—or 162 people—gave what they said were the last four digits of their Social Security numbers”—with that rather important “what they said” caveat casually buried mid-sentence.

    “To get a cookie,” according to the ProPublica story, “people had to turn over personal data that could include their address, driver's license number, phone number and mother's maiden name”—the accuracy of most of which, of course, Puno could also not confirm.
    All of this is shocking only if one assumes that people are not capable of lying (especially to artists offering cookies). But the artist declared herself shocked, and ProPublica somberly concluded that “Puno's performance art experiment highlights what privacy experts already know: Many Americans are not sure how much their personal data is worth, and that consumer judgments about what price to put on privacy can be swayed by all kinds of factors.”
    In this case, I am at least thankful for the claim that the non-experiment “highlights,” rather than “proves” something. Other stories, however, argued that the people convinced to give up information “demonstrated just how much their personal information was worth.” The Smithsonian argued that the “artistic experiment is confirmation of the idea that people really just have no sense of what information and privacy is worth other than, variably, a whole lot, or, apparently, a cookie.” The headline in The Consumerist blared, “Forget Computer Cookies: People Happily Give Up Personal Data For The Baked Kind” (though, in all fairness, The Consumerist article did highlight the “what they said” bit, and noted that the “finely-honed Brooklynite sense of modern irony may have played a role, too. Plenty of purchasers didn’t even eat their cookies…. They ‘bought’ them so they could post photos on Twitter and Instagram saying things like, ‘Traded all my personal data for a social media cookie’…”—which suggests rather more awareness than Puno gives people credit for).
    In any case, prompted by those stories, I decided that a flip-side “artistic experiment” was in order. Last week, together with my partner in privacy-protective performance art—Robert Henry, who is Santa Clara University’s Chief Information Security Officer—I set up a table in between the campus bookstore and the dining area.  Bob had recently sent out a campus-wide email reminding people to change their passwords, and we decided that we would offer folks cookies in return for password changes. We printed out a sign that read “Treats for Password Changes,” and we set out two types of treats: cookies and free USB drives. The USB drives all came pre-loaded with a file explaining the security dangers associated with picking up free USB drives. The cookies came pre-loaded with chocolate chips.
    We are now happy to report our results. First, a lot of people don’t trust any offers of free cookies. We got a lot of very suspicious looks. Second, within the space of about an hour and a half, about 110 people were easily convinced to change one of their passwords—something that is a good privacy/security practice in itself—in exchange for a cookie. Does this mean people do care about privacy? (To anticipate your question: some people pulled out their phones or computers and appeared to be changing a password right there; others promised to change a password when they got to their computer; we have no way of knowing if they did—just like Puno had no way of knowing whether much of the “information” she got was true. Collected fingerprints aside…) Third, cookies were much, much more popular than the free USB drives. Of course, the cookies were cheaper than the USB drives. Does this mean that people are aware of the security dangers posed by USB drives and are willing to “pay” for privacy?
    Responses from the students, parents, and others who stopped to talk with us and enjoy the soft warm chocolate-chip cookies ranged from “I’m a cryptography student and I change my passwords every three months” to “I only have one password—should I change that?” to “I didn’t know you were supposed to change passwords” to “But I just changed my password in response to your email” (which made Bob really happy). It was, if nothing else, an educational experience—in some cases for us, in others for them.
    So what does our “artistic experiment” prove? Absolutely nothing, of course—just like Puno’s “experiment,” which prompted so much coverage. (Or maybe they both prove that people like free cookies.)
    The danger with projects like hers, though, is that their “conclusions” are often echoed in discussions about business, regulation, or public policy in general: If people give up personal information for a cookie, the argument goes, why should we protect privacy? That is the argument that needs to be refuted—again and again. Poll after poll finds that people say they do value their privacy, are deeply concerned by its erosion, and want more laws to protect it; but some refuse to believe them and turn, instead, to “evidence” from silly “experiments.” Until they do, we need more flip-side “experiments”—complete, of course, with baked goods.
  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014

    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy.  (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc.  Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by the Santa Clara Magazine.  One of the things I had asked was whether the newly-revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, "[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?"

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, "[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?"

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford
