

Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

  •  Privacy and Diversity

    Friday, Jun. 12, 2015
    Teams that work on privacy-protective features for our online lives are much more likely to be effective if those teams are diverse, in as many ways as possible.
    Here is what led me to this (maybe glaringly obvious) insight:
    First, an event that I attended at Facebook’s headquarters, called “Privacy@Scale,” which brought together academics, privacy practitioners (from both the legal and the tech sides), regulators, and product managers. (We had some great conversations.)
    Second, a study that was recently published with much fanfare (and quite a bit of tech media coverage) by the International Association of Privacy Professionals, showing that careers in privacy are much more likely than others to provide gender pay parity—and including the observation that there are more women than men in the ranks of Chief Privacy Officers.
    Third, a story from a law student who had interned on the privacy team of a large Silicon Valley company, who mentioned sitting in a meeting and thinking to herself that something being proposed as a feature would never have been accepted in the culture that she came from—would in fact have been somewhat taboo, and might have upset people if it were broadly implemented, rather than offered as an opt-in—and realizing that none of the other members of the team understood this.
    And fourth, a question that several commenters asked earlier this year when Facebook experienced its “It’s Been a Great Year” PR disaster (after a developer wrote about the experience of seeing his daughter’s face auto-inserted by Facebook algorithms under a banner reading “It’s Been a Great Year!” when in fact his daughter had died that year): Had there been any older folks on the team that released that feature? If not, would the perspective of some older team members have tempered the roll-out, provided a word of caution?
    Much has been said, for a long time, about how it’s hard to “get privacy right” because privacy is all about nuance and gray areas, and conceptions of privacy vary so much among individuals, cultures, contexts, etc.  Given that, it makes sense that diverse teams working on privacy-enhancing features would be better able to anticipate and address problems. Not all problems, of course—diversity would not be a magic solution. It would, however, help.
    Various studies have recently shown that diversity on research teams leads to better science, that cultural diversity on global virtual teams has a positive effect on decision-making, that meaningful gender diversity in the workplace improves companies’ bottom line, and that “teams do better when they are composed of people with the widest possible range of personalities, even though it takes longer for such psychologically diverse teams to achieve good cooperation.”
    In Silicon Valley, the talk about team building tends to be about “culture fit” (or, in more sharply critical accounts, about “broculture”). As it turns out, though, the right “culture fit” for a privacy team should probably include diversity (of background, gender, age, skills, and even personality), combined with an understanding that one’s own perspectives are not universal; the ability to listen; and curiosity about and respect for difference.
    Photo by Sean MacEntee, used without modification under a Creative Commons license.
  •  Which Students? Which Rights? Which Privacy?

    Friday, May. 29, 2015


    Last week, researcher danah boyd, who has written extensively about young people’s attitudes toward privacy (and debunked many pervasive “gut feelings” about those attitudes and related behaviors), wrote a piece about the several bills now working their way through Congress that aim to protect “student privacy.” boyd is not impressed. While she agrees that reform of current educational privacy laws is much needed, she writes, "Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate."
    boyd highlights four different “threat models” and argues that the proposed bills do nothing to address two of those: the “Consumer Finance Threat Model,” in which student data would “fuel the student debt ecosystem,” and the “Criminal Justice Threat Model,” in which such data would help build “new policing architectures.”
    As boyd puts it, “the risks that we’re concerned about are shaped by the fears of privileged parents.”
    In a related post called “Students: The one group missing from student data privacy laws and bills,” journalist Larry Magid adds that the proposed bills “are all about parental rights but only empower students once they turn 18.” Referencing boyd’s research, he broadens the conversation to argue that “[i]t’s about time we start to respect privacy, free speech rights and intellectual property rights of children.”
    While free speech and property rights are important, the protection of privacy in particular is essential for the full development of the self. The fact that children and young people need some degree of privacy not just from government or marketers but from their own well-intentioned family members has been particularly obscured by pervasive tropes like “young people today don’t care about privacy.”
    Of course, one way to combat those false tropes is to talk to young people directly. Just ask them: are there some things they keep to themselves, or share only with a few close friends or family members? And no, the fact that some of them post lots of things on social media that their elders might not does not mean that they “don’t care about privacy.” It just means that privacy boundaries vary—from generation to generation, from culture to culture, from context to context, from individual to individual.
    The best recent retort to statements about young people and privacy comes from security expert Bruce Schneier, who answered a question from an interviewer with some questions of his own: "Who are all these kids who are growing up without the concept of digital privacy? Is there even one? … All people care deeply about privacy—analog, digital, everything—and kids are especially sensitive about privacy from their parents, teachers, and friends. … Privacy is a vital aspect of human dignity, and we all value it."
    Given that, boyd’s critique of current efforts aimed at protecting student privacy is a call to action: Policy makers (and, really, all of us) need to better understand the true threats, and to better protect those who are most vulnerable in a “hypersurveilled world.”


    Photo by Theen Moy, used without modification under a Creative Commons license.

  •  "Harrison Bergeron" in Silicon Valley -- Part II

    Friday, May. 22, 2015

    A few weeks ago, I wrote about Kurt Vonnegut’s short story “Harrison Bergeron.” In the world of that story the year is 2081, and, in an effort to render all people “equal,” the  government imposes handicaps on all those who are somehow better than average. One of the characters, George, whose intelligence is "way above normal," has "a little mental handicap radio in his ear.”

    As George tries to concentrate on something,

    “[a] buzzer sounded in George's head. His thoughts fled in panic, like bandits from a burglar alarm.

    "That was a real pretty dance, that dance they just did," said Hazel.

    "Huh" said George.

    "That dance-it was nice," said Hazel.

    "Yup," said George. He tried to think a little about the ballerinas. … But he didn't get very far with it before another noise in his ear radio scattered his thoughts.

    George winced. So did two out of the eight ballerinas.

    Hazel saw him wince. Having no mental handicap herself, she had to ask George what the latest sound had been.

    "Sounded like somebody hitting a milk bottle with a ball peen hammer," said George.

    "I'd think it would be real interesting, hearing all the different sounds," said Hazel a little envious. "All the things they think up."

    "Um," said George.

    "Only, if I was Handicapper General, you know what I would do?" said Hazel. … "I'd have chimes on Sunday--just chimes. Kind of in honor of religion."

    "I could think, if it was just chimes," said George.

    Re-reading the story, I thought about the work of the late professor Cliff Nass, whose “pioneering research into how humans interact with technology,” as the New York Times described it, “found that the increasingly screen-saturated, multitasking modern world was not nurturing the ability to concentrate, analyze or feel empathy.”

    If we have little “mental handicap radios” in our ears, these days, it’s usually because we put them there—or on our eyes, or wrists, or just in our hands—ourselves (though some versions are increasingly required by employers or schools). Still, like the ones in the story, they are making it more difficult for all of us to focus on key tasks, to be present for our loved ones, to truly take in and respond to our surroundings.

    In anticipation of the Memorial Day weekend, I wish you a few days of lessened technological distractions. And, if you have some extra time, you might want to read some of Professor Nass’s research.


  •  How Google Can Illuminate the "Right to Be Forgotten" Debate: Two Requests

    Thursday, May. 14, 2015


    Happy Birthday, Right-to-Have-Certain-Results-De-Listed-from-Searches-on-Your-Own-Name-,-Depending-on-the-Circumstances!

    It’s now been a year since the European Court of Justice shocked (some) people with a decision that has mistakenly been described as announcing a “right to be forgotten.”

    Today, 80 Internet scholars sent an open letter to Google asking the company to release additional aggregate data about the company’s implementation of the court decision.  As they explain,

    The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

    We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the [“right to be forgotten”] in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

    Although Google has released a Transparency Report with some aggregate data and some examples of the delinking decisions reached so far, the signatories find that effort insufficient. “Beyond anecdote,” they write,

    we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

    For now, they add, the participants in the delisting debate “do battle in a data vacuum, with little understanding of the facts.”

    More detailed data is certainly much needed. What remains striking, in the meantime, is how little understanding of the facts many people continue to have about what the decision itself mandates. A year after the decision was issued, an associate editor for Engadget, for example, still writes that, as a result of it, “if Google or Microsoft hides a news story, there may be no way to get it back.” 

    To “get it back”?! Into the results of a search on a particular person’s name? Because that is the entire scope of the delinking involved here—when the delinking does happen.

    In response to a request for comment on the Internet scholars’ open letter, a Google spokesman told The Guardian that “it’s helpful to have feedback like this so we can know what information the public would find useful.” In that spirit of helpful feedback, may I make one more suggestion?

    Google’s RTBF Transparency Report (updated on May 14) opens with the line, “In a May 2014 ruling, … the Court of Justice of the European Union found that individuals have the right to ask search engines like Google to remove certain results about them.” Dear Googlers, could you please add a line or two explaining that “removing certain results” does not mean “removing certain stories from the Internet, or even from the Google search engine”?

    Given the anniversary of the decision, many reporters are turning to the Transparency Report for information for their articles. This is a great educational opportunity. With a line or two, while it weighs its response to the important request for more detailed reporting on its actions, Google could already improve the chances of a more informed debate.

    [I’ve written about the “right to be forgotten” a number of times: chronologically, see “The Right to Be Forgotten, Or the Right to Edit?” “Revisiting the ‘Right to Be Forgotten,’” “The Right to Be Forgotten, The Privilege to Be Remembered” (that one published in Re/code), “On Remembering, Forgetting, and Delisting,” “Luciano Floridi’s Talk at Santa Clara University,” and, most recently, “Removing a Search Result: An Ethics Case Study.”]

    (Photo by Robert Scoble, used without modification under a Creative Commons license.)


  •  BroncoHack 2015 (Guest Post)

    Friday, May. 8, 2015

    Last weekend, Santa Clara University hosted BroncoHack 2015—a hackathon organized by the OMIS Student Network, with the goal of creating “a project that is innovative in the arenas of business and technology” while also reflecting the theme of “social justice.” The Markkula Center for Applied Ethics was proud to be one of the co-sponsors of the event.

    The winning project was “PrivaSee”—a suite of applications that helps prevent the leakage of sensitive and personally identifiable student information from schools’ networks. In the words of its creators, “PrivaSee offers a web dashboard that allows schools to monitor their network activity, as well as a mobile application that allows parents to stay updated about their kids’ digital privacy. A network application that sits behind the router of a school's network continuously monitors the network packets, classifies threat levels, and notifies the school administration (web) and parents (mobile) if it discovers student data being leaked out of the network, or if there are any unauthorized apps or services being used in the classrooms that could potentially syphon private data. For schools, it offers features like single dashboard monitoring of all kids and apps. For parents, it provides the power of on-the-move monitoring of all their kids’ privacy and the ability to chat with school administration in the event of any issues. Planned extensions like 'privacy bots' will crawl the Internet to detect leaked data of students who might have found ways to bypass a school's secure networks. The creators of PrivaSee believe that cybersecurity issues in connected learning environments are a major threat to kids' safety, and they strive to create a safer ecosystem.”
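    To make that description a bit more concrete, here is a minimal sketch (my own illustration, not the team’s actual code) of the kind of outbound-packet inspection PrivaSee describes. It assumes the Python scapy library and capture privileges; the PII patterns and the school subnet are hypothetical placeholders.

```python
# A minimal sketch of PrivaSee-style monitoring: watch traffic leaving a
# school network and flag packets whose payload looks like student PII.
# Not the BroncoHack team's code; scapy and the patterns below are assumptions.

import re
from scapy.all import sniff, Raw, IP

# Hypothetical patterns for data a school would not want leaving its network.
PII_PATTERNS = {
    "email": re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),
}

SCHOOL_SUBNET = "10.0."  # placeholder for the school's internal address range

def inspect(pkt):
    """Flag packets from the school subnet that appear to carry PII."""
    if IP in pkt and Raw in pkt and pkt[IP].src.startswith(SCHOOL_SUBNET):
        payload = bytes(pkt[Raw].load)
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(payload):
                # A real system would classify threat levels and notify the
                # dashboard and parents' app; here we just print an alert.
                print(f"possible {label} leak: {pkt[IP].src} -> {pkt[IP].dst}")

# Hand every captured IP packet to inspect(); requires capture privileges.
sniff(filter="ip", prn=inspect, store=False)
```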

    From the winning team:

    "Hackathons are always fun and engaging. Personally, I put this one at the top of my list. I feel lucky to have been part of this energetic, multi-talented team, and I will never forget the fun we had. Our preparations started a week ago, brainstorming various ideas. We kick-started the event with analysis of our final idea and the impact it can create, rather than worrying about any technical challenges that might hit us. We divided our work, planned our approach, and enjoyed every moment while shaping our idea to a product. Looking back, I am proud to attribute our success to my highly motivated and fearless team with an unending thirst to bring a vision to reality. We are looking forward to testing our idea in real life and helping to create a safer community." - Venkata Sai Kishore Modalavalasa, Computer Science & Engineering Graduate Student, Santa Clara University

    "My very first hackathon, and an amazing experience indeed! The intellectually charged atmosphere, the intense coding, and the serious competition kept us on our toes throughout the 24 hours. Kudos to ‘Cap'n Sai,’ who guided us and helped take the product to near perfection. Kudos to the rest of my teammates, who coded diligently through the night. And finally, thank you to the organizers and sponsors of BroncoHack 2015, for having provided us with a platform to turn an idea into a functional security solution that can help us make a difference." - Ashish Nair, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco-hack was the first hackathon I ever attended, and it turned to be an amazing experience. After pondering over many ideas, we finally decided to stick with our app: 'PrivaSee'. The idea was to come up with a way to protect kids from sending sensitive digital information that can potentially be compromised over the school’s network. Our objective was to build a basic working model (minimum viable product) of the app. It was a challenge to me because I was not experienced in the particular technical skill-set that was required to build my part of the app. This experience has most definitely strengthened my ability to perform and learn in high pressure situations. I would definitely like to thank the organizers for supporting us throughout the event. They provided us with whatever our team needed and were very friendly about it. I plan to focus on resolving more complicated issues that still plague our society and carry forward and use what I learnt from this event." - Manish Kaushik, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco Hack 2015 was my first Hackathon experience. I picked up working with Android App development. Something that I found challenging and fun to do was working with parse cloud and Android Interaction. I am really happy that I was able to learn and complete the hackathon. I also find that I'm learning how to work and communicate effectively in teams and within time bounds. Everyone in the team comes in with different skill levels and you really have to adapt quickly in order to be productive as a team and make your idea successful within 24hrs." - Prajakta Patil, Computer Science & Engineering Graduate Student, Santa Clara University

    "I am extremely glad I had this opportunity to participate in Bronco Hack 2015. It was my first ever hackathon, and an eye-opening event for me. It is simply amazing how groups of individuals can come up with such unique and extremely effective solutions for current issues in a matter of just 24 hours. This event helped me realize that I am capable of much more than I expected. It was great working with the team we had, and special thanks to Captain Sai for leading the team to victory. " - Tanmay Kuruvilla, Computer Science & Engineering Graduate Student, Santa Clara University

    Congratulations to all of the BroncoHack participants—and yes, BroncoHack will return next Spring!

  •  Is Facebook Becoming a Better Friend?

    Thursday, Apr. 30, 2015
    This Feb. 8, 2012 photo shows workers inside of Facebook headquarters in Menlo Park, Calif. (AP Photo/Paul Sakuma)

    Good friends understand boundaries and keep your secrets.  You can’t be good friends with someone you don’t trust.

    Facebook, the company that made “friend” a verb and invited you to mix together your bosom buddies, relatives, acquaintances, classmates, lovers, co-workers, exes, teachers, and who-knows-who-else into one group it called “friends,” and has been helping you stay in touch with all of them and prompting you to reveal lots of things to all of them, is taking some steps to become more trustworthy.

    Specifically, as TechCrunch and others recently reported, as of April 30 Facebook’s modified APIs will no longer allow apps to collect data both from their users and from their users’ Facebook “friends”—something many apps did until now, often without the users (or their friends) realizing it.*

    As TechCrunch’s Josh Constine puts it, “Some users will see [this] as a positive move that returns control of personal data to its rightful owners. Just because you’re friends with someone, doesn’t mean you necessarily trust their judgment about what developers are safe to deal with. Now, each user will control their own data destiny.” Moreover, with Facebook’s new APIs, each user will have more “granular control” over what permissions he or she grants to an app in terms of data collection or other actions—such as permission to post to his or her Newsfeed. Constine writes that

    Facebook has now instituted Login Review, where a team of its employees audit any app that requires more than the basic data of someone’s public profile, list of friends, and email address. The Login Review team has now checked over 40,000 apps, and from the experience, created new, more specific permissions so developers don’t have to ask for more than they need. Facebook revealed that apps now ask an average of 50 percent fewer permissions than before.
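    For readers curious what that “granular control” looks like from a developer’s side, here is a minimal sketch (my own illustration, not Facebook’s sample code) of an app checking which permissions a user actually granted, via the Graph API’s /me/permissions edge; the access token below is a hypothetical placeholder.

```python
# A minimal sketch: under the newer Graph API, users can decline individual
# permissions, so an app should check what it was actually granted rather
# than assume it received everything it asked for.
# The token value is a placeholder, not real credentials.

import requests

USER_TOKEN = "EAAB..."  # hypothetical placeholder for a user access token

resp = requests.get(
    "https://graph.facebook.com/v2.3/me/permissions",
    params={"access_token": USER_TOKEN},
)
resp.raise_for_status()

granted = {p["permission"] for p in resp.json()["data"] if p["status"] == "granted"}

# Degrade gracefully instead of assuming access to optional data.
if "user_friends" in granted:
    print("User chose to share their friends list (only friends who also use the app).")
else:
    print("Friends list not shared; the app should still work without it.")
```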

    These are important changes, specifically intended by Facebook to increase user trust in the platform. They are certainly welcome steps. However, Facebook might ponder the first line of Constine’s TechCrunch article, which reads, “It was always kind of shady that Facebook let you volunteer your friends’ status updates, check-ins, location, interests and more to third-party apps.” Yes, it was. It should have been obvious all along that users should “control their own data destiny.” Facebook’s policies and lack of clarity about what they made possible turned many of us who used it into somewhat inconsiderate “friends.”

    Are there other policies that continue to have that effect? So many of our friendship-related actions are now prompted and shaped by the design of the social platforms on which we perform them—and controlled even more by algorithms such as the Facebook one that determines which of our friends’ posts we see in our Newsfeed (no, they don’t all scroll by in chronological order; what you see is a curated feed, in which the parameters for curation are not fully disclosed and keep changing).

    Facebook might be becoming a better, more trustworthy friend (though a “friend” that, according to The Atlantic, made $5 billion last year by showing you ads, “more than [doubling] digital ad revenue over the course of two years”). Are we becoming better friends, though, too? Or should we be clamoring for even more transparency and more changes that would empower us to be that?

    *  We warned about this practice in our Center’s module about online privacy: “Increasingly, you may… be allowing some entities to collect a lot of personal information about all of your online ‘friends’ by simply clicking ‘allow’ when downloading applications that siphon your friends' information through your account. On the flip side, your ‘friends’ can similarly allow third parties to collect key information about you, even if you never gave that third party permission to do so.” Happily, we’ll have to update that page now…


  •  A New Ethics Case Study

    Friday, Apr. 24, 2015
    A Google receptionist works at the front desk in the company's office in this Oct. 2, 2006, file photo. (AP Photo/Mark Lennihan, File)

    In October 2014, Google inaugurated a Transparency Report detailing its implementation of the European court decision generally (though mistakenly) described as being about “the right to be forgotten.” To date, according to the report, Google has received more than 244,000 requests for removals of URLs from certain searches involving names of EU residents. Aside from such numbers, the Transparency Report includes examples of requests received--noting, in each case, whether or not Google complied with the request.

    The “right to be forgotten” decision and its implementation have raised a number of ethical issues. Given that, we thought it would be useful to draw up an ethics case study that would flesh out those issues; we published that yesterday: see “Removing a Search Result: An Ethics Case Study.”

    What would you decide, if you were part of the decision-making team tasked with evaluating the request described in the case study?


  •  The Revised “Introduction to Software Engineering Ethics” Module Is Now Live

    Thursday, Apr. 9, 2015

    The Center’s “All About Ethics” blog this week highlights the revised version of our free online ethics module for introductory software engineering courses. That blog post and the earlier coverage of the module in Slate detail the impetus and the intent of the project; a related article by the module’s author (Professor Shannon Vallor) and special contributor (Professor Arvind Narayanan) explains why software engineering courses should include ethics coverage; and, last but not least, the module itself is well worth reading, because the issues that it raises and the case studies that it examines impact all of us—coders or not.

    The revised version of the module, released earlier this week, includes more guidance for instructors who want to use the materials, as well as an extended bibliography. As always, we’d love to hear suggestions for additional case studies and feedback on the module overall!

    Photo: Professor Shannon Vallor, Santa Clara University

  •  Harrison Bergeron in Silicon Valley

    Wednesday, Apr. 1, 2015
    Certain eighth graders I know have been reading “Harrison Bergeron,” so I decided to re-read it, too. The short story, by Kurt Vonnegut, describes a dystopian world in which, in an effort to make all people equal, a government imposes countervailing handicaps on all citizens who are somehow naturally gifted: beautiful people are forced to wear ugly masks; strong people have to carry around weights in proportion to their strength; graceful people are hobbled; etc. In order to make everybody equal, in other words, all people are brought to the lowest common denominator. The title character, Harrison Bergeron, is particularly gifted and therefore particularly impaired. As Vonnegut describes him,
    … Harrison's appearance was Halloween and hardware. Nobody had ever born heavier handicaps. He had outgrown hindrances faster than the H-G men could think them up. Instead of a little ear radio for a mental handicap, he wore a tremendous pair of earphones, and spectacles with thick wavy lenses. The spectacles were intended to make him not only half blind, but to give him whanging headaches besides.
    Scrap metal was hung all over him. Ordinarily, there was a certain symmetry, a military neatness to the handicaps issued to strong people, but Harrison looked like a walking junkyard. In the race of life, Harrison carried three hundred pounds.
    And to offset his good looks, the H-G men required that he wear at all times a red rubber ball for a nose, keep his eyebrows shaved off, and cover his even white teeth with black caps at snaggle-tooth random.
    In classroom discussions, the story is usually presented as a critique of affirmative action. Such discussions miss the fact that affirmative action aims to level the playing field, not the players.
    In the heart of Silicon Valley, in a land that claims to value meritocracy but ignores the ever more sharply tilted playing field, “Harrison Bergeron” seems particularly inapt. But maybe it’s not. Maybe it should be read, but only in conjunction with stories like CNN’s recent interactive piece titled “The Poor Kids of Silicon Valley.” Or the piece by KQED’s Rachel Myrow, published last month, which notes that 30% of Silicon Valley’s population lives “below self-sufficiency standards,” and that “the income gap is wider than ever, and wider in Silicon Valley than elsewhere in the San Francisco Bay Area or California.”
    What such (nonfiction, current) stories make clear is that we are, in fact, already hanging weights and otherwise hampering people in our society. It’s just that we don’t do it to those particularly gifted; we do it to the most vulnerable ones. The kids who have to wake up earlier because they live far from their high school and have to take two buses since their parents can’t drive them to school, and who end up sleep-deprived and less able to learn—the burden is on them. The kids who live in homeless shelters and whose brains might be impacted, long-term, by the stress of poverty—the burden is on them. The people who work as contractors with limited or no benefits—the burden is on them. The parents who have to work multiple jobs, can’t afford to live close to work, and have no time to read to their kids—the burden is on all of them.
    In a Wired article about a growing number of Silicon Valley “techie” parents who are opting to home-school their kids, Jason Tanz expresses some misgivings about the subject but adds,
    My son is in kindergarten, and I fear that his natural curiosity won’t withstand 12 years of standardized tests, underfunded and overcrowded classrooms, and constant performance anxiety. The Internet has already overturned the way we connect with friends, meet potential paramours, buy and sell products, produce and consume media, and manufacture and deliver goods. Every one of those processes has become more intimate, more personal, and more meaningful. Maybe education can work the same way.
    Set aside the question of whether those processes have indeed become more intimate and meaningful; let’s concentrate on a different question about the possibility that, with the help of the Internet, education might “work the same way”: For whom?
    Are naturally curious and creative kids being hampered by standardized tests and underfunded and overcrowded classrooms? Well then, in Silicon Valley, some of those kids will be homeschooled. The Wired article quotes a homeschooling parent who optimistically foresees a day “when you can hire a teacher by the hour, just as you would hire a TaskRabbit to assemble your Ikea furniture.” And what happens to the kids of the TaskRabbited teacher? If Harrison Bergeron happens to be one of those, he will be further hampered, and nobody will check whether the weight of the burden is proportional to anything.
    Meritocracy is a myth when social inequality becomes as vast as it has become in Silicon Valley. Teaching “Harrison Bergeron” to eighth graders in this environment is a cruel joke.
    (Photo by Ken Banks, cropped, used under a Creative Commons license.)
  •  Grant from Intel's Privacy Curriculum Initiative Will Fund New SCU Course

    Friday, Mar. 27, 2015

    Exciting news! A new course now being developed at Santa Clara University, funded by a $25,000 grant from Intel Corporation's Privacy Curriculum Initiative, will bring together engineering, business, and law students to address topics such as privacy by design, effective and accurate privacy policies, best‐practice cybersecurity procedures, and more. Ethics will be an important part of the discussion, and the curriculum will be developed by the High Tech Law Institute in conjunction with Santa Clara University’s School of Engineering, the Leavey School of Business, and the Markkula Center for Applied Ethics.

    More details here!