Rohit Chopra
Rohit Chopra is an associate professor in the Department of Communication at Santa Clara University and a Faculty Scholar at the Markkula Center for Applied Ethics. Views are his own.
As an avid Twitter user, I am always slightly amused when I see fellow tweeters include in their bios the line "Retweets are not endorsements." As I see it, that sentence may be a way of protecting oneself from charges of trolling or of embarrassing an employer. Yet every retweet, whether covertly endorsed or not, winds up spreading an idea or message one step further into a web, with no way to control its path or impact. The retweet, essentially a forward, compels us to ask: what are the ethical implications of forwarding messages?
This question arises even more sharply in the case of WhatsApp, especially though not exclusively in contexts like India, where WhatsApp has been used or abused to spread rumors that have resulted in minorities being attacked, Muslims lynched on allegations of smuggling beef, and travelers passing through towns killed by frenzied mobs on suspicion of being child abductors. At least with Twitter, it is possible to discern some context from the bio or timeline of the individual retweeting, and it is easier to identify and call out doctored or photoshopped tweets or fake news (although the set of ethical questions around "call-out culture" is a whole other ballgame).
With WhatsApp, the origins and provenance of messages remain unknown, as do the intentions of those who have created them. In the Indian context, the government itself is alleged to engage in A/B testing with its volunteer groups to see how messages laced with innuendo or outright falsehoods about critics and opposition leaders will play out. The problem with forwarding such messages, even when one can clearly identify them as unverified statements, lies, or rumors, is that there is significantly less control on WhatsApp than there is on Facebook or Twitter. Encryption, a defining feature of WhatsApp, combined with the closed groups and loops in which messages circulate, means that a message sent ironically, even with a framing statement, might wind up endorsing the very position it seeks to critique. WhatsApp's measure of limiting the forwarding of a message to five chats at a time, as a means of combating misuse, is more a case of treating the symptom than the underlying cause.
There is a deeper philosophical question here about exactly what kind of speech act a forward is. How does that speech act differ when carried out on Facebook, Twitter, or WhatsApp? The big tech platforms have treated all content, expression, or utterances on their sites largely as one undifferentiated mass of speech, with certain exceptions such as hate speech and obscene material. They see themselves as neutral platforms that facilitate free expression. What they have not meaningfully considered is the extent to which the logic of communication enabled by the platforms influences the nature of speech, its circulation, and, equally, its reception. For that, they urgently need to turn to media scholars, philosophers of language, and linguists.
This issue and a related discussion were summarized in a post by David DeCosse, director of the Campus Ethics Programs at the Markkula Center for Applied Ethics.