Markkula Center for Applied Ethics

Digital Dignity in the Age of AI-Generated Emails

Tracy Barba
Woman's hands at a desk writing in a blank notebook. Photo by Kelly Sikkema on Unsplash.


Tracy Barba is director of venture and equity ethics with the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.


When email emerged, it was hailed as a revolution—an innovation that would make communication faster, more efficient, and less constrained by time zones or geography. It was a modern miracle. Messages could be sent and received instantly, replacing the cumbersome processes of faxes, voicemails, and physical letters.

But somewhere along the way, email became something else. Instead of making communication effortless, it now clutters inboxes and consumes hours of the workday. The average office worker receives more than 100 emails daily—many of them unnecessary, redundant, or impersonal. The sheer volume has left many feeling digitally exhausted rather than connected.

The latest promise of efficiency is AI-powered email automation—tools that can draft responses, summarize conversations, and even predict the right tone for a message. But in this pursuit of optimization, something is lost: the human connection that makes communication meaningful.

The Automation of Human Connection

At first glance, AI-generated email responses seem like a time-saving breakthrough. With a click, an algorithm can generate a polite reply, suggest a meeting time, or craft a follow-up message—all without requiring the user to think too much about it.

But communication is more than just words exchanged—it is context, intention, and nuance. AI may be able to replicate polite phrasing, but it cannot understand the subtlety of a sigh in text, the weight of a carefully chosen word, or the warmth of a thoughtful note. It cannot replace the instincts that guide us to ask a colleague about their sick child or to read between the lines of a carefully worded email from a boss.

And yet, as AI becomes more adept at imitating human language, its presence in communication is growing. Google’s Smart Reply and Microsoft’s Copilot now suggest responses in everyday email exchanges. More sophisticated AI models can draft conversations based on tone and content analysis. But if machines increasingly handle these interactions, who are we communicating with? And what does it mean to trust a message when we don’t know if a human wrote it?

The Question of Trust

Trust is the foundation of all relationships, professional or personal. When we communicate, we rely not only on words but also on the subtext, the personal history, and the sense of accountability that comes from human interaction.

AI-generated responses challenge that dynamic. If a manager responds to an employee with an AI-suggested message, does it carry the same weight? If a customer receives an automated email apology, does it feel sincere? The efficiency of AI may save time, but it risks creating a world where communication is stripped of personal responsibility, where the line between human and machine is blurred.

This is not an abstract concern. Stanford researchers have found that people trust human-written communication far more than AI-generated text, even when the AI responses are accurate and well crafted. The more deeply AI integrates into email and messaging, the more we risk eroding the trust that sustains relationships in workplaces, business dealings, and personal interactions.

The Role of Venture Capital: Betting on Trust, Not Just Efficiency

The rise of generative AI presents a challenge—and an opportunity—for venture capitalists funding the next generation of communication tools. The critical question is not just how efficient these tools can be, but whether they preserve or erode trust in communication.

Companies investing in AI-driven email tools must consider:

  • How does automation preserve the authenticity of communication?
  • Can AI enhance connection rather than dilute it?
  • Will users trust AI-generated responses, or will they question the sincerity of digital interactions?

For venture investors, this is not just an ethical question—it’s a competitive one. In a world increasingly reliant on AI, trust will be a differentiator. The companies that succeed will not be those that simply automate communication, but those that build AI tools that reinforce human connection, foster transparency, and ensure users remain in control of their voice.

The Future of Digital Dignity

Email will not disappear anytime soon, and neither will AI-generated communication. But as automation becomes more pervasive, the real challenge is ensuring that technology does not undermine the very thing that makes communication valuable: its ability to connect us as humans.

If AI is the future of email, it must be developed with a clear focus on maintaining digital dignity—where efficiency does not come at the cost of trust, and automation does not replace the warmth of human presence. Otherwise, we may wake up one day to find that we are no longer talking to each other at all—just exchanging words generated by machines, in a world where no one is really listening.


Mar 18, 2025