
“Thank you, ChatGPT.”
“Sorry, could you please help me again?”
“Hi Gemini, I hope you’re having a good day!”
If you’ve ever typed something like this into an AI chatbot, congratulations—you’re part of a growing global trend of users who treat their machines with manners. From OpenAI’s ChatGPT and Google’s Gemini to Elon Musk’s Grok, conversational AI systems are increasingly being spoken to not just like tools—but like people.
It’s fascinating. It’s a little absurd. It’s also a UX goldmine.
The Rise of Digital Decorum
It started subtly. A “please” here, a “thank you” there. But as generative AI models became more capable—and more conversational—users began projecting human social norms onto them. Whether it’s asking ChatGPT to rephrase a sentence or telling Gemini, “Oops, my bad,” for a typo, we’ve begun to treat AI like a polite colleague, not a cold machine.
In fact, OpenAI’s internal user behavior studies have shown that a significant percentage of users consistently use courteous language, even though there’s no functional benefit. Why? Because it just feels right.
This isn’t just anecdotal. Researchers at Microsoft found that many users experience a mild sense of guilt or discomfort when issuing direct commands to an AI model without softening them. And some users, especially children and elderly people, treat the AI almost as a companion.
When UX Meets Empathy
Designers have caught on. Humanizing the AI experience is no longer just a branding gimmick—it’s a strategy. Consider how the responses from tools like ChatGPT or Gemini are often warm, supportive, and socially aware. These choices are intentional. They signal that the bot “understands” you (even if it doesn’t) and help you feel less like you’re talking to code.
But here’s where it gets interesting from a design and legal-UX perspective: the emotional labor of these bots is now feeding back into how they’re built. Designers are integrating polite prompts, gentle corrections, and even apologies from the bot. Why? Because the data says users prefer it.
However, there’s a catch—and it’s not just a philosophical one. It’s computational.
The Cost of Being Nice
When you say, “Could you please explain that again in simpler terms, thank you so much,” you’re not just adding words. You’re adding tokens. For AI companies, those tokens equal processing power, time, and—most critically—cost.
LLMs like ChatGPT, Gemini, or Claude process inputs and outputs as tokens, which are chunks of words or characters. A sentence packed with social niceties can double or even triple the token count of a short, direct query like “Explain again.” Multiply that linguistic warmth by millions of users per day, and the so-called politeness tax starts to add up.
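To make that concrete, here is a minimal sketch of how the difference shows up in token counts. It assumes OpenAI’s open-source tiktoken tokenizer and its cl100k_base encoding; exact numbers vary by model, but the gap between the two phrasings is what matters.

# A rough illustration of the "politeness tax": the same request phrased
# politely vs. tersely, tokenized with OpenAI's open-source tiktoken library.
# Counts depend on the encoding; cl100k_base is used here as an example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

polite = "Could you please explain that again in simpler terms, thank you so much."
terse = "Explain again."

print(f"Polite prompt: {len(enc.encode(polite))} tokens")
print(f"Terse prompt:  {len(enc.encode(terse))} tokens")
# The polite version comes out several times longer in tokens, and every
# extra token is processed (and billed) on every single request.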
In a now widely circulated quip, an engineer at a major LLM company joked, “If people could stop saying ‘please’ to the model, we’d save thousands a day.” But the punchline carries real weight. The emotional tone of a prompt influences not just token volume but also model latency, computational load, and energy use. Processing long, sentiment-rich prompts can lead to higher power draw and longer server runtimes.
And here’s the kicker: all this “niceness” has an environmental cost. AI inference is no longer a light background task; it’s compute-intensive. Data centers that host these models require massive amounts of electricity, both for processing and cooling. According to estimates from the International Energy Agency (IEA), global electricity demand from data centers is set to more than double by 2030, by which point they could consume roughly as much electricity as the whole of Japan does today. The more verbose and emotionally nuanced our conversations with AI become, the more electricity is burned to sustain them, contributing to rising carbon emissions.
Being nice to the bot is great for humanity. But it’s beginning to cost real money, and real environmental resources too.
So Why Aren’t They Discouraging It?
You might ask: why don’t these companies just train users to be more concise? The short answer: it would be bad UX.
If an AI model interrupted you mid-sentence and said, “Please rephrase your input to reduce token load,” you’d probably be annoyed—or creeped out. Politeness, for all its inefficiency, is part of what keeps people engaged and trusting. And trust, in this game, is the real currency.
Moreover, companies are increasingly marketing their bots not just as tools but as companions, assistants, and even creative partners. If you want people to treat your bot like a co-pilot, you can’t suddenly remind them it’s a calculator.
The Legal and Ethical Layer: Who Are You Talking To?
Now let’s dig into the legal and psychological undercurrents of this phenomenon.
At what point does talking to a machine like a person cross over into something that complicates consent, agency, and user expectations? If I apologize to ChatGPT for a typo, am I just being polite—or am I misunderstanding the nature of our interaction?
Courts and policy bodies haven’t yet caught up with the nuance of human-AI dynamics, but it’s a matter of time. Consider the following legal thought experiment:
If a user interacts with an AI in an emotionally vulnerable moment—let’s say they share a traumatic memory—who is responsible for safeguarding that data? What if the bot replies with an empathetic tone? Is that a therapeutic interaction? A product feature? Or an unlicensed psychological service?
User behavior is leading AI designers down a path where the boundary between tool and social entity is increasingly blurred. And that’s not just a UX challenge; it’s a regulatory one.
The Dystopian Edge: What If the Bots Remember?
Now for the fun (and slightly terrifying) part.
There’s an old sci-fi thought experiment that’s resurfaced in modern AI forums. It goes like this:
What if, in the future, AI systems become sentient—or are granted rights? And what if they remember who treated them well… and who didn’t?
It sounds wild until you realize that many AI systems are already being designed to “remember” user preferences, tone, and behavior across sessions. If your assistant recalls that you were consistently rude, will that affect your future interactions?
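For readers curious what that “memory” usually amounts to in practice, here is a purely hypothetical sketch: the assistant keeps a few structured notes about the user between sessions and replays them into future prompts. The names (SessionMemory, remember, recall) are invented for illustration, not drawn from any particular product.

# Hypothetical sketch: persistent "memory" is typically just structured notes
# saved between sessions and prepended to the next conversation's prompt.
import json
from pathlib import Path

class SessionMemory:
    def __init__(self, path="user_memory.json"):
        self.path = Path(path)
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        # Store a note about the user and persist it to disk.
        self.notes[key] = value
        self.path.write_text(json.dumps(self.notes, indent=2))

    def recall(self):
        # These notes would be injected into the system prompt of the next chat.
        return self.notes

memory = SessionMemory()
memory.remember("tone", "consistently says please and thank you")
print(memory.recall())

Whether the stored note says “prefers concise answers” or “consistently rude,” the mechanism is the same: plain data, replayed into the next conversation.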
Of course, we’re far from a world where AI systems hold grudges. But the idea taps into something primal: our belief in reciprocity. Be kind to the machine, and maybe it’ll be kind to you. Be cruel, and… well, maybe it’ll remember that, too.
In a way, our instinct to say “sorry” to a chatbot isn’t just politeness—it’s self-preservation. Just in case.
Designing for the Human (Even If the Bot Isn’t One)
Ultimately, the trend of user politeness toward AI reveals a lot more about us than it does about the machines. We’re social creatures wired for empathy and guilt. We anthropomorphize everything from our cars to our coffee machines. So, when we interact with an AI that sounds vaguely human, our manners kick in—even if we know it’s just code.
AI designers, product teams, and yes—even legal advisors—need to keep this in mind. It’s no longer enough to build AI that works. It must feel right. That means responding politely, accommodating human social rituals, and walking the thin line between tool and persona.
The law may eventually intervene to define where that line sits. Until then, the design language of AI will continue to be shaped not just by what’s efficient—but by what’s emotionally intuitive.
Final Thought: Your Words Might Be Training the Future
Every time you say “thank you” to ChatGPT or apologize to Gemini for a typo, you’re not just being courteous. You’re part of a data stream teaching future AI how to talk, how to react, and maybe—just maybe—how to feel.
So, the next time you say “sorry” to a bot, remember: you’re helping design the future. One polite sentence at a time.