Most of us understand how to behave in different social situations and interact with others in ways that are deemed acceptable. Yet, when we bring AI tools into the mix, we create the capacity for friction because the social norms for using these tools are not yet clear. Which raises the question:
What is the appropriate etiquette when using AI tools?
Etiquette and Ethics
Etiquette relates to polite behavior and is governed by social norms. For example, in Western countries, we likely learned not to put our elbows on the dinner table or chew with our mouths open. There are also cultural aspects to etiquette. In many Asian cultures, for example, it is polite to refill someone else’s teacup before your own. It might be perfectly fine to slurp your noodles in some cultures, but not others. Etiquette is also contextual. What is considered acceptable behavior at a sporting event – such as outbursts of cheering – would not be appropriate at a funeral. It can even be something so small we don’t really think about it, like which way to face in an elevator. We absorb these contextual clues about how to behave appropriately through our human experiences. Most of us adhere to a “when in Rome, do as the Romans do” guideline in order to take context and culture into consideration.
Ethics relates to our principles and connects to etiquette in practice. If we value treating others with respect, then we want to ensure our behavior is respectful given the culture and context of a situation. For example, during a formal business meeting, that might mean following the agenda, deferring to the leadership of the meeting chair, and speaking when called upon. These behaviors are part of the etiquette of a formal business meeting that aligns with the ethical principle of being respectful toward others in that situation.
Technology Impacts Our Social Relationships
With new technologies come new questions. Is it OK to use your phone at the dinner table? To write emails or text others during an in-person meeting? What about using an AI assistant at an online meeting?
If you’ve attended an online meeting in the past year, you’ve likely encountered “attendees” who are not human. These AI bots typically provide meeting transcriptions and summaries. They also come with risks to privacy and security. This is just one example of the ways in which we are encountering AI.
Companies are marketing AI wearables like the Ray-Ban Meta Wayfarers or the Humane AI Pin. These devices might benefit their users but raise many questions about surveillance, privacy, and respect for others. This is just the tip of the iceberg as conversations about agentic AI are quickly entering the techno-culture zeitgeist.
So what do we do? We’re living in a moment when things seem unclear. After all, social norms tend to evolve over time. However, here are a few ideas:
Ask for consent and disclose use
If you want to use an AI bot at a meeting, it’s important to get consent from everyone and to be transparent. This aligns with our current expectations around meetings.
Imagine you book a meeting with someone. They decide to invite a third party, but they don’t tell you about it. Instead, that third party is “lurking” in the background, taking notes about everything in the meeting. This is, of course, pretty hard to do for an in-person meeting. It would be obvious who is or is not in the room. In an online meeting, it’s certainly more possible to have someone sitting off to the side who is not visible to the camera.
However, social norms curtail this kind of behavior (for most people). We know it’s not acceptable. Instead, if we want to invite another party to the meeting, we ask if that is OK with the other meeting attendees. Yet, when it comes to AI assistants, we don’t seem to have the same normative expectations, but we should!
Make it OK to say no to AI tools
Not everyone will be on board with the use of AI tools at meetings. These tools are collecting and sharing data with a third party. It’s a form of surveillance. There are also issues of confidentiality to consider, especially if the meeting deals with sensitive or confidential information. The tone of the meeting itself might change if everything is recorded. This can have a chilling effect on people – making them less likely to fully participate.
Put human relationships first. If your organization has policies in place, they make clear what is acceptable use and what isn’t. These policies might also specify which tools are acceptable to use. In the absence of such policies (and in your personal life), err on the side of respecting others.
Don’t use AI tools to become the worst version of yourself
Last year, Apple released a series of what were meant to be light-hearted ads for its Apple Intelligence (AI) tool. The product promised to help you with various forms of deception, including “appearing” prepared for a meeting, indulging your inner jerk by crafting an email that AI would rewrite as “friendly,” and dashing off a last-minute “heartfelt” gift for your spouse whose birthday you forgot. In other words, this AI tool encourages lazy, unprofessional, self-indulgent, and uncaring behavior. Is that a good thing for anyone?
What kind of tools might help us to truly be kinder or more caring? To promote self-regulation and thoughtful behavior? To apply critical thinking skills or advance civil discourse? Those are the kinds of tools we need.
Send Me Your Questions!
I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at hello@ethicallyalignedai.com or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well.
This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.