
Ask a Data Ethicist: Does Using Generative AI Devalue Professional Work?

By Katrina Ingram

A couple of weeks ago, a management consultant I met made an off-the-cuff remark that caught my attention. I was asking them about the ways in which generative AI might impact their business and they shared that clients might not want to pay $50,000 for a slide deck anymore if they disclosed that generative AI was used as part of their work. That led to this question …

Does the use of generative AI in producing a professional service devalue the work?

In an effort to get a bigger perspective on this issue, I did some informal, highly unscientific polling of my community. There is no point in providing quantitative summaries from this feedback – it’s not statistically valid or representative (small sample with lots of sampling bias!). However, I will share some of the qualitative feedback as food for thought.

I should also mention that for this column, I'm setting aside the many ethical challenges I typically write about when it comes to AI in order to focus on this question of perceived value.

My Hypothesis: Generative AI Will Devalue Professional Work

We’ve all likely heard a version of this story …

A technician shows up at a client's site to fix a problem with a mission-critical machine. The technician does what amounts to an hour of onsite work, turning a couple of knobs here and there. They fix the machine. The client gets the bill – it's $10,000.

The breakdown reads …

Turning the knobs (1 hour)  – $500

Knowing what knobs to turn – $9,500

The point of the story is that we’re paying for the expert advice that was cultivated over many years. But, when it comes to AI, in the long run, aren’t we trying to outsource and automate expertise? 

The AI agenda seems to be building machines that can access much more data in order to arrive at “better” insights while also making the process more efficient with automated decision-making. This is the underlying narrative that organizations buy into when these technologies are deployed. Right now, there is still a lot of room left for humans to “be in the loop,” have oversight, and exert their hard-won professional skills. Yet, the AI dream is that someday, no human skill will be necessary. It will all be automated. In the meantime, automation tends to abstract away more of the skilled work. Think about the move from craftsmanship to manufacturing products on an assembly line and the deskilling of the workforce. 

We’re left with the lower value knob turning parts of the work that cannot be easily outsourced to automation because they involve someone showing up. Clients might even choose to cut out the middleman and do their own knob turning, so to speak, assuming they can gain access to the AI “expertise” of knowing which knob to turn.

To be clear, I don’t believe AI actually has any expertise. Rather, I think it will create convincing or good enough substitutions that simulate professional expert work products. This could be the management consultant’s slide deck, the lawyer’s contract, or an ad agency’s marketing campaign. This is the tangible component of the expert’s recommendation. The core knowledge work deliverable is typically some kind of content. Most of this stuff isn’t all that innovative, but it’s good enough. It meets the basic needs. 

Time Is Money

I think this story about the technician is popular folklore because something about it doesn’t sit well with us in a world that is largely designed around equating time with money. We’re typically told to decouple the value of expertise from the time spent on a specific piece of work. Yet, the relationship between time and money is deeply ingrained in our culture. It’s embedded in the business model of many professional services that are still based on billable hours. Even with a fixed fee contract or value-based pricing, we do the mental calculation of time spent in order to justify the expense and feel like we’re getting good value. We account for level of expertise by attaching a larger dollar value to time. We also rely on market forces to ensure the impacts of technology are kept in check. We expect that technology and automation will drive down cost in the long run. 

Paying for time also holds true in how we arrange compensation for employees. Even those on a salary are expected to show up for a certain number of hours a week. If we use technology to get better at doing our jobs, the number of hours we work doesn’t get reduced. We typically just get more work! We negotiate for vacation and sick leave, the time we’ll still be paid for even if work is not produced. 

I didn't share my hypothesis with the folks who answered my survey, but I was pretty much in agreement with the consultant that disclosing the use of generative AI would negatively impact the perceived value of the work. I think disclosure is needed, but I also think it might come with a cost, which makes voluntary disclosure harder.

Survey Says: Disclosure, Yes; Devaluation, No

Here are the questions I asked and the general responses:

Do you expect a professional service provider to disclose their use of generative AI? Most said yes, they expected it to be disclosed, but they also shared they did not think companies would do so voluntarily.

If a service provider disclosed they ARE using generative AI, would you expect to pay less, the same, or more for the work? The vast majority said the same as usual; the use of AI made no difference.

In thinking about how much to pay for a professional service that involves the use of generative AI, please indicate what you feel is important to consider: hours of human time spent on the work, level of relevant human expertise, or amount of overall human effort? Level of relevant human expertise was overwhelmingly selected. Many people also added their own considerations ranging from environmental impacts to the overall quality of the output itself.

Does the use of generative AI in producing a professional service devalue the work? The “no’s” were in the majority, but there was a lot of uncertainty and “it depends” type answers.

I also asked respondents to explain their answers, along with the standard, open-ended "is there anything else" question.

A Real Mixed Bag 

I was surprised by some of the topline results. Yet, in digging into the qualitative explanations, I did see more of the issues from my hypothesis coming into play.

Survey respondents questioned the idea of paying experts when a machine could play that role with similar results. This fits with human expertise being selected as the most important factor in deciding what to pay for a professional service. It also challenges my thinking about time spent on the task.

A lot of respondents talked about the importance of reliable, high-quality output as the primary source of value. Many noted the outputs from generative AI are not there yet, but they envisioned this to be the future. Some explicitly said the only thing that matters is the output – it doesn't matter who (or what) delivered it. Others spoke of the importance of process and raised questions around the environment, data, job loss, and other ethical concerns. For them, it was not just about the ends without thinking about the means.

Some mentioned that market forces and competition would bring costs down, so they would wind up paying less overall. One person said they might pay more if AI was used because they felt the quality would be higher. Another noted that less time would mean lower hourly costs, while some felt the human expert might not really add any value at a certain point. There was a comment about the imperative to use AI, that one day human expertise alone would not be enough. Yet, another person felt those using generative AI were lazy or cutting corners. 

Generally, most felt disclosure around the use of generative AI was a good thing so that people could make informed choices about the output. One person likened it to knowing whether salmon is farmed or wild. Some respondents emphasized that the type of work might make a difference in whether disclosure was needed. For example, legal or other high-stakes work might require disclosure, while marketing copy might not.

If There Is No Penalty for Using It, Why Not Disclose?

My hypothesis centered on the idea that there are negative impacts of disclosing the use of generative AI. The results of my (highly unscientific) poll seem to suggest otherwise. So, if there is no penalty, no devaluation of work, no stigma in using generative AI, why not disclose its use? Disclosure is widely deemed a good idea. It's cited as a best practice in numerous generative AI guidance documents, and some experts say it's key to consumer trust.

It’s because the risk of facing stigma or penalty for using generative AI is a relevant concern. 

In one study, researchers identified negative brand perceptions when a marketing message was believed to be authored by AI. The negative impact intensified for emotional communications.

OpenAI has had the ability to watermark its AI-generated text for close to a year. Why hasn't it done so? Part of the reason relates to its own survey results:

"A company survey found that while global support for AI detection tools was strong, almost 30% of ChatGPT users said they would use the service less if watermarking was implemented." (Search Engine Journal)

Where do you stand on these issues of disclosure and devaluation?

Send Me Your Questions!

I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at hello@ethicallyalignedai.com or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well. 

This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.