
The Best and Worst of Times for Generative AI: The Missing Value Prop

By Gagan Tandon and Jill Stover Heinze

When generative AI (GenAI) broke onto the scene in late 2022, it unleashed a flood of market opportunity and innovation. In recent months, we’ve seen substantial progress in both capability and affordability by many measures: AI now beats human performance on many benchmarks of simple tasks; OpenAI’s high-performing GPT-4o mini model is 60% cheaper than its predecessor; Meta’s open-source Llama 3.1 release gave developers worldwide a very capable large language model; and Google’s Gemini 1.5 Pro can process significantly more information, with a context window of up to 1 million tokens (about 1,500 pages of text).
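That 1-million-token figure maps to roughly 1,500 pages only under some back-of-envelope conversion assumptions. The short Python sketch below makes those assumptions explicit; the words-per-token and words-per-page constants are rough rules of thumb of our own choosing, not figures published by Google or OpenAI.

```python
# Back-of-envelope check of the "1 million tokens ~ 1,500 pages" figure.
# Both conversion factors are rough rules of thumb, not vendor specs:
# roughly 0.75 English words per token and roughly 500 words per page.

WORDS_PER_TOKEN = 0.75   # common rule of thumb for English text
WORDS_PER_PAGE = 500     # assumed dense, single-spaced printed page

def tokens_to_pages(tokens: int) -> float:
    """Convert a token count into an approximate page count."""
    words = tokens * WORDS_PER_TOKEN
    return words / WORDS_PER_PAGE

if __name__ == "__main__":
    context_window = 1_000_000  # Gemini 1.5 Pro's advertised context size
    print(f"{context_window:,} tokens is about "
          f"{tokens_to_pages(context_window):,.0f} pages")
    # prints: 1,000,000 tokens is about 1,500 pages
```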

And yet, the frenzy that greeted GenAI’s rapid developments is tempered by slower-moving regulatory constraints as well as financially limited or risk-averse businesses. According to Gartner, GenAI is creeping toward the deflating but predictable “Trough of Disillusionment,” signaling a more measured, grounded approach to AI innovation, one that favors stricter testing and evaluation cycles to surface only the most attractive investments.

You don’t have to look far to see executives reconciling their prior lofty and sometimes untethered optimism with practical realities. According to a PwC survey, a sobering 46% of CEOs worldwide agree that GenAI will increase their company’s legal liabilities and reputational risks, 64% agree it will do the same for their cybersecurity risks, and 86% foresee it increasing competitiveness in their industry.

An Unsettled Legal and Regulatory Environment

Further complicating business leaders’ relationship with GenAI is the ever-evolving crop of legal and regulatory developments that pepper the global landscape (and specifically the U.S. landscape at the state and federal levels). Several novel legal challenges made their mark in the first half of this year. A first-of-its-kind lawsuit pits major U.S. record labels against GenAI song-generation services Suno and Udio, alleging copyright infringement resulting from training AI models on artists’ protected works. The case adds to a litany of similar claims across the media industries, calling into question the legal soundness of training on copyrighted works, an approach fundamental to large language models.

Also making headlines is a California bill, recently passed by the state legislature, that takes aim at companies spending $100 million or more to train AI models or $10 million or more to modify them, requiring that they “test the model for its ability to enable cybersecurity or infrastructure attacks or the development of chemical, biological, radioactive, or nuclear weaponry.” The bill, pending Governor Newsom’s signature, is drawing criticism from across the tech community, surfacing the underlying tensions in balancing innovation with safety.

On a more settled front, the European Union’s AI Act is now officially in force as of August 1, meaning that companies that operate in or impact the EU must now comply. Various deadlines are set depending on the nature of the AI services provided and their risk levels. As the first major, comprehensive AI law, the Act is setting the tone for how companies and governments worldwide assess and triage risks and mitigations.

A Back-to-Basics Way Forward

How can business leaders thread the needle between the game-changing opportunities GenAI enables and the disruptive threats that come with it? One thing they’re not doing is backing off the technology, despite pockets of uncertainty.

Deloitte reports that two-thirds (67%) of organizations are increasing their GenAI investments. However, leaders are rethinking how they extract value from their investments and experiments. Successful proofs-of-concept are making way for sound business cases and compelling, differentiated value propositions that promise market advantages and real customer benefit.

Balancing Act for GenAI Success

As we witness aspiration confronting reality, we see how important it is to achieve a strategic balance of the three classic pillars of sound use cases: 

  1. User desirability
  2. Technical feasibility
  3. ROI

While most companies dove into GenAI with proofs-of-concept to bear out technical feasibility, it’s now imperative to give equal weight to addressing user and business needs in any product strategy. In fact, the most commonly cited barrier to adopting GenAI tools among Americans surveyed by Ipsos, given by 33% of respondents, is that they simply don’t see a need for them.

When businesses over-index on some pillars at the expense of others, the foundation cracks. Take, for example, how two major U.S. brands approached GenAI innovations within their service portfolios:

Back in February, quick-service restaurant Wendy’s sparked considerable customer backlash, including threats of boycotts, when it announced plans to test GenAI-powered “dynamic pricing,” initially reported by some media outlets as “surge pricing.” Enabled by digital menu boards, the pricing plan would allow Wendy’s to adjust prices based on customer traffic, which many customers took to mean they would be forced to pay more at popular meal times.
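Wendy’s never published how its menu-board pricing logic would work, so purely for illustration, here is a minimal, hypothetical Python sketch of what traffic-based price adjustment could look like. Every threshold, multiplier, and function name is invented; the point is only that the same mechanism can be tuned to discount off-peak rather than surcharge at the rush, which is exactly the kind of design choice that shapes customer perception.

```python
# Hypothetical illustration of traffic-based ("dynamic") menu pricing.
# None of this reflects Wendy's actual system; the thresholds, multipliers,
# and price cap are invented purely to show the general idea.

def dynamic_price(base_price: float, orders_per_hour: int,
                  max_increase: float = 0.0) -> float:
    """Scale a base price by current demand, bounded by max_increase.

    A max_increase of 0.0 means prices can only go down or stay flat,
    which is how an operator might offer off-peak discounts without
    ever charging more at busy times.
    """
    if orders_per_hour >= 120:      # peak demand
        multiplier = 1.0 + max_increase
    elif orders_per_hour <= 40:     # slow period: discount to drive traffic
        multiplier = 0.90
    else:                           # normal demand
        multiplier = 1.0
    return round(base_price * multiplier, 2)

# Example: a $5.99 combo during a slow afternoon vs. the lunch rush.
print(dynamic_price(5.99, orders_per_hour=30))    # 5.39 (off-peak discount)
print(dynamic_price(5.99, orders_per_hour=150))   # 5.99 (capped: no surge)
```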

While Wendy’s certainly faltered from a public relations standpoint, it also failed to properly consider the impact its technology could have on customers’ anxious state of mind at a time of high inflation and widespread suspicion of price gouging. Dynamic pricing may be technically feasible and may even lift revenue, but it could also alienate customers, erode their trust, and cause lasting brand damage.

In contrast, major retailer Walmart demonstrated a relatively savvier use of GenAI. Retail Dive reports that Walmart is equipping its associates with AI tools that bring multiple data sources together to better expose product inventory statuses to shoppers: “Within moments, [associates] can find out if the supplier is out of inventory, if the product is on its way to a store on a truck, or even the approximate time it’ll get moved from the truck to the backroom and then a shelf on the sales floor.” Using its unique internal data to deliver detailed status information via a human employee enhances business transparency and customer service standards, striking a more harmonious balance between technical, business, and user interests. 
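Retail Dive describes the capability rather than the implementation, but the underlying pattern, merging several internal feeds into one plain-language status an associate can relay, is straightforward to sketch. The Python below is a hypothetical illustration only; the data sources, field names, and priority order are all assumptions on our part, not Walmart’s published design.

```python
# Hypothetical sketch of merging internal data feeds into a single
# associate-facing inventory status. Source names, fields, and the
# priority order are assumptions; Walmart has not published its design.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemStatus:
    sku: str
    supplier_in_stock: bool                   # from a supplier/procurement feed
    inbound_truck_eta_hours: Optional[float]  # from logistics tracking
    backroom_to_shelf_hours: Optional[float]  # from in-store workflow data

def summarize(status: ItemStatus) -> str:
    """Collapse several feeds into one sentence an associate can relay."""
    if status.backroom_to_shelf_hours is not None:
        return (f"In the backroom; expected on the shelf in about "
                f"{status.backroom_to_shelf_hours:.0f} hours.")
    if status.inbound_truck_eta_hours is not None:
        return (f"On a truck to this store; arriving in about "
                f"{status.inbound_truck_eta_hours:.0f} hours.")
    if not status.supplier_in_stock:
        return "The supplier is currently out of inventory."
    return "Available from the supplier; not yet scheduled for this store."

print(summarize(ItemStatus("SKU-123", True, 6.0, None)))
# prints: On a truck to this store; arriving in about 6 hours.
```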

What Comes Next?

We’re only in the very early days of understanding the power and appropriate, responsible, and profitable use of GenAI. However, if widespread interest and monetary investments are helpful indicators, we will see continued growth and exploration in this space.

Peering into the far future, we see more reason to be committed to achieving balance. For example, some cite Artificial General Intelligence (AGI), the ability of AI to meet or exceed human performance on most tasks, as a pinnacle of AI development. Our current forays into applying GenAI suggest that success will hinge on whether we center the technology on demonstrable needs that recognize people’s hopes, expectations, and concrete realities. 

Whatever the GenAI future holds, it will certainly be full of learning and discovery.