The vulnerabilities of large language models (LLMs), such as their tendency to hallucinate, their inherent biases, and their susceptibility to corruption, have made businesses hesitant to deploy AI applications for business advantage. This concern is compounded by the lag in legislative regulation addressing these risks.
Despite these challenges, businesses are compelled to harness the potential of LLMs for innovation, increased productivity, and competitive edge. Otherwise, they face a different risk: falling behind their competitors. A recent study, for example, found that consultants who use AI tools perform at a higher level than those who do not.
The Need for Consistent AI Monitoring
A commonly proposed solution is algorithmic auditing. Calls for the development of auditing frameworks abound, with thousands of regulations in development around the world; in the U.S. Congress alone, more than 80 bills are currently under consideration. Faced with so much legislative activity, many organizations have adopted a wait-and-see approach to compliance.
But while such a volume of legislation might seem daunting, most of the proposed rules are reasonably similar. Typically, legislation targets AI tools used for high-stakes purposes, such as making decisions that affect people (e.g., hiring). And it does so along three dimensions: 1) requiring low levels of bias against protected classes of people, 2) suggesting some form of impact analysis, and 3) prohibiting invasions of data privacy.
Here’s the rub: these are all very basic requirements, and any responsible developer (or deployer) of AI tools should be meeting them anyway, as a matter of course.
In fact, the “audit” mentality suggests that a single-point-in-time evaluation (once per year?) is sufficient. It is not. Nor is there much consistency in how existing audits are conducted.
In this age of big data, complex AI solutions can and should be evaluated continuously to ensure they do not go off the rails. AI is a powerful tool that can be used for good, but it can easily have negative effects if not closely monitored. Consider a newborn baby – a being with unlimited potential to learn, grow, and impact the world. Yet no parent would allow this new person to live and explore without near-constant supervision and care.
Focus on the Output
The solution is straightforward: continuous output monitoring. By focusing on the quality of the outputs rather than the complexities of the models themselves, businesses can use these powerful tools safely.
As an example, consider hiring a new employee. Many AI and other algorithmic tools claim to assist in this process and improve the quality of the decision. But given the risk of bias in AI tools, should an organization rely on a vendor’s marketing claim that its tool is low in bias or even “bias free”? Clearly, the answer is no. But is it reasonable to expect a deploying organization to fully understand how the AI operates when even the engineers who built it likely don’t?
When making decisions about people, it is imperative to understand, at least in general terms, which factors are being weighted, but it is too onerous to expect a tool’s user to grasp every complexity of how an AI algorithm functions. It is more practical to focus on outputs: what information does the tool produce, and what does it mean? Does it predict new-hire success on the job? Is it low in observable bias between protected classes of individuals? These are the key outcomes that need to be monitored on an ongoing, and ideally continuous, basis, as the sketch below illustrates.
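To make “observable bias” concrete, here is a minimal sketch of how a deploying organization might check a hiring tool’s recommendations against the familiar four-fifths rule that U.S. regulators use as a rough screen for adverse impact. The data, column names, and threshold handling are illustrative assumptions, not any particular vendor’s output format.

```python
# A minimal sketch of output-level bias monitoring for a hiring tool.
# The example data, column names, and threshold are illustrative
# assumptions, not part of any specific vendor's product.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, selected_col: str) -> dict:
    """Compute each group's selection rate relative to the highest-rate group.

    A ratio below 0.80 (the "four-fifths rule" used as a rough screen
    for adverse impact) flags results that deserve a closer look.
    """
    rates = df.groupby(group_col)[selected_col].mean()
    reference = rates.max()
    return {group: rate / reference for group, rate in rates.items()}

# Example: recommendations exported from a (hypothetical) screening tool.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "selected": [1,    0,   1,   1,   1,   0,   1,   0],
})

ratios = adverse_impact_ratio(decisions, "gender", "selected")
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

The point is not the statistic itself but where it sits: the check runs on the tool’s outputs, so the deploying organization can apply it without any visibility into the vendor’s model internals.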
Continuous and impartial monitoring of AI outputs against criteria such as visibility, integrity, optimization, legislative preparedness, effectiveness, and transparency can mitigate risks and ensure safe deployment. This is different from an audit, which represents a single point-in-time evaluation. Continuous monitoring processes help businesses know whether and when an algorithmic solution goes off the rails. If we are to effectively leverage and control powerful new AI tools, we must institute rigorous and continual monitoring of them. Otherwise, we are essentially setting up our newborn baby in an apartment to live on their own!
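For readers who want a sense of what “continuous” looks like in practice, the sketch below shows a recurring check that could run on a schedule, nightly or weekly, rather than once a year. It assumes the organization already logs each tool decision somewhere queryable; the metric names, thresholds, and alerting hook are hypothetical placeholders.

```python
# A minimal sketch of a recurring output-monitoring cycle, assuming the
# organization logs each tool decision with a timestamp. The metric
# names, thresholds, and alerting hook below are illustrative only.
from datetime import datetime, timedelta, timezone

def run_monitoring_cycle(fetch_decisions, metrics, alert, window_days=7):
    """Evaluate recent tool outputs against agreed thresholds.

    fetch_decisions(since) -> list of decision records for the window
    metrics -> {name: (metric_fn, minimum_acceptable_value)}
    alert(message) -> notify the people accountable for the tool
    """
    since = datetime.now(timezone.utc) - timedelta(days=window_days)
    records = fetch_decisions(since)
    if not records:
        alert("No tool outputs logged in the monitoring window.")
        return
    for name, (metric_fn, minimum) in metrics.items():
        value = metric_fn(records)
        if value < minimum:
            alert(f"{name} fell to {value:.2f} (threshold {minimum:.2f}).")

# Scheduled regularly rather than audited once a year, e.g.:
# run_monitoring_cycle(fetch_from_warehouse,
#                      {"impact_ratio": (min_impact_ratio, 0.80),
#                       "validity":     (criterion_correlation, 0.20)},
#                      alert=post_to_oncall_channel)
```

The design choice worth noting is that the thresholds and the people alerted are decided up front, so a drifting tool triggers a human review rather than quietly continuing to make decisions.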