Artificial Intelligence (AI) has made remarkable advances since its debut more than a half century ago. What has changed in the last ten years is that computing power and tools have improved enough for machines to take on more complex problems: not just narrow tactical tasks, but strategic decisions and improving Data Quality across an organization.
Dr. Martin Prescher, CTO at AI-enabled car-financing startup AutoGravity, should know. He has worked at both ends of the spectrum, having previously been at Siemens, where he applied basic Machine Learning functionality in software for industrial control applications.
Today, at AutoGravity, the stakes are higher. The company uses diverse data inputs from different sources (car manufacturers, dealers, and lenders) to lead individuals to the right car and the right financing for them, giving them visibility, as early in the process as possible, into monthly payments and the leasing and purchasing options that suit their individual profiles. Users can get up to four binding offers in just minutes.
The ability to use Artificial Intelligence to make complex decisions based on varied inputs, whether for auto buying or anything else, rests on the power of modern computers to ingest, process, combine, and find patterns in data.
What hasn’t changed quite as much is what Prescher calls the prime problem of Data Quality. “The main thing for all AI apps is that you need good data sets to train the machine and make sure computers can predict precisely what the person wants to do,” he says.
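To make that point concrete, here is a minimal sketch, using scikit-learn on synthetic data (not AutoGravity's actual pipeline or data), of how corrupted training labels erode a model's ability to predict accurately:

```python
# Illustrative sketch only (synthetic data, not AutoGravity's code): how
# corrupted training labels degrade a model's predictions downstream.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a labeled training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):
    # Simulate bad data: mislabel a fraction of one class in training only.
    y_bad = y_tr.copy()
    flip = (y_bad == 1) & (rng.random(len(y_bad)) < noise)
    y_bad[flip] = 0

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"training-label noise {noise:.0%} -> test accuracy {acc:.3f}")
```

Printing the test accuracy at each corruption level lets you see the degradation directly: the model is evaluated on clean data, so the only thing hurting it is the quality of what it was trained on.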
Data Quality: An AI Issue All Around
Others have made the same case, including Mark Woolen, chief product officer at predictive marketing software vendor Radius. He wrote recently that the underlying data is “a key foundational piece that impacts AI at scale. Therefore, the proverbial data statement, ‘garbage in, garbage out,’ and the implications of bad Data Quality, is arguably the most understated AI trend.” Woolen in turn points to Scott Brinker, who writes on Chiefmartec.com, his chief marketing technologist blog, that “the specific data you feed these algorithms makes all the difference. The strategic battles with AI will be won by the scale, quality, relevance, and uniqueness of your data.” Data Quality, Brinker notes, will become more important, along with the services and software to support that mission.
In Accenture’s Technology Vision for Insurance 2017, the issue emerges again. According to the report, three-quarters of insurance executives believe that AI will either significantly alter or completely transform the overall insurance industry in the next three years. The report even predicts that within five years, more than half of insurance company customers will select services based on a company’s AI rather than its traditional brand.
However, insurers also say they face challenges integrating AI into their existing technology, citing issues such as Data Quality. “While insurers are increasingly aware that AI is central to their success in the digital economy, their success is by no means assured,” said John Cusano, senior managing director and global head of the Accenture Insurance practice, in a statement. He also noted that as the technology evolves, insurers will need to address Data Quality, among other issues.
In another example, this time from the banking sector, Arunkumar Krishnakumar, a founding partner at AU Capital Partners, writes that banks have to resolve internal Data Quality issues to make AI work for operational intelligence (as well as external data ownership issues to make AI work for customer-facing uses). “Most successful AI platforms have access to high quality data, and in huge volumes. They also get to see regular transactions across different streams of data that they can then learn from,” he says. In these respects, banks are at a disadvantage compared with FinTech firms, which have had AI built into their DNA from day one.
“Banks do have data, but the quality, integrity and accuracy of data stored digitally is generally appalling,” he writes. “A few years ago, BCBS 239 emerged as a regulation focused on fixing data in banks, however compliance to that is mostly being treated as a check box exercise costing the banks millions. The point being, banks are years away from processes and infrastructure that provides quality data. If AI is introduced into this landscape, it would be more detrimental to existing processes, as there would be more hands involved in confirming results suggested by AI, and the costs of using AI would outweigh its benefits.”
In a 2016 report, The Economist Intelligence Unit put some metrics behind the Artificial Intelligence Data Quality conundrum. Twenty-one percent of executives queried about the obstacles to implementing AI in business cited Data Quality concerns. While that ranked behind the cost of the technology (an issue for 27% of respondents), the report points out that costs can be managed using third-party cloud platforms and open-source AI platforms for development. “Issues with data availability and quality will take longer for some organisations” to resolve, it states.
Resolving Data Quality Issues
In the case of AutoGravity, in-house data teams work to clean data, making sure it is always up to date and that the right data feeds are being used, Prescher says. “That is very crucial,” he says. While there are tools and providers that offer such services, companies dealing with consumer data, like AutoGravity, should be careful, he recommends.
“We never want to give out our information and that of our consumers,” Prescher says. “That data can’t leave our servers.” As he sees it, forwarding such data to a third-party provider means handing over “a key piece of your business and value in the ecosystem and we didn’t want to do that.” Having determined to handle Data Quality operations in-house, the company is developing a proprietary framework to improve the process.
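A sketch of what such an in-house cleaning step might look like appears below; the field names and rules are invented for illustration and are not AutoGravity's proprietary framework. The idea is simply to normalize, validate, and deduplicate records from heterogeneous feeds before they reach any model:

```python
# Hypothetical cleaning step for a heterogeneous offer feed (field names
# invented for illustration; not AutoGravity's framework).
import pandas as pd

def clean_offers(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()

    # Normalize inconsistent formatting across source feeds.
    df["dealer_id"] = df["dealer_id"].astype(str).str.strip().str.upper()
    df["monthly_payment"] = pd.to_numeric(df["monthly_payment"], errors="coerce")

    # Drop records that fail basic validity checks rather than let
    # bad values reach downstream models.
    df = df.dropna(subset=["dealer_id", "monthly_payment"])
    df = df[df["monthly_payment"] > 0]

    # Deduplicate, keeping the most recent record per dealer/vehicle pair.
    df = (df.sort_values("updated_at")
            .drop_duplicates(subset=["dealer_id", "vin"], keep="last"))
    return df

feed = pd.DataFrame({
    "dealer_id": [" d1 ", "D1", "d2"],
    "vin": ["V100", "V100", "V200"],
    "monthly_payment": ["399", "389", "-1"],
    "updated_at": pd.to_datetime(["2017-05-01", "2017-05-02", "2017-05-01"]),
})
print(clean_offers(feed))
```

Keeping a step like this inside the company's own pipeline, rather than outsourcing it, is consistent with Prescher's point that the data never leaves AutoGravity's servers.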
Data security factored into that decision as well, Prescher says.
“To have Artificial Intelligence run in a very secure framework is something that we started with, with the whole company and technology centered around a data security framework which enables us to do Machine Learning on an absolutely secure data set,” he says. “We started with data security and developed our machine learning framework around that.”
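Prescher does not describe the framework's internals, but one common pattern consistent with doing Machine Learning on a secure data set is to pseudonymize identifiers with a keyed hash before records ever reach the training pipeline. The sketch below is purely illustrative, with invented field names, and is not AutoGravity's design:

```python
# Hypothetical sketch: keyed hashing of identifiers before training, so the
# ML pipeline sees a stable join key but never raw PII. Not AutoGravity's code.
import hashlib
import hmac

# Assumption: the key comes from a managed secret store, not source control.
SECRET_KEY = b"example-key-loaded-from-a-secret-store"

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: joinable across feeds, not reversible."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "buyer@example.com", "credit_band": "B", "term_months": 60}
record["email"] = pseudonymize(record["email"])
print(record)  # identifier replaced by an opaque, consistent token
```

Because the hash is deterministic, records from different feeds can still be joined on the token, while the raw identifier stays out of the training set.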