Is Smart Data finally reaching the tipping point for acceptance in the enterprise?
Every week brings multiple newspaper articles about AI-powered applications that rely on Smart Data. Buzzwords abound. It takes expertise to parse the differences between Cognitive Computing, Machine Learning, Deep Learning, Natural Language Processing, Text Analytics, Big Data, and other enabling technologies.
Facing this rising hype cycle, I think it’s important to pay attention to the backend plumbing that empowers autonomous reasoning. The DATAVERSITY® Smart Data 2017 Conference, which wrapped up on February 1, focused on the key elements of this plumbing and their prospects for enterprise-wide adoption.
Here’s my take on where things stand.
Making Semantic Connections
First, there’s some good news. Contextual computing for describing the semantics of things on the web (such as people, places, dates, events, recipes, and the like) is alive, well, and proceeding with all deliberate speed.
Richard Wallis from Data Liberate reports that Schema.org vocabularies are used by more than 12 million domains and appear on over 30 percent of all publicly accessible web pages. There is now a de facto standard for smart, structured data on the open web, together with policies and procedures in place for making domain-specific extensions.
Google and other web-wide search engines offer website owners a tantalizing proposition: use these standardized vocabularies to encode content elements and, auto-magically, end users get superior search experiences while website owners expand their reach with high-quality traffic.
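To make this concrete, here’s a minimal sketch of the kind of Schema.org markup a site owner might generate for an event page. The venue shown is a placeholder of my own, and in practice the JSON-LD would be embedded in the page inside a script tag for crawlers to pick up.

```python
import json

# A Schema.org description of an event, expressed as JSON-LD.
# The "@context" points crawlers at the shared Schema.org vocabulary;
# the properties (name, endDate, location) are standard Event terms.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Smart Data 2017 Conference",
    "endDate": "2017-02-01",
    "location": {
        "@type": "Place",
        "name": "Example Conference Center"  # placeholder venue for illustration
    },
    "organizer": {
        "@type": "Organization",
        "name": "DATAVERSITY"
    }
}

# Site owners typically embed this inside a <script type="application/ld+json"> tag.
print(json.dumps(event, indent=2))
```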
Google takes SEO a step further by organizing these content elements into its Knowledge Graph of general information. No longer just a search engine, Google charts the semantic connections among disparate content elements and displays results as Knowledge Panels, Info Boxes, Answer Boxes, and Rich Snippets.
By managing the transformation from strings to things, Google is rolling out next-generation experiences for finding good stuff.
Prospects for Enterprise Knowledge Graphs
Next up are efforts to build enterprise knowledge graphs that encapsulate business expertise. The financial services industry is far along this path with the development of the Financial Industry Business Ontology (FIBO). At the conference, David Newman, a senior vice president at Wells Fargo and FIBO program chair, reported on the progress of this multi-year effort, which is supported by over 200 financial and technology firms.
Assembling this vertical industry ontology is not for the faint of heart. For more than five years, design teams have been encoding the meanings of financial terms, concepts, and related items of business knowledge.
The core FIBO vocabularies are now defined and the potential results are promising. Several large firms are launching pilot projects in 2017, adding knowledge-based reasoning capabilities to develop innovative financial applications. Efforts are also underway to have Schema.org adopt the public-facing elements of FIBO as financial-services-specific extensions to this web-wide vocabulary of things.
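For readers who want to explore the ontology itself, here’s a minimal sketch using the open-source Python rdflib library to list classes from a FIBO module. The file name is a hypothetical placeholder for whichever module you’ve downloaded from the FIBO distribution.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

# Load one FIBO module from a local copy; the file name below is a
# hypothetical placeholder for whichever OWL/Turtle file you downloaded.
g = Graph()
g.parse("fibo-fnd.ttl", format="turtle")

# List some of the classes the module defines, with their human-readable labels.
query = """
    SELECT ?cls ?label
    WHERE {
        ?cls a owl:Class ;
             rdfs:label ?label .
    }
    LIMIT 10
"""
for cls, label in g.query(query, initNs={"owl": OWL, "rdfs": RDFS}):
    print(f"{label}: {cls}")
```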
What becomes possible with a knowledge graph of industry-specific categories and relationships? Here are two examples.
Consider how a mid-sized company might negotiate the conditions of a business loan with little or no involvement from in-the-flesh bankers. The smart application can weigh myriad factors about creditworthiness and prior business relationships before making judgments about terms. A bank can expand its customer base and offer ever more competitive loans without increasing its staff.
Or, as another possibility, a retail bank might automatically offer prospective customers just the products and services that fit their financial situations at the moment they open accounts. Customers can immediately establish banking relationships while the smart application sets up checking account privileges, credit card terms, and overdraft protection limits. Customers get superior service experiences, and the bank delivers them at an affordable cost.
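Neither scenario is anything more than a sketch today, but the basic pattern is simple: pull facts about the customer out of the knowledge graph and weigh them against business rules. The toy example below is my own illustration, with invented field names and thresholds rather than anything drawn from FIBO or an actual bank.

```python
# A toy illustration of knowledge-based decisioning: the customer profile
# stands in for facts that would be pulled from an enterprise knowledge graph,
# and the thresholds below are invented for the example.
def propose_loan_terms(profile: dict) -> dict:
    score = 0
    if profile.get("credit_rating", 0) >= 700:
        score += 2
    if profile.get("years_as_customer", 0) >= 5:
        score += 1
    if profile.get("prior_defaults", 0) == 0:
        score += 1

    # Map the score to indicative terms; a real application would reason over
    # far richer relationships (collateral, cash flow, industry risk, ...).
    rate = {4: 0.045, 3: 0.055, 2: 0.065}.get(score)
    if rate is None:
        return {"decision": "refer to a human banker"}
    return {"decision": "offer", "indicative_rate": rate}

print(propose_loan_terms({"credit_rating": 720,
                          "years_as_customer": 6,
                          "prior_defaults": 0}))
```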
In short, FIBO is going to have a substantial impact on financial applications – delivering delightful digital experiences while also reducing operating costs. As the Wall Street Journal reported in mid-February, “Financial institutions see artificial intelligence as a way to improve customer experience and automate routine back-office processes and compliance tasks, which could save money and free employees to focus on value-added activities.”
Within the financial services industry, AI-powered apps are going to leverage the embedded intelligence of a well-defined knowledge graph.
From Knowledge Graphs to Algorithms
Of course, a knowledge graph is only a means to an end. Semantics supports what I call the virtuous cycle of metadata enrichment: the more you know about a thing, the better able you are to model and calculate its relationships with related things. The promise of AI combines Smart Data with savvy algorithms.
Algorithms are required to weave queries and answers together; semantic modeling, in turn, improves the results of the autonomous reasoning those algorithms enable.
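A tiny, invented example shows the cycle at work: as more properties are attached to an entity, even a crude overlap measure against a related entity becomes more informative. The entities and properties below are made up for illustration.

```python
# Invented example: two descriptions of the same company, one sparse and one
# enriched, compared against a candidate related entity using Jaccard overlap.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

candidate = {"industry:banking", "region:us-west", "offers:business-loans"}

sparse   = {"industry:banking"}
enriched = {"industry:banking", "region:us-west",
            "offers:business-loans", "founded:1852"}

print("sparse metadata   ->", round(jaccard(sparse, candidate), 2))
print("enriched metadata ->", round(jaccard(enriched, candidate), 2))
```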
Many presenters at the Smart Data Conference described how applications apply techniques such as Natural Language Processing (NLP), Machine Learning, and Deep Learning, with varying degrees of success.
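As one small illustration of what happens under the hood, a named-entity recognizer can pull the “things” out of raw text before they are linked into a graph. The sketch below uses the open-source spaCy library and assumes its small English model has already been installed.

```python
import spacy

# Assumes the small English model has been installed via:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = ("Wells Fargo is piloting knowledge-graph applications in 2017, "
        "building on the FIBO vocabularies.")

doc = nlp(text)

# Print each entity the statistical model recognizes, with its predicted type.
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
```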
Nevertheless, the jury is still out on these algorithms. Reviewing the history of deep learning algorithms, Keith Foote on the DATAVERSITY channel concludes, “Currently, the processing of Big Data and the evolution of Artificial Intelligence are both dependent on Deep Learning. Deep Learning is still evolving and in need of creative ideas.” There are multiple techniques for extracting meaning from text and analyzing big data to find patterns. Needed are models to identify what to look for in the first place.
The Tipping Point for Smart Data
What is new about algorithms for Smart Data is the deployment model – the Cloud and APIs. IBM is investing heavily in its Watson Bluemix services for delivering its cognitive computing capabilities. Amazon, Facebook, Google, and Microsoft are all enticing developers to utilize APIs for their Machine Learning engines. Developers no longer need to make up-front IT infrastructure investments before they can begin building intelligent applications and capitalizing on the prospects for Smart Data.
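The mechanics are deliberately simple: an application posts data to a hosted endpoint and gets predictions back as JSON. The endpoint, header, and payload below are hypothetical placeholders rather than any vendor’s actual API.

```python
import requests

# Hypothetical hosted machine-learning endpoint and API key; substitute the
# URL, headers, and payload your provider actually documents.
API_URL = "https://ml.example.com/v1/models/sentiment:predict"
API_KEY = "YOUR_API_KEY"

payload = {"instances": [{"text": "The new mobile banking app is fantastic."}]}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()

# The shape of the response is provider-specific; here we just print it.
print(response.json())
```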
In sum, the Smart Data 2017 Conference highlighted how Smart Data remains a work in progress. Businesses and even entire industries can add intelligence to their underlying infrastructure in pursuit of the holy grail where computers not only store information but have sufficient smarts to produce autonomous results, without being explicitly programmed to do so.
The tipping point in some industries is fast approaching. But Smart Data today remains largely the domain of specialists where some assembly is still required. Needed are tools and frameworks for citizen developers, enabling non-technical, line-of-business staffers to begin deploying AI-powered enterprise applications. Stay tuned for further developments this year and the Smart Data 2018 Conference.