by Angela Guess
Randy Lea of Teradata recently wrote in Forbes, “The word easy is not often associated with big data, but there is at least one pretty easy answer when it comes to the question of why companies embark on big data deployments: money, whether it is making it or saving it. What may be less obvious is why, despite their best intentions, organizations are sometimes getting lost along the big data road. Instead of improving the bottom line through the power of deep insights, big data becomes more of an expense than a boost.”
Lea goes on, “Why does this happen? One reason is that big data initiatives require substantial resources to get the infrastructure in place. It is all too easy for companies to get lost in the IT weeds: the systems, the processes, and the logistics that support big data management and loading come to dominate their landscape. Resources are poured into scaling the organization’s capabilities to store big data, manage big data, and do various kinds of processing on big data. None of those things actually deliver the answers that move the business. Instead they represent an intense focus on how to best and most cost-effectively amass data rather than on how to generate value from it.”
Lea continues, “How can an organization tell if it has fallen into this trap, collecting big data without deriving its true value? One powerful indicator is to question whether the organization has built an environment that not only can perform analytics, but what can be thought of as big analytics. At the most basic level, big analytics is the ability to perform multi-genre analytics using SQL, statistical modeling, machine learning, path and pattern analysis, time series, text analysis, and graph analytics (among others), and to do so at scale.”
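To make the idea of “multi-genre analytics” a bit more concrete, here is a minimal sketch of what combining several analytic genres over one dataset can look like. It is only a toy illustration, not Teradata’s platform or Lea’s approach: the table, columns, and data are hypothetical, and a real deployment would run these genres at scale rather than on an in-memory database.

```python
# Toy sketch of "multi-genre analytics": the same (hypothetical) ticket data
# queried with SQL, fit with a simple statistical model, and mined for text terms.
import sqlite3
from collections import Counter

import numpy as np
import pandas as pd

# Set up a tiny in-memory "warehouse" with hypothetical support tickets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (month INTEGER, spend REAL, note TEXT)")
conn.executemany(
    "INSERT INTO tickets VALUES (?, ?, ?)",
    [
        (1, 120.0, "slow dashboard load"),
        (2, 135.5, "billing question"),
        (3, 150.2, "dashboard export failed"),
        (4, 171.9, "slow report and dashboard"),
    ],
)

# Genre 1: SQL -- aggregate spend per month.
monthly = pd.read_sql(
    "SELECT month, SUM(spend) AS spend FROM tickets GROUP BY month", conn
)

# Genre 2: statistical modeling -- fit a linear trend to monthly spend.
slope, intercept = np.polyfit(monthly["month"], monthly["spend"], deg=1)
print(f"Estimated spend growth per month: {slope:.2f}")

# Genre 3: text analysis -- count the most common words in ticket notes.
notes = pd.read_sql("SELECT note FROM tickets", conn)["note"]
words = Counter(w for note in notes for w in note.lower().split())
print("Top terms:", words.most_common(3))
```

The point of the sketch is the workflow, not the specific tools: value comes from moving fluidly between genres on the same data, rather than from the storage and loading machinery around it.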
Photo credit: Flickr/SanguineSeas