by Angela Guess
Ron Bodkin recently wrote in Dataconomy, “In the world of data science, great strides are being made in the area of deep learning. We’ve made so much progress that it is easy to think that instead of having to embrace data science as a discipline, we can somehow wait a little bit longer and have a Watson-like box to perform all of these tasks for us. If you think this way, you are going to miss the boat. Here are three reasons why.”
Bodkin’s first reason: “We Dole Out the Work. Deep learning, and much of data science, is limited to a narrow set of tasks… There is no reason not to be happy about the victories of deep learning. But so far, deep learning systems can do only some specialized tasks well, and only under certain circumstances. In a fascinating blog post, Zachary Chase Lipton surveyed a variety of papers that pointed out the flaws in deep learning systems. It turns out that they are often brittle and easily fooled. The key lies in understanding when a certain technique works and when it doesn’t.”
His list continues, “We Provide Context. Driverless cars don’t know where to go or why. Humans are needed to provide context, to frame the problem, to generate the hypothesis, and to decide what deep learning or data science to apply. Even today’s most advanced systems are ‘idiot savants’ that perform a single task really well, but don’t have a broader context. In any machine learning or analytics problem domain, one of the most important roles people play is to define what the goal is. It’s easy to build a system to optimize a value, only to discover you picked the wrong problem. Humans will for a long time be the ones who solely define problems, understand what is really important, and verify that a system is functioning as expected against an intuitive understanding of a problem domain.”
photo credit: Flickr