by Angela Guess
According to a new press release, “Pepperdata, the DevOps for Big Data company, today announced Pepperdata Code Analyzer for Apache Spark, which provides Spark application developers with the ability to identify performance issues and connect them to particular blocks of code within an application. Code Analyzer is a new product that follows on the heels of Pepperdata Application Profiler, which provides Hadoop and Spark developers with actionable recommendations for improving job performance. ‘One of the most significant challenges in Big Data is achieving optimal performance,’ said Ash Munshi, CEO of Pepperdata. ‘Code Analyzer fills a huge void in application development for Spark, helping developers optimize Spark applications for large-scale production. Developers are now empowered to improve the performance of Spark applications with new information and insight around the code, build, test and release phases.’”
The release goes on, “The performance metrics from Spark Web UI have historically been a challenge for developers to understand and contextualize, especially without having granular, time-series data on hand. Developers cannot easily drill down into and understand the problematic sections of an application that require optimization. Further, as Spark clusters typically run many applications in parallel, the Spark Web UI doesn’t inform developers how applications are impacted by other applications running on the cluster. Pepperdata Code Analyzer allows Spark application developers to precisely measure how cluster resources (including CPU, memory, and network and disk I/O) are consumed by any particular block of application code. Code Analyzer delivers additional insight by combining application information from the Spark engine with granular time-series data for all applications running on a cluster. Dev teams are empowered with the ability to pinpoint the specific segment of their application code responsible for performance issues.”
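For readers unfamiliar with the problem the release is describing, the sketch below shows the kind of block-to-metrics attribution involved, using only Spark's public APIs rather than anything from Pepperdata (whose product is proprietary and not shown here). A `SparkListener` logs per-stage resource metrics, the same figures the Spark Web UI surfaces, and `setJobGroup` tags a block of application code so those stages can be traced back to it. The input path, job group name, and description strings are hypothetical.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}
import org.apache.spark.sql.SparkSession

object BlockMetricsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("block-metrics-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Log coarse per-stage metrics as each stage finishes. These are the
    // same numbers the Spark Web UI shows, captured here as ordered records.
    sc.addSparkListener(new SparkListener {
      override def onStageCompleted(event: SparkListenerStageCompleted): Unit = {
        val info = event.stageInfo
        val m = info.taskMetrics
        println(s"stage=${info.stageId} name='${info.name}' " +
          s"cpuTimeNs=${m.executorCpuTime} runTimeMs=${m.executorRunTime} " +
          s"shuffleReadBytes=${m.shuffleReadMetrics.totalBytesRead}")
      }
    })

    // Tag this block of code so its stages are attributable to it; the group
    // id and description appear alongside the stages in the Spark Web UI.
    sc.setJobGroup("aggregation-block", "per-key aggregation (hypothetical example)")
    val counts = sc.textFile("data/input.txt") // hypothetical input path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)
    println(s"distinct words: ${counts.count()}")
    sc.clearJobGroup()

    spark.stop()
  }
}
```

Even this simple tagging makes the Web UI's stage list legible block by block. What the release claims beyond it is correlating those per-block metrics with granular time-series data for every other application sharing the cluster, which Spark's built-in tooling does not do.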
Read more at DevOps.com.
Photo credit: Pepperdata