
Easy access to tremendous amounts of computing power has made data the new basis of competition: businesses must learn to extract value from data and build modern applications that serve customers with personalized services, in real time, and at scale. A de facto architecture for building and operating these fast data applications is emerging, often referred to as the "SMACK" stack (Spark, Mesos, Akka, Cassandra, and Kafka).

Download this free book excerpt from O'Reilly to learn how to use Apache Spark to process data quickly and at scale. The excerpt includes three chapters: an introduction to Spark, how Spark works, and how to tune Spark settings for optimal performance.

What You Will Learn:

  • Common use cases for Spark
  • Why performance matters when analyzing data at very large scale
  • The overall design of Spark and its place in the big data ecosystem
  • How to tune, debug, and optimize Apache Spark to improve performance