Spark in Action, Second Edition
Jean Georges Perrin
  • MEAP began March 2018
  • Publication in Fall 2019 (estimated)
  • ISBN 9781617295522
  • 375 pages (estimated)
  • printed in black & white

I would say that this is the best book on Spark I've read.

Kelvin Johnson
The Spark distributed data processing platform provides an easy-to-implement tool for ingesting, streaming, and processing data from any source. In Spark in Action, Second Edition, you’ll learn to take advantage of Spark’s core features and incredible processing speed, with applications including real-time computation, delayed evaluation, and machine learning. Spark skills are a hot commodity in enterprises worldwide, and with Spark’s powerful and flexible Java APIs, you can reap all the benefits without first learning Scala or Hadoop.
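To give a sense of that Java-first approach, here is a minimal sketch, not taken from the book's listings, of a Spark application that ingests a CSV file into a dataframe through the Java API; the class name and file path are placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CsvIngestionSketch {
  public static void main(String[] args) {
    // Local session for experimenting; a real deployment would target a cluster master.
    SparkSession spark = SparkSession.builder()
        .appName("CSV ingestion sketch")
        .master("local[*]")
        .getOrCreate();

    // Ingest a CSV file into a dataframe (a Dataset<Row> in Java).
    // "data/books.csv" is a hypothetical path used only for illustration.
    Dataset<Row> df = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("data/books.csv");

    df.printSchema();
    df.show(5);

    spark.stop();
  }
}
```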

Unlike many Spark books written for data scientists, Spark in Action, Second Edition is designed for data engineers and software engineers who want to master data processing using Spark without having to learn a complex new ecosystem of languages and tools. You’ll instead learn to apply your existing Java and SQL skills to take on practical, real-world challenges.

Table of Contents

Part 1: Some Theory with Exciting Examples

1 So, what is Spark, anyway?

1.1 The big picture: what Spark is and what it does

1.1.1 What is Spark?

1.1.2 How can you use Spark?

1.1.3 Spark in a data processing scenario

1.1.4 The four pillars of manna

1.1.5 Spark in a data science scenario

1.2 What can I do with Spark?

1.2.1 Spark predicts restaurant quality at NC Eatery

1.2.2 Spark allows fast data transfer for Lumeris

1.2.3 Spark analyzes equipment logs for CERN

1.2.4 Other use cases

1.3 Why you will love the dataframe

1.3.1 The dataframe from a Java perspective

1.3.2 The dataframe from an RDBMS perspective

1.3.3 A graphical representation of the dataframe

1.4 Your first example

1.4.2 Downloading the code

1.4.3 Running your first application

1.4.4 Your first code

1.5 What will you learn in this book?

1.6 Summary

2 Architecture and flow

2.1 Building your mental model

2.2 Java code to build your mental model

2.3 Walking through your application

2.3.1 Connecting to a master

2.3.2 Loading or ingesting the CSV file

2.3.3 Transforming your data

2.3.4 Saving the work done in our dataframe to a database

2.4 Summary

3 The majestic role of the dataframe

3.1 The essential role of the dataframe in Spark

3.1.1 Organization of a dataframe

3.1.2 Immutability is not a swear word

3.2 Using dataframes through examples

3.2.1 A dataframe after a simple CSV ingestion

3.2.2 Data is stored in partitions

3.2.3 Digging in the schema

3.2.4 A dataframe after a JSON ingestion

3.2.5 Combining two dataframes

3.3 The dataframe is a Dataset<Row>

3.3.1 Reuse your POJOs

3.3.2 Create a dataset of strings

3.3.3 Converting back and forth

3.4 Dataframe’s ancestor: the RDD

3.5 Summary

4 Fundamentally lazy

4.1 A real-life example of efficient laziness

4.2 A Spark example of efficient laziness

4.2.1 Looking at the results of transformations and actions

4.2.2 The transformation process, step by step

4.2.3 The code behind the transformation/action process

4.2.4 The mystery behind the creation of 7 million datapoints in 182ms

4.2.5 The mystery behind the timing of actions

4.3 Comparing to RDBMS and traditional applications

4.3.1 Working with the teen birth rates dataset

4.3.2 Analyzing the differences between a traditional app and a Spark app

4.4 Spark is amazing for data-focused applications

4.5 Catalyst is your app catalyzer

4.6 Summary

5 Building a simple app for deployment

5.1 An ingestion-less example

5.1.1 Calculating π

5.1.2 The code to approximate π

5.1.3 What are lambda functions in Java?

5.1.4 Approximating Π using lambda functions

5.2 Interacting with Spark

5.2.1 Local mode

5.2.2 Cluster mode

5.2.3 Interactive mode in Scala and Python

5.3 Summary

6 Deploying your simple app

6.1 Beyond the example: the role of the components

6.1.1 Quick overview of the components and their interaction

6.1.2 Some of the fine print of the Spark architecture

6.1.3 Going further

6.2 Building a cluster

6.2.1 Building a cluster that works for you

6.2.2 Setting up the environment

6.3 Building your application to run on the cluster

6.3.1 Building your application’s uber JAR

6.3.2 Building your application using Git and Maven

6.4 Running your application on the cluster

6.4.1 Submitting the uber JAR

6.4.2 Running the application

6.4.3 Analyzing the Spark user interface

6.5 Summary

Part 2: Ingestion

7 Ingestion from files

7.1 Common behaviors of parsers

7.2 Complex ingestion from CSV

7.2.1 Desired output

7.2.2 Code

7.3 Ingesting a CSV with a known schema

7.3.1 Desired output

7.3.2 Code

7.4 Ingesting a JSON file

7.4.1 Desired output

7.4.2 Code

7.5 Ingesting a multiline JSON file

7.5.1 Desired output

7.5.2 Code

7.6 Ingesting an XML file

7.6.1 Desired output

7.6.2 Code

7.7 Ingesting a text file

7.7.1 Desired output

7.7.2 Code

7.8 File formats for Big Data

7.8.1 The problem with traditional file formats

7.8.2 Avro is a schema-based serialization format

7.8.3 ORC is a columnar storage format

7.8.4 Parquet is also a columnar storage format

7.8.5 Conclusion

7.9 Ingesting Avro, ORC, and Parquet files

7.9.1 Avro

7.9.2 ORC

7.9.3 Parquet

7.9.4 Summary table

7.10 Summary

8 Ingestion from databases

8.1 Ingestion from relational databases

8.1.1 Database connection checklist

8.1.2 Understanding the data used in the examples

8.1.3 Desired output

8.1.4 Code

8.1.5 Alternative code

8.2 The role of the dialect

8.2.1 What is a dialect anyway?

8.2.2 JDBC dialects provided with Spark

8.2.3 Building your own dialect

8.3 Advanced queries and ingestion

8.3.1 Filtering using a where clause

8.3.2 Joining data in the database

8.3.3 Ingestion and partitioning

8.3.4 Summary of advanced features

8.4 Ingestion from Elasticsearch

8.4.1 Data flow

8.4.2 The New York restaurants dataset digested by Spark

8.4.3 Code to ingest the restaurant dataset from Elasticsearch

8.5 Summary

9 Advanced ingestion: finding data sources and building your own

9.1 What is a data source?

9.2 Benefits of a direct connection to a data source

9.2.1 Temporary files

9.2.2 Data quality scripts

9.2.3 Get data on demand

9.3 Finding data sources at Spark Packages

9.4 Build your own data source

9.4.1 Scope of the example project

9.4.2 Your data source API and options

9.5 Behind the scenes: building the data source itself

9.6 The register file and the advertiser class

9.7 The relation between the data and schema

9.7.1 The data source builds the relation

9.7.2 Inside the relation

9.8 Building the schema from a JavaBean

9.9 Building the dataframe is magic with the utilities

9.10 The other classes

9.11 Summary

10 Ingestion through structured streaming

Part 3: Transformation

11 Working with Spark SQL

12 Working with data

13 Aggregate your data

14 Avoid mistakes: cache and checkpoint your data

15 Interfacing with Python

16 User Defined Functions (UDF)

Part 4: Going Further

17 Advanced topics

18 A primer to ML with no math

19 Exporting data

20 Exploring the deployment constraints

Appendixes

Appendix A: Installing Eclipse

A.1 Downloading Eclipse

A.2 Running Eclipse for the first time

Appendix B: Installing Maven

B.1 Installation on Windows

B.2 Installation on macOS

B.3 Installation on Ubuntu

B.4 Installation on RHEL / Amazon EMR

B.5 Manual installation on Linux and other Unix-like OS

Appendix C: Installing Git

C.1 Installing Git on Windows

C.2 Installing Git on macOS

C.3 Installing Git on Ubuntu

C.4 Installing Git on RHEL / AWS EMR

C.5 Other tools to consider

Appendix D: Downloading the code and getting started with Eclipse

D.1 Downloading the source code from the command line

D.2 Getting started in Eclipse

Appendix E: Installing Elasticsearch and sample data

E.1 Software installation

E.1.1 All platforms

E.1.2 macOS with Homebrew

E.2 Installing the NYC restaurant dataset

E.3 Elasticsearch vocabulary

E.4 Useful commands

E.4.1 Get the server status

E.4.2 Display the structure

E.4.3 Count documents

Appendix F: Maven quick cheat sheet

F.1 Source of packages

F.2 Useful commands

F.3 Useful configuration

F.3.1 Built-in properties

F.3.2 Building an uber JAR

F.3.3 Including the source code

F.3.4 Executing from Maven

Appendix G: Getting help with relational databases

G.1 Informix (IBM)

G.1.1 Installing Informix on macOS

G.1.2 Installing Informix on Windows

G.2 MariaDB

G.2.1 Installing MariaDB on macOS

G.2.2 Installing MariaDB on Windows

G.3 MySQL (Oracle)

G.3.1 Installing MySQL on macOS

G.3.2 Installing MySQL on Windows

G.3.3 Loading the Sakila database

G.4 PostgreSQL

G.4.1 Installing PostgreSQL on macOS and Windows

G.4.2 Installing PostgreSQL on Linux

G.4.3 GUI clients for PostgreSQL

Appendix H: A history of enterprise data

H.1 The enterprise problem

H.2 The solution is, hmmm, was the data warehouse

H.3 The ephemeral data lake

H.4 Lightning fast cluster computing

H.5 Java rules, but we’re ok with Python

Appendix I: Reference for ingestion

I.1 Spark datatypes

I.2 Options for CSV ingestion

I.3 Options for JSON ingestion

I.4 Options for XML ingestion

I.5 Methods to implement to build a full dialect

I.6 Options for ingesting and writing data from/to a database

I.7 Options for ingesting and writing data from/to Elasticsearch

Appendix J: A reference for joins

Appendix K: Static functions ease your transformations

Appendix L: Lexicon

L.1 Components and definition at a glance

Appendix P: Installing Spark in production and a few tips

P.1 Installation

P.1.1 Installing Spark on Windows

P.1.2 Installing Spark on macOS

P.1.3 Installing Spark on Ubuntu

P.1.4 Installing Spark on AWS EMR

P.2 Understanding the installation

P.3 Configuration

P.3.1 Properties syntax

P.3.2 Application configuration

P.3.3 Runtime configuration

P.3.4 Other configuration points

Appendix S: Enough (of) Scala

S.1 What is Scala

S.2 Scala to Java conversion

S.2.1 Maps: conversion from Scala to Java

Appendix T: Reference for transformations and actions

Appendix Z: Finding help when you’re stuck

Z.1 Small annoyances here and there

Z.1.1 Service 'sparkDriver' failed after 16 retries…

Z.1.2 Corrupt record in ingestion

Z.2 Help in the outside world

Z.2.1 User mailing list

Z.2.2 Stack Overflow

About the Technology

Spark is a powerful general-purpose analytics engine that can handle massive amounts of data distributed across clusters with thousands of servers. Optimized to run in memory, this impressive framework can process data up to 100x faster than most Hadoop-based systems. Spark’s support for SQL, along with its ability to rapidly run repeated queries and quickly adapt to modified queries, makes it well suited for machine learning, which is so important in this age of big data. Whether you’re using Java, Scala, or Python, Spark offers straightforward APIs to access its core features.
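As a rough illustration of that SQL support, the sketch below, built on a hypothetical sales.csv file, registers a dataframe as a temporary view and queries it with standard SQL.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SqlOnDataframeSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("Spark SQL sketch")
        .master("local[*]")
        .getOrCreate();

    // Hypothetical sales data; any dataframe can be registered as a SQL view.
    Dataset<Row> sales = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("data/sales.csv");
    sales.createOrReplaceTempView("sales");

    // Standard SQL runs against the distributed dataset.
    Dataset<Row> topRegions = spark.sql(
        "SELECT region, SUM(amount) AS total FROM sales "
            + "GROUP BY region ORDER BY total DESC");
    topRegions.show();

    spark.stop();
  }
}
```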

About the book

Spark in Action, Second Edition is an entirely new book that teaches you everything you need to create end-to-end analytics pipelines in Spark. Rewritten from the ground up and packed with helpful graphics, the book teaches you the roles of DAGs and dataframes, the advantages of “lazy evaluation”, and ingestion from files, databases, and streams.
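Lazy evaluation, for example, means that transformations only extend the execution plan (the DAG) and nothing runs until an action is called. The following sketch, assuming a hypothetical records.csv file with a value column, shows the distinction:

```java
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LazyEvaluationSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("Lazy evaluation sketch")
        .master("local[*]")
        .getOrCreate();

    Dataset<Row> df = spark.read()
        .option("header", "true")
        .csv("data/records.csv");   // hypothetical input file

    // Transformations: nothing executes yet; Spark only builds up the DAG.
    Dataset<Row> transformed = df
        .withColumn("value", col("value").cast("double"))
        .filter(col("value").gt(100))
        .orderBy(col("value").desc());

    // Action: the whole plan is optimized and executed at this point.
    long rows = transformed.count();
    System.out.println("Matching rows: " + rows);

    spark.stop();
  }
}
```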

By working through carefully designed Java-based examples, you’ll delve into Spark SQL, interface with Python, and cache and checkpoint your data. Along the way, you’ll learn to interact with common enterprise data technologies like HDFS and file formats like Parquet, ORC, and Avro.
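As a taste of those topics, here is a minimal sketch, again with hypothetical file paths, that caches a dataframe for repeated queries and writes it out as Parquet:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class CacheAndParquetSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("Cache and Parquet sketch")
        .master("local[*]")
        .getOrCreate();

    Dataset<Row> df = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("data/events.csv");                  // hypothetical input file

    // Cache so that repeated queries reuse the in-memory copy.
    df.cache();
    System.out.println("Rows: " + df.count());    // first action materializes the cache

    // Persist the result in a columnar big-data format.
    df.write()
        .mode(SaveMode.Overwrite)
        .parquet("output/events.parquet");        // hypothetical output path

    spark.stop();
  }
}
```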

You’ll also discover interesting Spark use cases, like interactive reporting, machine learning pipelines, and even monitoring players in online games. You’ll even get a quick look at machine learning techniques you can apply without a PhD in mathematics! All examples are available on GitHub for you to explore and adapt as you learn. Demand for Spark-savvy developers is so high that they’re among the highest paid in the industry today!

What's inside

  • Lots of examples based on the Spark Java APIs using real-life datasets and scenarios
  • Examples based on Spark v2.3
  • Ingestion through files, databases, and streaming
  • Building a custom ingestion process
  • Querying distributed datasets with Spark SQL
  • Deploying Spark applications
  • Caching and checkpointing your data
  • Interfacing with data scientists using Python
  • Applied machine learning
  • Spark use cases including Lumeris, CERN, and IBM

About the reader

For beginning to intermediate developers and data engineers comfortable programming in Java. No experience with functional programming, Scala, Spark, Hadoop, or big data is required.

About the author

An experienced consultant and entrepreneur passionate about all things data, Jean Georges Perrin was the first IBM Champion in France, an honor he’s now held for ten consecutive years. Jean Georges has managed many teams of software and data engineers.

Manning Early Access Program (MEAP) Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.


One of the most simple, but powerful, introductions and dive-ins that you can ever have on an Apache library!

Igor Franca

A great book for beginners and prospective experts.

Markus Breuer