Spark in Action, Second Edition
Jean-Georges Perrin
  • ISBN 9781617295522
  • 565 pages (estimated)
  • printed in black & white

"I would say that this is the best book on Spark I've read."

Kelvin Johnson
The Spark distributed data processing platform provides an easy-to-implement tool for ingesting, streaming, and processing data from any source. In Spark in Action, Second Edition, you’ll learn to take advantage of Spark’s core features and incredible processing speed, with applications including real-time computation, delayed evaluation, and machine learning. Spark skills are a hot commodity in enterprises worldwide, and with Spark’s powerful and flexible Java APIs, you can reap all the benefits without first learning Scala or Hadoop.

Unlike many Spark books written for data scientists, Spark in Action, Second Edition is designed for data engineers and software engineers who want to master data processing using Spark without having to learn a complex new ecosystem of languages and tools. You’ll instead learn to apply your existing Java and SQL skills to take on practical, real-world challenges.
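
To give a flavor of how those Java and SQL skills carry over, here is a minimal sketch of a first Spark application in Java. It is not an example from the book: it assumes the Spark SQL dependencies are on the classpath and uses a hypothetical CSV path (data/books.csv) that you would replace with your own file.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FirstCsvIngestionApp {

  public static void main(String[] args) {
    // Build a local Spark session; on a cluster you would point master elsewhere
    SparkSession spark = SparkSession.builder()
        .appName("CSV to dataframe")
        .master("local[*]")
        .getOrCreate();

    // Ingest a CSV file into a dataframe (a Dataset<Row> in the Java API)
    Dataset<Row> df = spark.read()
        .format("csv")
        .option("header", "true")
        .load("data/books.csv"); // hypothetical path, adjust to your own data

    // Show the first rows and the inferred schema
    df.show(5);
    df.printSchema();

    spark.stop();
  }
}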

Table of Contents

Part 1: The theory crippled by awesome examples

1 So, what is Spark, anyway?

1.1 The big picture: what Spark is and what it does

1.1.1 What is Spark?

1.1.2 How can you use Spark?

1.1.3 Spark in a data processing scenario

1.1.4 The four pillars of manna

1.1.5 Spark in a data science scenario

1.2 What can I do with Spark?

1.2.1 Spark predicts restaurant quality at NC Eatery

1.2.2 Spark allows fast data transfer for Lumeris

1.2.3 Spark analyzes equipment logs for CERN

1.2.4 Other use cases

1.3 Why you will love the dataframe

1.3.1 The dataframe from a Java perspective

1.3.2 The dataframe from an RDBMS perspective

1.3.3 A graphical representation of the dataframe

1.4 Your first example

1.4.2 Downloading the code

1.4.3 Running your first application

1.4.4 Your first code

1.5 What will you learn in this book?

1.6 Summary

2 Architecture and flow

2.1 Building your mental model

2.2 Using Java code to build your mental model

2.3 Walking through your application

2.3.1 Connecting to a master

2.3.2 Loading, or ingesting, the CSV file

2.3.3 Transforming your data

2.3.4 Saving the work done in your dataframe to a database

2.4 Summary

3 The majestic role of the dataframe

3.1 The essential role of the dataframe in Spark

3.1.1 Organization of a dataframe

3.1.2 Immutability is not a swear word

3.2 Using dataframes through examples

3.2.1 A dataframe after a simple CSV ingestion

3.2.2 Data is stored in partitions

3.2.3 Digging in the schema

3.2.4 A dataframe after a JSON ingestion

3.2.5 Combining two dataframes

3.3 The dataframe is a Dataset<Row>

3.3.1 Reusing your POJOs

3.3.2 Creating a dataset of strings

3.3.3 Converting back and forth

3.4 Dataframe’s ancestor: the RDD

3.5 Summary

4 Fundamentally lazy

4.1 A real-life example of efficient laziness

4.2 A Spark example of efficient laziness

4.2.1 Looking at the results of transformations and actions

4.2.2 The transformation process, step by step

4.2.3 The code behind the transformation/action process

4.2.4 The mystery behind the creation of 7 million datapoints in 182 ms

4.2.5 The mystery behind the timing of actions

4.3 Comparing to RDBMS and traditional applications

4.3.1 Working with the teen birth rates dataset

4.3.2 Analyzing differences between a traditional app and a Spark app

4.4 Spark is amazing for data-focused applications

4.5 Catalyst is your app catalyzer

4.6 Summary

5 Building a simple app for deployment

5.1 An ingestion-less example

5.1.1 Calculating π

5.1.2 The code to approximate π

5.1.3 What are lambda functions in Java?

5.1.4 Approximating π by using lambda functions

5.2 Interacting with Spark

5.2.1 Local mode

5.2.2 Cluster mode

5.2.3 Interactive mode in Scala and Python

5.3 Summary

6 Deploying your simple app

6.1 Beyond the example: the role of the components

6.1.1 Quick overview of the components and their interaction

6.1.2 Some of the fine print of the Spark architecture

6.1.3 Going further

6.2 Building a cluster

6.2.1 Building a cluster that works for you

6.2.2 Setting up the environment

6.3 Building your application to run on the cluster

6.3.1 Building your application’s uber JAR

6.3.2 Building your application using Git and Maven

6.4 Running your application on the cluster

6.4.1 Submitting the uber JAR

6.4.2 Running the application

6.4.3 Analyzing the Spark user interface

6.5 Summary

Part 2: Ingestion

7 Ingestion from files

7.1 Common behaviors of parsers

7.2 Complex ingestion from CSV

7.2.1 Desired output

7.2.2 Code

7.3 Ingesting a CSV with a known schema

7.3.1 Desired output

7.3.2 Code

7.4 Ingesting a JSON file

7.4.1 Desired output

7.4.2 Code

7.5 Ingesting a multiline JSON file

7.5.1 Desired output

7.5.2 Code

7.6 Ingesting an XML file

7.6.1 Desired output

7.6.2 Code

7.7 Ingesting a text file

7.7.1 Desired output

7.7.2 Code

7.8 File formats for Big Data

7.8.1 The problem with traditional file formats

7.8.2 Avro is a schema-based serialization format

7.8.3 ORC is a columnar storage format

7.8.4 Parquet is also a columnar storage format

7.8.5 Comparing Avro, ORC, and Parquet

7.9 Ingesting Avro, ORC, and Parquet files

7.9.1 Ingesting Avro

7.9.2 Ingesting ORC

7.9.3 Ingesting Parquet

7.9.4 Ingesting Avro, ORC, or Parquet: reference table

7.10 Summary

8 Ingestion from databases

8.1 Ingestion from relational databases

8.1.1 Database connection checklist

8.1.2 Understanding the data used in the examples

8.1.3 Desired output

8.1.4 Code

8.1.5 Alternative code

8.2 The role of the dialect

8.2.1 What is a dialect anyway?

8.2.2 JDBC dialects provided with Spark

8.2.3 Building your own dialect

8.3 Advanced queries and ingestion

8.3.1 Filtering using a where clause

8.3.2 Joining data in the database

8.3.3 Ingestion and partitioning

8.3.4 Summary of advanced features

8.4 Ingestion from Elasticsearch

8.4.1 Data flow

8.4.2 The New York restaurants dataset digested by Spark

8.4.3 Code to ingest the restaurant dataset from Elasticsearch

8.5 Summary

9 Advanced ingestion: finding data sources & building your own

9.1 What is a data source?

9.2 Benefits of a direct connection to a data source

9.2.1 Temporary files

9.2.2 Data quality scripts

9.2.3 Get data on demand

9.3 Finding data sources at Spark Packages

9.4 Build your own data source

9.4.1 Scope of the example project

9.4.2 Your data source API and options

9.5 Behind the scenes: building the data source itself

9.6 The register file and the advertiser class

9.7 The relation between the data and schema

9.7.1 The data source builds the relation

9.7.2 Inside the relation

9.8 Building the schema from a JavaBean

9.9 Building the dataframe is magic with the utilities

9.10 The other classes

9.11 Summary

10 Ingestion through structured streaming

10.1 What’s streaming?

10.2 Creating your first stream

10.2.1 Generating a file stream

10.2.2 Consuming the records

10.2.3 Getting records, not lines

10.3 Ingest data from network streams

10.4 Dealing with multiple streams

10.5 Discretized and structured streaming

10.6 Summary

Part 3: Transforming your data

11 Working with SQL

11.1 Working with Spark SQL

11.2 The difference between local and global views

11.3 Mixing the dataframe API and Spark SQL

11.4 Don’t DELETE it!

11.5 Going further with SQL

11.6 Summary

12 Transforming your data

12.1 What is data transformation?

12.2 Process and example of record-level transformation

12.2.1 Data discovery to understand the complexity

12.2.2 Data mapping to draw the process

12.2.3 Writing the transformation code

12.2.4 Reviewing your data transformation to ensure a quality process

12.2.5 What about sorting?

12.2.6 Wrapping up your first Spark transformation

12.3 Joining datasets

12.3.1 A closer look at the datasets to join

12.3.2 Building the list of higher education institutions per county

12.3.3 Performing the joins

12.4 Performing more transformations

12.5 Summary

13 Transforming entire documents

13.1 Transforming entire documents and their structure

13.1.1 Flattening your JSON document

13.1.2 Building nested documents for transfer and storage

13.2 The magic behind static functions

13.3 Performing more transformations

13.4 Summary

14 Extending transformations with user-defined functions (UDFs)

14.1 Extending Apache Spark

14.2 Registering and calling a UDF

14.2.1 Registering the UDF with Spark

14.2.2 Using the UDF with the dataframe API

14.2.3 Manipulating UDFs with SQL

14.2.4 Implementing the UDF

14.2.5 Writing the service itself

14.3 Using UDFs to ensure a high level of data quality

14.4 Considering UDFs’ constraints

14.5 Summary

15 Aggregating your data

15.1 Aggregating data with Spark

15.1.1 A quick reminder on aggregations

15.1.2 Performing basic aggregations with Spark

15.2 Performing aggregations with live data

15.2.1 Preparing your dataset

15.2.2 Aggregating data to better understand the schools

15.3 Building custom aggregations with UDAF

15.4 Summary

Part 4: Going Further

16 Cache and checkpoint: enhancing Spark’s performance

16.1 Caching and checkpointing can increase performance

16.1.1 The usefulness of Spark caching

16.1.2 The subtle effectiveness of Spark checkpointing

16.1.3 Using cache and checkpoint

16.2 Caching in action

16.3 Going further in performance optimization

16.4 Summary

17 Exporting data & building full data pipelines

17.1 Exporting data

17.1.1 Building a pipeline with NASA datasets

17.1.2 Transforming columns to datetime

17.1.3 Transforming the confidence percentage to confidence level

17.1.4 Exporting the data

17.1.5 Exporting the data: what really happened?

17.2 Delta Lake: enjoying a database close to your system

17.2.1 Understanding why a database is needed

17.2.2 Using Delta Lake in your data pipeline

17.2.3 Consuming data from Delta Lake

17.3 Accessing cloud storage services from Spark

17.4 Summary

18 Exploring the deployment constraints

18.1 Managing resources with YARN, Mesos, and Kubernetes

18.1.1 The built-in standalone mode manages resources

18.1.2 YARN manages resources in a Hadoop environment

18.1.3 Mesos is a standalone resource manager

18.1.4 Kubernetes orchestrates containers

18.1.5 Choosing the right resource manager

18.2 Sharing files with Spark

18.2.1 Accessing the data contained in files

18.2.2 Sharing files through distributed file systems

18.2.3 Accessing files on shared drives or a file server

18.2.4 Using file sharing services to distribute files

18.2.5 Other options for accessing files in Spark

18.2.6 Hybrid solution for sharing files with Spark

18.3 Making sure your Spark application is secure

18.3.1 Securing the network components of your infrastructure

18.3.2 Securing Spark’s disk usage

18.4 Summary

Appendixes

Appendix A: Installing Eclipse

A.1 Downloading Eclipse

A.2 Running Eclipse for the first time

Appendix B: Installing Maven

B.1 Installation on Windows

B.2 Installation on macOS

B.3 Installation on Ubuntu

B.4 Installation on RHEL / Amazon EMR

B.5 Manual installation on Linux and other Unix-like operating systems

Appendix C: Installing Git

C.1 Installing Git on Windows

C.2 Installing Git on macOS

C.3 Installing Git on Ubuntu

C.4 Installing Git on RHEL / AWS EMR

C.5 Other tools to consider

Appendix D: Downloading the code

D.1 Downloading the source code from the command line

D.2 Getting started in Eclipse

Appendix E: Installing Elasticsearch and sample data

E.1 Software installation

E.1.1 All platforms

E.1.2 macOS with Homebrew

E.2 Installing the NYC restaurant dataset

E.3 Elasticsearch vocabulary

E.4 Useful commands

E.4.1 Get the server status

E.4.2 Display the structure

E.4.3 Count documents

Appendix F: Maven quick cheat sheet

F.1 Source of packages

F.2 Useful commands

F.3 Typical Maven lifecycle

F.4 Useful configuration

F.4.1 Built-in properties

F.4.2 Building an uber JAR

F.4.3 Including the source code

F.4.4 Executing from Maven

Appendix G: Getting help with relational databases

G.1 Informix (IBM)

G.1.1 Installing Informix on macOS

G.1.2 Installing Informix on Windows

G.2 MariaDB

G.2.1 Installing MariaDB on macOS

G.2.2 Installing MariaDB on Windows

G.3 MySQL (Oracle)

G.3.1 Installing MySQL on macOS

G.3.2 Installing MySQL on Windows

G.3.3 Loading the Sakila database

G.4 PostgreSQL

G.4.1 Installing PostgreSQL on macOS and Windows

G.4.2 Installing PostgreSQL on Linux

G.4.3 GUI clients for PostgreSQL

Appendix H: A history of enterprise data

H.1 The enterprise problem

H.2 The solution is, hmmm, was the data warehouse

H.3 The ephemeral data lake

H.4 Lightning-fast cluster computing

H.5 Java rules, but we’re ok with Python

Appendix I: Reference for ingestion

I.1 Spark datatypes

I.2 Options for CSV ingestion

I.3 Options for JSON ingestion

I.4 Options for XML ingestion

I.5 Methods to implement to build a full dialect

I.6 Options for ingesting and writing data from/to a database

I.7 Options for ingesting and writing data from/to Elasticsearch

Appendix J: A reference for joins

J.1 Setting up the decorum

J.2 Performing an inner join

J.3 Performing an outer join

J.4 Performing a left or left outer join

J.5 Performing a right or right outer join

J.6 Performing a left semi join

J.7 Performing a left anti join

J.8 Performing a cross join

Appendix K: Static functions ease your transformations

K.1 Functions per category

K.1.2 Aggregate functions

K.1.3 Arithmetical functions

K.1.4 Array manipulation functions

K.1.5 Binary operations

K.1.6 Comparison functions

K.1.7 Compute function

K.1.8 Conditional operations

K.1.9 Conversion functions

K.1.10 Data shape functions

K.1.11 Date and time functions

K.1.12 Digest functions

K.1.13 Encoding functions

K.1.14 Formatting functions

K.1.15 JSON (JavaScript object notation) functions

K.1.16 List functions

K.1.17 Mathematical functions

K.1.18 Navigation functions

K.1.19 Rounding functions

K.1.20 Sorting functions

K.1.21 Statistical functions

K.1.22 Streaming functions

K.1.23 String functions

K.1.24 Technical functions

K.1.25 Trigonometry functions

K.1.26 UDFs (user-defined functions) helpers

K.1.27 Validation functions

K.1.28 Deprecated functions

K.2 Function appearance per version of Spark

K.2.1 Functions that appeared in Spark v2.4.0

K.2.2 Functions that appeared in Spark v2.3.0

K.2.3 Functions that appeared in Spark v2.2.0

K.2.4 Functions that appeared in Spark v2.1.0

K.2.5 Functions that appeared in Spark v2.0.0

K.2.6 Functions that appeared in Spark v1.6.0

K.2.7 Functions that appeared in Spark v1.5.0

K.2.8 Functions that appeared in Spark v1.4.0

K.2.9 Functions that appeared in Spark v1.3.0

Appendix L: Lexicon

L.1 Components and definition at a glance

Appendix M: Generating streaming data

M.1 Need for generating streaming data

M.2 A simple stream

M.3 Joined data

M.4 Types of fields

Appendix N: Reference for streaming

N.1 Output mode

N.2 Sinks

N.3 Sinks, output modes, and options

N.4 Examples of using the different sinks

N.4.1 Output in a file

N.4.2 Output to a Kafka topic

N.4.3 Processing streamed records through foreach

N.4.4 Output in memory and processing from memory

Appendix P: Spark in production: installation and a few tips

P.1 Installation

P.1.1 Installing Spark on Windows

P.1.2 Installing Spark on macOS

P.1.3 Installing Spark on Ubuntu

P.1.4 Installing Spark on AWS EMR

P.2 Understanding the installation

P.3 Configuration

P.3.1 Properties syntax

P.3.2 Application configuration

P.3.3 Runtime configuration

P.3.4 Other configuration points

Appendix S: Enough (of) Scala

S.1 What is Scala

S.2 Scala to Java conversion

S.2.1 Maps: conversion from Scala to Java

Appendix T: Reference for transformations and actions

T.1 Transformations

T.2 Actions

Appendix X: Reference for exporting data

X.1 Specifying the way to save data

X.2 Spark export formats

X.3 Options for the main formats

X.3.1 Exporting as CSV

X.3.2 Exporting as JSON

X.3.3 Exporting as Parquet

X.3.4 Exporting as ORC

X.3.5 Exporting as XML

X.3.6 Exporting as text

X.3.7 Exporting data to Delta Lake

Appendix Z: Finding help when you’re stuck

Z.1 Small annoyances here and there

Z.1.1 Service 'sparkDriver' failed after 16 retries…

Z.1.2 Requirement failed

Z.1.3 Class cast exception

Z.1.4 Corrupt record in ingestion

Z.1.5 Cannot find winutils.exe

Z.2 Help in the outside world

Z.2.1 User mailing list

Z.2.2 Stack Overflow

About the technology

Spark is a powerful general-purpose analytics engine that can handle massive amounts of data distributed across clusters with thousands of servers. Optimized to run in memory, this impressive framework can process data up to 100x faster than most Hadoop-based systems. Spark’s support for SQL, along with its ability to rapidly run repeated queries and quickly adapt to modified queries, makes it well suited for machine learning, so important in this age of big data. Whether you’re using Java, Scala, or Python, Spark offers straightforward APIs to access its core features.

About the book

Spark in Action, Second Edition is an entirely new book that teaches you everything you need to create end-to-end analytics pipelines in Spark. Rewritten from the ground up with lots of helpful graphics, it walks you through the roles of DAGs and dataframes, the advantages of “lazy evaluation”, and ingestion from files, databases, and streams.

By working through carefully designed Java-based examples, you’ll delve into Spark SQL, interface with Python, and cache and checkpoint your data. Along the way, you’ll learn to interact with common enterprise data technologies like HDFS and file formats like Parquet, ORC, and Avro.
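
As a rough illustration of the style of those examples, the sketch below combines CSV ingestion, a Spark SQL query over a temporary view, caching, and a Parquet export. It is a hedged sketch rather than code from the book: the file path and the column names (name, rating) are hypothetical placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SqlCacheExportApp {

  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("Spark SQL, caching, and Parquet export")
        .master("local[*]")
        .getOrCreate();

    // Ingest a CSV file (hypothetical path and columns) into a dataframe
    Dataset<Row> df = spark.read()
        .format("csv")
        .option("header", "true")
        .option("inferSchema", "true")
        .load("data/restaurants.csv");

    // Cache the dataframe so repeated queries reuse the in-memory copy
    df.cache();

    // Expose the dataframe to Spark SQL through a temporary view
    df.createOrReplaceTempView("restaurants");
    Dataset<Row> topRated = spark.sql(
        "SELECT name, rating FROM restaurants WHERE rating >= 4 ORDER BY rating DESC");
    topRated.show(10);

    // Export the result as Parquet, one of the columnar formats covered in the book
    topRated.write().mode("overwrite").parquet("output/top_rated.parquet");

    spark.stop();
  }
}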

You’ll also discover interesting Spark use cases, like interactive reporting, machine learning pipelines, and even monitoring players in online games. You’ll even get a quick look at machine learning techniques you can apply without a PhD in mathematics! All examples are available on GitHub for you to explore and adapt as you learn. The demand for Spark-savvy developers is so steep that they’re among the highest paid in the industry today!

What's inside

  • Lots of examples based on the Spark Java APIs, using real-life datasets and scenarios
  • Examples based on Spark v2.3
  • Ingestion through files, databases, and streaming
  • Building a custom ingestion process
  • Querying distributed datasets with Spark SQL
  • Deploying Spark applications
  • Caching and checkpointing your data
  • Interfacing with data scientists using Python
  • Applied machine learning
  • Spark use cases including Lumeris, CERN, and IBM

About the reader

For beginning to intermediate developers and data engineers comfortable programming in Java. No experience with functional programming, Scala, Spark, Hadoop, or big data is required.

About the author

An experienced consultant and entrepreneur passionate about all things data, Jean-Georges Perrin was the first IBM Champion in France, an honor he’s now held for ten consecutive years. Jean-Georges has managed many teams of software and data engineers.

Manning Early Access Program (MEAP): Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.