Mastering Large Datasets with Python
Parallelize and Distribute Your Python Code
John T. Wolohan
  • January 2020
  • ISBN 9781617296239
  • 312 pages
  • printed in black & white

“A clear and efficient path to mastery of the map and reduce paradigm for developers of all levels.”

Justin Fister, GrammarBot

Modern data science solutions need to be clean, easy to read, and scalable. In Mastering Large Datasets with Python, author J. T. Wolohan teaches you how to take a small project and scale it up using a functionally influenced approach to Python coding. You’ll explore methods and built-in Python tools that lend themselves to clarity and scalability, such as high-performing parallel processing, as well as distributed technologies that allow for high data throughput. The abundant hands-on exercises in this practical tutorial will lock in these essential skills for any large-scale data science project.

About the technology

Programming techniques that work well on laptop-sized data can slow to a crawl—or fail altogether—when applied to massive files or distributed datasets. By mastering the powerful map and reduce paradigm, along with the Python-based tools that support it, you can write data-centric applications that scale efficiently without requiring codebase rewrites as your requirements change.
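
The pattern is easy to see in miniature with Python's built-in tools. The following is a minimal sketch, not code from the book; the docs list and the word-count logic are invented for illustration. map applies one transformation to every item, and functools.reduce folds the transformed items down to a single result.

```python
from functools import reduce

# Hypothetical toy dataset, purely for illustration
docs = ["a clear path", "scale your python code", "map and reduce"]

# map: transform each document into a per-document word count
counts = map(lambda doc: len(doc.split()), docs)

# reduce: fold the per-document counts into one total
total = reduce(lambda acc, n: acc + n, counts, 0)

print(total)  # 10
```

Because the transformation and the accumulation are kept separate, the same structure works whether docs holds three strings or three billion.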

About the book

Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You’ll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You’ll then scale those same programs to industrial-sized datasets on a cluster of cloud servers. With the map and reduce paradigm firmly in place, you’ll explore tools like Hadoop and PySpark to efficiently process massive distributed datasets, speed up decision-making with machine learning, and simplify your data storage with AWS S3.
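
As a rough sketch of that first step (again illustrative, not the book's own code), a plain map over laptop-sized data can be handed to a pool of worker processes using the standard library's multiprocessing module; only the map call changes, while the per-item function stays the same.

```python
from multiprocessing import Pool

def word_count(doc):
    # Same per-item transformation as in the sequential version
    return len(doc.split())

if __name__ == "__main__":
    # Hypothetical workload, repeated to make parallelism worthwhile
    docs = ["a clear path", "scale your python code", "map and reduce"] * 10_000

    # Swap the built-in map for a parallel map across four worker processes
    with Pool(processes=4) as pool:
        counts = pool.map(word_count, docs, chunksize=500)

    print(sum(counts))  # 100000
```

The later chapters swap this local pool for Hadoop, PySpark, and EMR clusters while keeping the same map and reduce shape.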
Table of Contents

Part 1

1 Introduction

1.1 What you’ll learn in this book

1.2 Why large datasets?

1.3 What is parallel computing?

1.3.1 Understanding parallel computing

1.3.2 Scalable computing with the map and reduce style

1.3.3 When to program in a map and reduce style

1.4 The map and reduce style

1.4.1 The map function for transforming data

1.4.2 The reduce function for advanced transformations

1.4.3 Map and reduce for data transformation pipelines

1.5 Distributed computing for speed and scale

1.6 Hadoop: A distributed framework for map and reduce

1.7 Spark for high-powered map, reduce, and more

1.8 AWS Elastic MapReduce—Large datasets in the cloud

Summary

2 Accelerating large dataset work: Map and parallel computing

2.1 An introduction to map

2.1.1 Lazy functions

2.2 Parallel processing

2.2.1 Processors and processing

2.2.2 Parallelization and pickling

2.2.3 Order and parallelization

2.2.4 State and parallelization

2.3 Putting it all together: Scraping a Wikipedia network

2.3.1 Visualizing our graph

2.4 Exercises

2.4.1 Problems of parallelization

2.4.2 Map function

2.4.3 Parallelization and speed

2.4.4 Pickling storage

2.4.5 Web scraping data

2.4.6 Heterogeneous map transformations

Summary

3 Function pipelines for mapping complex transformations

3.1 Helper functions and function chains

3.2 Unmasking hacker communications

3.2.1 Creating helper functions

3.2.2 Creating a pipeline

3.3 Twitter demographic projections

3.3.1 Tweet-level pipeline

3.3.2 User-level pipeline

3.3.3 Applying the pipeline

3.4 Exercises

3.4.1 Helper functions and function pipelines

3.4.2 Math teacher trick

3.4.3 Caesar’s cipher

Summary

4 Processing large datasets with lazy workflows

4.1 What is laziness?

4.2 Some lazy functions to know

4.2.1 Shrinking sequences with the filter function

4.2.2 Combining sequences with zip

4.2.3 Lazy file searching with iglob

4.3 Understanding iterators: the magic behind lazy Python

4.3.1 The backbone of lazy Python: Iterators

4.3.2 Generators: Functions for creating data

4.4 The poetry puzzle: lazily processing a large dataset

4.4.1 Reading poems in with iglob

4.4.2 A poem cleaning regular expression class

4.4.3 Calculating the ratio of articles

4.5 Lazy simulations: Simulating fishing villages

4.5.1 Creating a village class

4.5.2 Designing the simulation class for our fishing simulation

4.6 Exercises

4.6.1 Lazy functions

4.6.2 Fizz Buzz generator

4.6.3 Repeat access

4.6.4 Parallel simulations

4.6.5 Scrabble words

Summary

5 Accumulation operations with reduce

5.1 N-to-X with reduce

5.2 The three parts of reduce

5.2.1 Accumulation functions in reduce

5.2.2 Concise accumulations with lambda functions

5.2.3 Initializers for complex start behavior in reduce

5.3 Reductions you’re familiar with

5.3.1 Creating a filter with reduce

5.3.2 Creating frequencies with reduce

5.4 Using map and reduce together

5.4.1 Using map to clean our car data

5.4.2 Using reduce for sums and counts

5.4.3 Applying the map and reduce pattern to cars data

5.5 Speeding up map and reduce

5.6 Exercises

5.6.1 Situations to use reduce

5.6.2 Lambda functions

5.6.3 Largest numbers

5.6.4 Group words by length

Summary

6 Speeding up map and reduce with advanced parallelization

6.1 Getting the most out of parallel map

6.1.1 Chunk sizes and getting the most out of parallel map

6.1.2 Parallel map runtime with variable sequence and chunk size

6.1.3 More parallel maps: imap and starmap

6.2 Solving the parallel map reduce paradox

6.2.1 Parallel reduce for faster reductions

6.2.2 Combination functions and the parallel reduce workflow

6.2.3 Implementing parallel summation, filter, and frequencies with fold

Summary

Part 2

7 Processing truly big datasets with Hadoop and Spark

7.1 Distributed computing

7.2 Hadoop for batch processing

7.2.1 Getting to know the four Hadoop modules

7.3 Using Hadoop to find high scoring words

7.3.1 MapReduce jobs using Python and Hadoop Streaming

7.3.2 Scoring words using Hadoop Streaming

7.4 Spark for interactive workflows

7.4.1 Big datasets in-memory with Spark

7.4.2 PySpark for mixing Python and Spark

7.4.3 Enterprise data analytics with Spark SQL

7.4.4 Columns of data with Spark DataFrame

7.5 Document word scores in Spark

7.5.1 Setting up Spark

7.5.2 MapReduce Spark jobs with spark-submit

7.6 Exercises

7.6.1 Hadoop streaming scripts

7.6.2 Spark interface

7.6.3 RDDs

7.6.4 Passing data between steps

Summary

8 Best practices for large data with Apache Streaming and mrjob

8.1 Unstructured data: logs and documents

8.2 Tennis analytics with Hadoop

8.2.1 A mapper for reading match data

8.2.2 Reducer for calculating tennis player ratings

8.3 mrjob for Pythonic Hadoop streaming

8.3.1 The Pythonic structure of an mrjob job

8.3.2 Counting errors with mrjob

8.4 Tennis match analysis with mrjob

8.4.1 Counting Serena’s dominance by court type

8.4.2 Sibling rivalry for the ages

8.5 Exercises

8.5.1 Hadoop data formats

8.5.2 More Hadoop data formats

8.5.3 Hadoop’s native tongue

8.5.4 Designing common patterns in mrjob

Summary

9 PageRank with map and reduce in PySpark

9.1 A closer look at PySpark

9.1.1 Map-like methods in PySpark

9.1.2 Reduce-like methods in PySpark

9.1.3 Convenience methods in PySpark

9.2 Tennis rankings with Elo and PageRank in PySpark

9.2.1 Revisiting Elo ratings with PySpark

9.2.2 Introducing the PageRank algorithm

9.2.3 Ranking tennis players with PageRank

9.3 Exercises

9.3.1 sumByKey

9.3.2 sumByKey with toolz

9.3.3 Spark & toolz

9.3.4 Wikipedia PageRank

Summary

10 Faster decision-making with machine learning and PySpark

10.1 What is machine learning?

10.1.1 Machine learning as self-adjusting judgmental algorithms

10.1.2 Common applications of machine learning

10.2 Machine learning basics with decision tree classifiers

10.2.1 Designing decision tree classifiers

10.2.2 Implementing a decision tree in PySpark

10.3 Fast random forest classifications in PySpark

10.3.1 Understanding random forest classifiers

10.3.2 Implementing a random forest classifier

10.4 Exercises

10.4.1 ML question

10.4.2 Decision trees on Iris dataset

10.4.3 Other classifiers

Summary

Part 3

11 Large datasets in the cloud with Amazon Web Services and S3

11.1 AWS Simple Storage Service—A solution for large datasets

11.1.1 Limitless storage with S3

11.1.2 Cloud-based storage for scalability

11.1.3 Objects for convenient heterogeneous storage

11.1.4 Managed service for conveniently managing large datasets

11.1.5 Life cycle policies for managing large datasets over time

11.2 Storing data in the cloud with S3

11.2.1 Storing data with S3 through the browser

11.2.2 Programmatic access to S3 with Python and boto

11.3 Exercises

11.3.1 S3 Storage classes

11.3.2 S3 storage region

11.3.3 Object storage

Summary

12 MapReduce in the cloud with Amazon’s Elastic MapReduce

12.1 Running Hadoop on EMR with mrjob

12.1.1 Convenient cloud clusters with EMR

12.1.2 Starting EMR clusters with mrjob

12.1.3 The AWS EMR browser interface

12.2 Machine learning in the cloud with Spark on EMR

12.2.1 Writing our machine learning model

12.2.2 Setting up an EMR cluster for Spark

12.2.3 Running PySpark jobs from our cluster

12.3 Exercises

12.3.1 R-series cluster

12.3.2 Back-to-back Hadoop jobs

12.3.3 Instance types

Summary

What's inside

  • An introduction to the map and reduce paradigm
  • Parallelization with the multiprocessing module and pathos framework
  • Hadoop and Spark for distributed computing (see the PySpark sketch after this list)
  • Running AWS jobs to process large datasets
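
The last two bullets meet in PySpark, where the same map and reduce calls are distributed across a cluster. The snippet below is a minimal local sketch rather than an excerpt from the book; the document list and application name are invented.

```python
from pyspark import SparkContext

# Hypothetical local run; on a real cluster the master URL would differ
sc = SparkContext("local[*]", "word-count-sketch")

docs = sc.parallelize(["a clear path", "scale your python code", "map and reduce"])

# The familiar map and reduce pattern, now executed by Spark's workers
total = docs.map(lambda doc: len(doc.split())).reduce(lambda a, b: a + b)

print(total)  # 10
sc.stop()
```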

About the reader

For Python programmers who need to work faster with more data.

About the author

J. T. Wolohan is a lead data scientist at Booz Allen Hamilton and a PhD researcher at Indiana University, Bloomington.

print book: $29.99 (pBook + eBook + liveBook)
eBook: $24.99 (3 formats + liveBook)