Mastering Large Datasets with Python
Parallelize and Distribute Your Python Code
John T. Wolohan
  • MEAP began February 2019
  • Publication in December 2019 (estimated)
  • ISBN 9781617296239
  • 296 pages (estimated)
  • printed in black & white

The author does an incredible job of exploring each aspect of manipulating large datasets with Python, locally and in the cloud.

Ariel Gamino
Modern data science solutions need to be clean, easy to read, and scalable. In Mastering Large Datasets with Python, author J.T. Wolohan teaches you how to take a small project and scale it up using a functionally influenced approach to Python coding. You’ll explore methods and built-in Python tools that lend themselves to clarity and scalability, like the high-performance parallel map, as well as distributed technologies that allow for high data throughput. The abundant hands-on exercises in this practical tutorial will lock in these essential skills for any large-scale data science project.
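To give a flavor of that scaling path, here is a minimal sketch (ours, not an excerpt from the book) of how a serial call to Python's built-in map can be swapped for a parallel one from the standard multiprocessing module:

    from multiprocessing import Pool

    def square(n):
        # A stand-in for any per-item transformation.
        return n * n

    numbers = list(range(10))

    if __name__ == "__main__":
        # Serial version: the built-in map.
        serial = list(map(square, numbers))

        # Parallel version: the same call shape, with the work
        # spread across a pool of worker processes.
        with Pool(processes=4) as pool:
            parallel = pool.map(square, numbers)

        assert serial == parallel

The guard around __name__ matters on platforms that spawn rather than fork worker processes.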
Table of Contents

Part 1: The map & reduce style

1 Introduction

1.1 What you will learn in this book

1.2 What is parallel computing?

1.2.1 Understanding parallel computing

1.2.2 Map & Reduce style as a construction project

1.2.3 When to program in a Map & Reduce style?

1.3 The Map & Reduce style

1.3.1 The map function

1.3.2 The reduce function

1.3.3 Map and Reduce

1.4 Distributed computing

1.5 Hadoop

1.6 Spark

1.7 AWS Elastic MapReduce

1.8 Summary

2 Working with large datasets faster: parallelization and the map function

2.1 An introduction to map

2.1.1 Lazy functions

2.2 Parallel processing

2.2.1 Processors and Processing

2.2.2 Parallelization and Pickling

2.2.3 Order and Parallelization

2.2.4 State and Parallelization

2.3 Putting it all together: Scraping a Wikipedia network

2.3.1 Visualizing our graph

2.4 Exercises

2.4.1 Problems of parallelization

2.4.2 Map function

2.4.3 Parallelization and speed

2.4.4 Pickling storage

2.4.5 Web scraping data

2.4.6 Heterogeneous map transformations

2.5 Summary

3 Function pipelines for mapping complex transformations

3.1 Helper functions and function chains

3.2 Unmasking Hacker Communications

3.2.1 Creating helper functions

3.2.2 Creating a pipeline

3.3 Twitter demographic projections

3.3.1 Tweet-level pipeline

3.3.2 User-level pipeline

3.3.3 Applying the pipeline

3.4 Exercises

3.4.1 Helper functions and function pipelines

3.4.2 “Math teacher” trick

3.4.3 Caesar’s cipher

3.5 Summary

4 Processing large datasets with lazy workflows

4.1 What is laziness?

4.2 Some lazy functions to know

4.2.1 Shrinking sequences with the filter function

4.2.2 Combining sequences with zip

4.2.3 Lazy file searching with iglob

4.3 Understanding iterators: the magic behind lazy Python

4.3.1 The backbone of lazy Python: Iterators

4.3.2 Generators: Functions for creating data

4.4 The poetry puzzle: lazily processing a large dataset

4.4.1 Reading poems in with iglob

4.4.2 A poem cleaning regular expression class

4.4.3 Calculating the ratio of articles

4.5 Lazy simulations: Simulating fishing villages

4.5.1 Creating a village class

4.5.2 Designing the simulation class for our fishing simulation

4.6 Exercises

4.6.1 Lazy functions

4.6.2 Fizz Buzz generator

4.6.3 Repeat access

4.6.4 Parallel simulations

4.6.5 Scrabble words

4.7 Summary

5 Accumulation operations with Reduce

5.1 N-to-X with Reduce

5.2 The three parts of Reduce

5.2.1 Accumulation functions in Reduce

5.2.2 Concise accumulations with lambda functions

5.2.3 Initializers for complex start behavior in reduce

5.3 Reductions you’re familiar with

5.3.1 Creating a filter with reduce

5.3.2 Creating frequencies with reduce

5.4 Using map and reduce together

5.4.1 Using map to clean our car data

5.4.2 Using reduce for sums and counts

5.4.3 Applying the map and reduce pattern to cars data

5.6 Speeding up map and reduce

5.7 Exercises

5.7.1 Situations to use reduce

5.7.2 Lambda functions

5.7.3 Largest numbers

5.7.4 Group words by length

5.8 Summary

6 Speeding up map and reduce with advanced parallelization

6.1 Getting the most out of parallel map

6.1.1 Chunk sizes and getting the most out of parallel map

6.1.2 Parallel map runtime with variable sequence and chunk size

6.1.3 More parallel maps: imap and starmap

6.2 Solving the parallel map reduce paradox

6.2.1 Parallel reduce for faster reductions

6.2.2 Combination functions and the parallel reduce workflow

6.2.3 Implementing parallel summation, filter, and frequencies with fold

6.3 Summary

Part 2: Distributed frameworks

7 Processing truly big datasets with Hadoop and Spark

7.1 Distributed computing

7.2 Hadoop for batch processing

7.2.1 Getting to know the four Hadoop modules

7.3 Using Hadoop to find high scoring words

7.3.1 MapReduce jobs using Python and Hadoop Streaming

7.3.2 Scoring words using Hadoop Streaming

7.4 Spark for interactive workflows

7.4.1 Big datasets in-memory with Spark

7.4.2 PySpark for mixing Python and Spark

7.4.3 Enterprise data analytics with Spark SQL

7.4.4 Columns of data with Spark DataFrame

7.5 Document word scores in Spark

7.5.1 Setting up Spark

7.5.2 MapReduce Spark jobs with spark-submit

7.6 Summary

8 Best practices for large data with Hadoop Streaming and mrjob

8.1 Unstructured data: logs and documents

8.2 Tennis analytics with Hadoop

8.2.1 A mapper for reading match data

8.2.2 Reducer for calculating tennis player ratings

8.3 mrjob for Pythonic Hadoop streaming

8.3.1 The Pythonic structure of an mrjob job

8.3.2 Counting errors with mrjob

8.4 Tennis match analysis with mrjob

8.4.1 Counting Serena’s dominance by court type

8.4.2 Sibling rivalry for the ages

8.5 Summary

9 PageRank with Map and Reduce in PySpark

9.1 A closer look at PySpark

9.1.1 Map-like methods in PySpark

9.1.2 Reduce-like methods in PySpark

9.1.3 Convenience methods in PySpark

9.2 Tennis rankings with Elo and PageRank in PySpark

9.2.1 Revisiting Elo ratings with PySpark

9.2.2 Introducing the PageRank algorithm

9.2.3 Ranking tennis players with PageRank

9.3 Exercises

9.3.1 sumByKey

9.3.2 sumByKey with toolz

9.3.3 Spark & toolz

9.3.4 Wikipedia PageRank

9.4 Summary

10 Faster decision making with machine learning and PySpark

10.1 What is machine learning?

10.1.1 Machine learning as self-adjusting judgmental algorithms

10.1.2 Common applications of machine learning

10.2 Machine learning basics with decision tree classifiers

10.2.1 Designing decision tree classifiers

10.2.2 Implementing a decision tree in PySpark

10.3 Fast random forest classifications in PySpark

10.3.1 Understanding random forest classifiers

10.3.2 Implementing a random forest classifier

10.4 Exercises

10.4.1 ML question

10.4.2 Decision trees on Iris dataset

10.4.3 Other classifiers

10.5 Summary

Part 3: Massively parallel cloud

11 Large datasets in the cloud with Amazon Web Services and S3

12 MapReduce in the cloud with Amazon’s Elastic MapReduce

About the Technology

Python is a data scientist’s dream come true, thanks to readily available libraries that support tasks like data analysis, machine learning, visualization, and numerical computing. What’s more, Python’s high-level nature makes for easy-to-read, concise code, which means speedy development and easy maintenance—valuable benefits in the multi-user development environments so prevalent in the realm of big data analysis. Python excels at this kind of work, thanks in part to features like its map and reduce functions.
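As one illustration (a minimal sketch of ours, not code from the book), the built-in map and functools.reduce can express a complete word-count workflow in a few lines:

    from functools import reduce

    def tokenize(line):
        # Map step: one line of text in, a list of lowercase words out.
        return line.lower().split()

    def count(acc, word):
        # Reduce step: fold a single word into a running frequency table.
        acc[word] = acc.get(word, 0) + 1
        return acc

    lines = ["Python is fun", "big data is fun"]
    words = (w for ws in map(tokenize, lines) for w in ws)  # map stays lazy
    frequencies = reduce(count, words, {})  # {} is the initial accumulator
    print(frequencies)  # {'python': 1, 'is': 2, 'fun': 2, 'big': 1, 'data': 1}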



The demand for data scientists is high!

According to a recent IBM study, not only will the demand for data scientists skyrocket 28% by 2020, but those who can confidently work with programming concepts like map/reduce, distributed data technologies like Hadoop and Spark, and cloud platforms like AWS, GCP, and Azure command the highest salaries! With the ever-increasing volume of accessible data, the forecast is bright for data scientists with these valuable skills!

About the book

Mastering Large Datasets with Python teaches you to write easily readable, easily scalable Python code that can efficiently process large volumes of structured and unstructured data. With an emphasis on clarity, style, and performance, author J.T. Wolohan expertly guides you through implementing a functionally influenced approach to Python coding. You’ll get familiar with Python’s functional built-ins in the functools, operator, and itertools modules, as well as the third-party toolz library.
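As a taste, a lazy pipeline can lean on all four at once. This sketch is ours, under the assumption that toolz is installed (pip install toolz); it is not an example from the book:

    from functools import reduce
    from itertools import islice
    from operator import add

    from toolz import pipe  # third-party functional utilities

    # Sum the squares of the first five even numbers, lazily.
    evens = (n for n in range(100) if n % 2 == 0)
    total = pipe(
        evens,
        lambda xs: map(lambda n: n * n, xs),  # transform without materializing
        lambda xs: islice(xs, 5),             # take only five items
        lambda xs: reduce(add, xs),           # fold down to a single number
    )
    print(total)  # 0 + 4 + 16 + 36 + 64 = 120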

You’ll also dive into parallel processing using the standard multiprocessing module, the third-party pathos framework, Apache Hadoop, Apache Spark, and PySpark. You’ll even learn how to use these tools on a cloud platform like AWS. The many hands-on exercises throughout ensure your new knowledge sticks. By the end of this comprehensive guide, you’ll have a solid grasp on the tools and methods that will take your code beyond the laptop and your data science career to the next level!
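For a sense of where that path leads, here is a hedged sketch (ours, assuming a local PySpark installation; the app name and input strings are illustrative) of the same map-and-reduce pattern running as a Spark job:

    from operator import add

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "word-count-sketch")
    lines = sc.parallelize(["python is fun", "big data is fun"])
    counts = (lines
              .flatMap(lambda line: line.split())  # map-like: line -> many words
              .map(lambda word: (word, 1))         # pair each word with a count
              .reduceByKey(add))                   # reduce-like: sum counts per word
    print(counts.collect())
    sc.stop()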

What's inside

  • An introduction to functional and parallel programming
  • Data science workflow
  • Profiling code for better performance
  • Python multiprocessing
  • Practical exercises including full-scale distributed applications

About the reader

Readers should have intermediate Python programming skills.

About the author

J.T. Wolohan is a lead data scientist at Booz Allen Hamilton and a PhD researcher at Indiana University, Bloomington, affiliated with the Department of Information and Library Science and the School of Informatics and Computing. His professional work focuses on rapid prototyping and scalable AI. His research focuses on computational analysis of social uses of language online.

Manning Early Access Program (MEAP) Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.
MEAP combo $49.99 pBook + eBook + liveBook
MEAP eBook $25.00 (discounted from $39.99) pdf + ePub + kindle + liveBook
