Modern data science solutions need to be clean, easy to read, and scalable. In Python for Big Datasets, author J.T. Wolohan teaches you how to take a small project and scale it up using a functionally influenced approach to Python coding. You’ll explore methods and built-in Python tools that lend themselves to clarity and scalability, like high-performing parallel map functions, as well as distributed technologies that allow for high data throughput. The abundant hands-on exercises in this practical tutorial will lock in these essential skills for any large-scale data science project.
It's a joy to read, with examples that will help you learn the topic.
Part 1: The map & reduce style
1 A practical parallel introduction
1.1 What you will learn in this book
1.2 What is parallel computing?
1.2.1 Understanding parallel computing
1.2.2 Map & Reduce style as a construction project
1.2.3 When to program in a Map & Reduce style?
1.3 The Map & Reduce style
1.3.1 The map function
1.3.2 The reduce function
1.3.3 Map and Reduce
1.4 Distributed computing
1.7 AWS Elastic MapReduce
2 Parallel basics and web scraping
2.1 An introduction to map
2.1.1 Lazy functions
2.2 Parallel processing
2.2.1 Processors and Processing
2.2.2 Parallelization and Pickling
2.2.3 Order and Parallelization
2.2.4 State and Parallelization
2.3 Putting it all together: Scraping a Wikipedia network
2.3.1 Visualizing our graph
3 Function pipelines for complex data
4 Lazy workflows for big data
5 Basics of reduce
6 Parallel computing with reduce
Part 2: Distributed frameworks
7 Basics of Apache Hadoop and Spark
8 Hadoop MapReduce
9 Map & Reduce style in Spark
10 Distributed machine learning with Spark
Part 3: Massively parallel cloud
11 Cloud computing ecosystem
12 Amazon Elastic MapReduce (EMR)
About the Technology
Python is a data scientist’s dream-come-true, thanks to readily available libraries that support tasks like data analysis, machine learning, visualization, and numerical computing. What’s more, Python’s high-level nature makes for easy-to-read, concise code, which means speedy development and easy maintenance—valuable benefits in the multi-user development environments so prevalent in the realm of big data analysis. Python handles these tasks superbly with features like its built-in map and reduce functions.
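The map and reduce functions mentioned here are the core of the style the book teaches. As a minimal sketch (not taken from the book), map transforms every element of a sequence, and reduce folds the results into a single value:

```python
from functools import reduce

# Square each number lazily with map, then sum the squares with reduce
numbers = [1, 2, 3, 4, 5]
squares = map(lambda n: n * n, numbers)
total = reduce(lambda acc, n: acc + n, squares, 0)
print(total)  # 1 + 4 + 9 + 16 + 25 = 55
```

Because map returns a lazy iterator, no squares are computed until reduce consumes them—one reason the style scales well to large datasets.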
The demand for data scientists is high!
According to a recent IBM study, not only will demand for data scientists skyrocket by 28% by 2020, but those who can confidently work with programming concepts like map/reduce, distributed data technologies like Hadoop and Spark, and cloud platforms like AWS, GCP, and Azure command the highest salaries. With the ever-increasing volume of accessible data, the forecast is bright for data scientists with these valuable skills!
About the book
Python for Big Datasets teaches you to write easily readable, easily scalable Python code that can efficiently process large volumes of structured and unstructured data. With an emphasis on clarity, style, and performance, author J.T. Wolohan expertly guides you through implementing a functionally influenced approach to Python coding. You’ll get familiar with Python’s functional built-ins like the functools, operator, and itertools modules, as well as the toolz library.
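To give a flavor of those standard-library modules, here is a small illustrative sketch (not an excerpt from the book) combining operator, itertools, and functools:

```python
import itertools
import operator
from functools import reduce

words = ["big", "data", "in", "python"]

# operator.add replaces a hand-written lambda for summation
total_chars = reduce(operator.add, map(len, words))

# itertools.accumulate yields running totals lazily
running = list(itertools.accumulate(map(len, words), operator.add))

print(total_chars)  # 15
print(running)      # [3, 7, 9, 15]
```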
You’ll also dive into parallel processing using the standard multiprocessing module, the third-party pathos framework, Apache Hadoop, Apache Spark, and PySpark. You’ll even learn how to use these tools on a cloud platform like AWS. The many hands-on exercises throughout ensure your new knowledge sticks. By the end of this comprehensive guide, you’ll have a solid grasp on the tools and methods that will take your code beyond the laptop and your data science career to the next level!
- An introduction to functional and parallel programming
- Data science workflow
- Profiling code for better performance
- Python multiprocessing
- Practical exercises including full-scale distributed applications
About the author
J.T. Wolohan is a lead data scientist at Booz Allen Hamilton and a PhD researcher at Indiana University, Bloomington, affiliated with the Department of Information and Library Science and the School of Informatics and Computing. His professional work focuses on rapid prototyping and scalable AI. His research focuses on computational analysis of social uses of language online.