John Wolohan

J.T. Wolohan is a senior artificial intelligence and natural language processing architect at Booz Allen Hamilton. He has taught programming to learners of all levels, from elementary and middle school students to graduate students and professionals. In addition to his interests in distributed and parallel computing, J.T. enjoys running, cooking, and spending time with his family.

Books by John Wolohan

Object Storage Across the Cloud

  • February 2020
  • ISBN 9781617297786
  • 125 pages

Object Storage Across the Cloud is a collection of chapters from four Manning books, chosen by data scientist J.T. Wolohan, with the goal of helping you become comfortable developing with object storage, no matter which provider you choose. This mini ebook explores choosing the right storage class, access control and lifecycle configuration, and several common use cases. You’ll delve into the internals of the AWS S3 object store and use this highly popular system to host a website. In a chapter on Microsoft Azure’s Blob Storage, you’ll learn about Azure naming conventions, choosing an Azure storage service, creating an Azure storage account, and designing storage account access. And to put your newfound object storage knowledge to the test, you’ll use object storage to power big data analytics as you run both Hadoop and Spark jobs in the cloud using Amazon EMR. This packed primer is an excellent showcase of object storage, its many uses and benefits, and how to choose the best platform for your task!
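
To give a flavor of the kind of work the book covers, here is a minimal sketch of writing an object to AWS S3 and sharing it with a time-limited link. It assumes the boto3 library is installed, AWS credentials are already configured, and a bucket named example-bucket (a placeholder) exists; the Azure and EMR chapters use their own SDKs and services.

  import boto3

  # Placeholder bucket name; substitute a bucket you own.
  BUCKET = "example-bucket"

  s3 = boto3.client("s3")

  # Upload a small HTML page as an object, setting its content type
  # so browsers render it rather than download it.
  s3.put_object(
      Bucket=BUCKET,
      Key="index.html",
      Body=b"<h1>Hello from object storage</h1>",
      ContentType="text/html",
  )

  # Generate a presigned URL that grants read access for one hour.
  url = s3.generate_presigned_url(
      "get_object",
      Params={"Bucket": BUCKET, "Key": "index.html"},
      ExpiresIn=3600,
  )
  print(url)

The same pattern of writing an object and then controlling who may read it carries over to Azure Blob Storage, only the SDK calls differ.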

Mastering Large Datasets with Python

  • January 2020
  • ISBN 9781617296239
  • 312 pages
  • printed in black & white
  • Available translations: Simplified Chinese

Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You’ll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You’ll then scale those same programs to industrial-sized datasets on a cluster of cloud servers. With the map and reduce paradigm firmly in place, you’ll explore tools like Hadoop and PySpark to efficiently process massive distributed datasets, speed up decision-making with machine learning, and simplify your data storage with AWS S3.
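
The map and reduce pattern at the heart of the book can be previewed in pure Python before any cluster is involved. The sketch below, using only the standard library, counts words across several text files in parallel and then combines the partial counts; the file names are placeholders, and the book scales this same shape of program up to Hadoop and PySpark.

  from collections import Counter
  from functools import reduce
  from multiprocessing import Pool


  def count_words(path):
      """Map step: produce a word count for a single file."""
      with open(path) as f:
          return Counter(f.read().split())


  def merge_counts(left, right):
      """Reduce step: combine two partial word counts."""
      return left + right


  if __name__ == "__main__":
      files = ["book1.txt", "book2.txt", "book3.txt"]  # placeholder paths

      # Map in parallel across worker processes, then reduce the results.
      with Pool() as pool:
          partials = pool.map(count_words, files)
      totals = reduce(merge_counts, partials)

      print(totals.most_common(10))

Because the work is expressed as a map over independent inputs followed by a reduce over the partial results, the same two functions translate naturally to distributed tools like PySpark when the data outgrows a single machine.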