Kubernetes in Action
Marko Lukša
  • MEAP began August 2016
  • Publication in Summer 2017 (estimated)
  • ISBN 9781617293726
  • 425 pages (estimated)
  • printed in black & white

Monolithic applications are becoming a thing of the past as we move toward smaller, independently running microservices that can be developed, deployed, updated, and scaled individually. But configuring, managing, and keeping such a system running properly can be difficult. This is where Kubernetes comes in. Think of Kubernetes as an operating system for your cluster: it organizes and schedules your application's components across a fleet of machines. With Kubernetes, you don't have to worry about which specific machine in your data center your application is running on. Even better, it provides primitives for health checking and for replicating your application across those machines. Each layer in your application is decoupled from the others, so you can scale, update, and maintain them independently. With more and more big companies adopting the Kubernetes model as the best way to run apps, it is set to become the standard way of running distributed apps, both in the cloud and on local on-premises infrastructure.

"Loved the examples. In my experience, they match closely with the type of problems I have been dealing with in production deployments over the past 15 years."

~ Antonio Magnaghi

"The book is an incredible resource for everyone who wants to start using Kubernetes."

~ Alessandro Campeis

"The most comprehensive book on Kubernetes in print!"

~ David Di Maria

Table of Contents

Part 1: The Overview

1. Introducing Kubernetes

1.1. Understanding the need for a system like Kubernetes

1.1.1. Moving from monolithic apps to microservices

1.1.2. Providing a consistent environment to applications

1.1.3. Moving to continuous delivery and DevOps

1.2. Introducing container technologies

1.2.1. Understanding what containers are

1.2.2. Introducing the Docker container platform

1.2.3. Introducing rkt - an alternative to Docker

1.3. Introducing Kubernetes

1.3.1. Understanding its origins

1.3.2. Looking at Kubernetes from the top of a mountain

1.3.3. Understanding the architecture of a Kubernetes cluster

1.3.4. Running an application on Kubernetes

1.3.5. Understanding the benefits of using Kubernetes

1.4. Summary

2. First steps with Docker and Kubernetes

2.1. Creating, running and sharing a Docker image

2.1.1. Installing Docker and running a Hello World container

2.1.2. Creating a trivial Node.js app

2.1.3. Creating a Dockerfile for the Docker image

2.1.4. Building the Docker image

2.1.5. Running the Docker image

2.1.6. Exploring the inside of a running container

2.1.7. Stopping and removing a container

2.1.8. Pushing the image to a Docker registry

2.2. Setting up a Kubernetes cluster

2.2.1. Running a local single-node Kubernetes cluster with minikube

2.2.2. Using a hosted Google Container Engine Kubernetes cluster

2.2.3. Setting up an alias and command-line completion for kubectl

2.3. Running our first app on Kubernetes

2.3.1. Deploying our Node.js app

2.3.2. Accessing our web application

2.3.3. The logical parts of our system

2.3.4. Horizontally scaling the application

2.3.5. Examining what nodes our app is running on

2.3.6. Introducing the Kubernetes Dashboard

2.4. Summary

Part 2: Core concepts

3. Deploying containers in Pods

3.1. Introducing pods

3.1.1. Running multiple containers as a unit

3.1.2. Organizing containers across pods properly

3.2. Creating pods from YAML or JSON descriptors

3.2.1. Examining a YAML descriptor of an existing pod

3.2.2. Preparing a simple YAML descriptor for a pod

3.2.3. Using kubectl create to create the pod

3.2.4. Connecting to the pod through port forwarding

3.3. Organizing large numbers of pods through labels

3.3.1. Introducing labels

3.3.2. Setting labels when creating a pod

3.3.3. Modifying labels of existing pods

3.4. Operating on subsets of pods through label selectors

3.5. Annotating pods

3.5.1. Looking up an object’s annotations

3.5.2. Adding and modifying annotations

3.6. Using namespaces to group resources

3.6.1. Discovering other namespaces and their pods

3.6.2. Creating a namespace

3.6.3. Creating objects in other namespaces

3.6.4. Understanding the isolation provided by namespaces

3.7. Constraining the list of nodes a pod is allowed to be scheduled to

3.7.1. Categorizing worker nodes through labels

3.7.2. Scheduling pods to specific types of worker nodes

3.7.3. Preventing pods from being scheduled to a node

3.8. Keeping pods healthy

3.8.1. Introducing liveness probes

3.8.2. Creating an HTTP-based liveness probe

3.8.3. Creating good and effective liveness probes

3.9. Stopping and removing pods

3.10. Summary

4. Replicating pods and keeping them running

4.1. Introducing Replication Controllers

4.1.1. The operation of a replication controller

4.1.2. Parts of a replication controller

4.2. Creating, using and deleting a replication controller

4.2.1. Creating a replication controller

4.2.2. Seeing the replication controller in action

4.2.3. Moving pods in and out of the scope of a replication controller

4.2.4. Changing the pod template

4.2.5. Horizontally scaling pods

4.2.6. Deleting a replication controller

4.3. Using ReplicaSets instead of replication controllers

4.3.1. Comparing a ReplicaSet to a ReplicationController

4.3.2. Creating a ReplicaSet

4.4. Using a DaemonSet to run exactly one instance of a pod on each node

4.5. Running pods that perform a single completable task

4.6. Scheduling jobs to run at some time in the future

4.6.1. Creating a CronJob

4.6.2. Understanding how scheduled jobs are run

4.7. Summary

5. Exposing pods as a service

5.1. Introducing services

5.1.1. Exposing multiple pods through a single address

5.1.2. Discovering services

5.2. Connecting to external services

5.2.1. Introducing service endpoints

5.2.2. Manually configuring service endpoints

5.2.3. Creating an alias for an external service

5.3. Exposing services to external clients

5.3.1. Exposing a service externally by setting an external IP address on it

5.3.2. Exposing a service through a fixed port on all the cluster nodes

5.3.3. Exposing a service externally through a cloud provider’s load balancer

5.4. Exposing services externally through an Ingress object

5.4.1. Creating an Ingress object

5.4.2. Configuring Ingress to handle TLS traffic

5.5. Controlling when a pod is a part of a service

5.5.1. Introducing readiness probes

5.5.2. Adding a readiness probe to a pod

5.6. Using a headless service to discover pods

5.6.1. Creating a headless service

5.6.2. Discovering pods through DNS

5.6.3. Discovering all pods - including those that aren’t ready

5.7. Troubleshooting services

5.8. Summary

6. Sharing disk storage between containers in a Pod

6.1. Understanding Volumes

6.1.1. Using Volumes to enable communication between a pod’s containers

6.1.2. Types of volumes

6.2. Using Volumes to share data between containers

6.2.1. Using an emptyDir volume

6.2.2. Using a Git repository as the starting point for a volume

6.3. Accessing files from the worker node’s filesystem

6.4. Using persistent storage across pods on different nodes

6.4.1. Using a GCE Persistent Disk in a Volume

6.4.2. Using other types of persistent volumes

6.5. Decoupling actual storage with PersistentVolumes and PersistentVolumeClaims

6.6. Dynamic provisioning of persistent volumes

6.6.1. Defining the available storage types with StorageClass objects

6.6.2. Requesting the storage class in a PersistentVolumeClaim

6.7. Summary

7. Passing configuration and sensitive information to containers

7.1. Configuring apps in general

7.2. Passing command-line arguments to containers

7.3. Setting environment variables for a container

7.3.1. Specifying an environment variable in a container definition

7.4. Decoupling configuration with a ConfigMap

7.4.1. Creating a ConfigMap

7.4.2. Passing a ConfigMap entry to containers via an environment variable

7.4.3. Passing a ConfigMap entry as a command-line argument

7.4.4. Using a configMap volume to expose ConfigMap entries as files

7.4.5. Updating an app’s config without having to restart the app

7.5. Using Secrets to pass sensitive data to containers

7.5.1. Exposing a secret as a set of files in a secret volume

7.5.2. Creating a Secret

7.5.3. Comparing ConfigMaps and Secrets

7.5.4. Understanding image pull secrets

7.6. Summary

8. Deploying and updating apps

8.1. Replacing pods with a new version

8.1.1. Deleting old pods and replacing them with new ones afterwards

8.1.2. Spinning up new pods and then deleting the old ones

8.2. Performing an automatic rolling update with kubectl

8.3. Using Deployments for managing apps declaratively

8.3.1. Creating a Deployment

8.3.2. Updating a Deployment

8.3.3. Rolling back a deployment

8.3.4. Controlling the rate of the rollout

8.3.5. Blocking rollouts of bad versions

8.4. Summary

Part 3: Beyond the basics

9. Running clustered stateful apps with StatefulSets

9.1. Running multiple replicas with separate storage and identity for each

9.2. Introducing StatefulSets

9.2.1. Providing a stable network identity

9.2.2. Providing stable dedicated storage to each pet

9.3. Creating and using a StatefulSet

9.3.1. Creating the app and container image

9.3.2. Deploying the app through a StatefulSet

9.3.3. Playing with our pets

9.4. Discovering other members of the StatefulSet

9.4.1. Implementing peer discovery through DNS

9.4.2. Updating a stateful set

9.4.3. Trying out our clustered data store

9.5. Summary

10. Understanding Kubernetes internals

10.1. Understanding the architecture

10.1.1. Using etcd as reliable storage for API objects

10.1.2. About the API server

10.1.3. Introducing the Scheduler

10.1.4. Introducing the controllers running in the Controller Manager

10.1.5. Understanding what the Kubelet does

10.1.6. Understanding the role of the Service Proxy

10.1.7. Bringing it all together

10.2. Understanding how controllers cooperate

10.3. Learning what a running pod actually is

10.4. Understanding inter-pod networking

10.5. Understanding how services are implemented

10.5.1. Understanding how kube-proxy uses iptables

10.6. Summary

11. Accessing cluster metadata from your app

11.1. Passing metadata through the Downward API

11.1.1. Understanding the available metadata

11.1.2. Exposing metadata through environment variables

11.1.3. Passing metadata through files in a downwardAPI volume

11.2. Accessing the Kubernetes API server from within a pod

11.2.1. Finding the API server’s address

11.2.2. Verifying the server’s identity

11.2.3. Authenticating with the API server

11.2.4. Simplifying access to the API server by using a kubectl proxy ambassador container

11.2.5. Using client libraries to talk to the API server

11.3. Understanding service accounts

11.3.1. Understanding the difference between users and service accounts

11.3.2. Creating additional service accounts

11.3.3. Assigning a service account to a pod

11.4. Summary

12. Managing computational resources

12.1. Requesting resources for a pod’s containers

12.1.1. Understanding how resource requests affect scheduling

12.1.2. Understanding how CPU requests affect CPU time sharing

12.1.3. Requesting GPU and custom resources

12.2. Limiting resources

12.2.1. Exceeding the limits

12.3. Understanding Pod QoS classes

12.3.1. Defining the QoS class for a pod

12.3.2. Understanding which process gets killed when memory is low

12.4. Setting default requests and limits for pods per namespace

12.4.1. Creating a LimitRange object

12.4.2. Enforcing the limits

12.4.3. Applying the default resource requests and limits

12.5. Limiting the total resources available in a namespace

12.5.1. Specifying a quota for CPU and memory

12.5.2. Specifying a quota for persistent storage

12.5.3. Limiting the number of objects that can be created

12.5.4. Specifying quotas for specific pod states and/or QoS classes

12.6. Monitoring pod resource usage

12.6.1. Collecting and retrieving actual resource usages

12.6.2. Storing and analyzing historical resource consumption statistics

12.7. Summary

13. Auto-scaling of pods & nodes

13.1. Horizontal pod auto-scaling

13.1.1. Understanding the autoscaling process

13.1.2. Scaling based on CPU utilization

13.1.3. Scaling based on memory consumption

13.1.4. Scaling based on custom metrics

13.1.5. Scaling all the way down to zero replicas

13.2. Vertical pod auto-scaling

13.3. Horizontal scaling of cluster nodes

13.3.1. Introducing the Cluster Autoscaler

13.3.2. Enabling the cluster autoscaler

13.4. Summary

14. Best practices for developing apps

14.1. Bringing everything together

14.2. Understanding the pod’s lifecycle

14.2.1. Applications must expect to be killed and relocated

14.2.2. Rescheduling of dead or half-dead pods

14.2.3. Understanding that the startup order of multiple pods is not configurable

14.2.4. Adding lifecycle hooks

14.2.5. Understanding pod shutdown

14.3. Making sure all client requests are handled properly

14.3.1. Preventing broken client connections when a pod is starting up

14.3.2. Preventing broken connections during pod shutdown

14.4. Making your apps easy to run and manage in Kubernetes

14.4.1. Making good container images

14.4.2. Tagging your images

14.4.3. Using multi-dimensional labels instead of single-dimensional ones

14.4.4. Describing each resource through annotations

14.4.5. Providing information on why the process terminated

14.5. Summary

15. Extending Kubernetes

15.1. Defining custom API objects

15.1.1. Introducing ThirdPartyResources

15.1.2. Automating ThirdPartyResources

15.1.3. Validating custom objects

15.1.4. Providing a custom API server for your custom objects

15.2. Replacing Kubernetes components

15.2.1. Replacing Docker with rkt

15.2.2. Using other container runtimes through the CRI

15.3. Platforms built on top of Kubernetes

15.3.1. Red Hat OpenShift Container Platform

15.3.2. Deis Workflow and Helm

15.4. Summary

About the book

Kubernetes in Action teaches developers how to use Kubernetes to deploy self-healing, scalable, distributed applications. This clearly written guide begins by looking at the problems system administrators and software developers face when running microservice-based applications, and at how deploying onto Kubernetes solves them. Next, you'll get your feet wet by running your first simple containerized web application on a Kubernetes cluster in Google Container Engine. The second part of the book explains the main concepts developers need to understand in order to run multi-component applications on Kubernetes, while the last part explains what goes on inside Kubernetes and teaches you how to tie together everything you've learned in the first two parts. By the end, you'll be able to build and deploy applications that take full advantage of the Kubernetes platform.

What's inside

  • Using Docker and Kubernetes
  • Deploying containers by creating Pods
  • Securely delivering sensitive information to containers
  • Understanding Kubernetes internals
  • Monitoring distributed apps
  • Automatically scaling your system

About the reader

The book is for both application developers and system administrators who want to learn about Kubernetes from the developer's perspective.

About the author

Marko Lukša is a software engineer at Red Hat on the Cloud Enablement Team, whose responsibility is making Red Hat's Enterprise Middleware products run on OpenShift, the PaaS platform built on top of Kubernetes. He also has 15 years of experience teaching others, which helps him understand the learner's perspective and present difficult topics in a clear, understandable way.

Manning Early Access Program (MEAP): Read chapters as they are written, get the finished eBook as soon as it's ready, and receive the pBook long before it's in bookstores.
MEAP combo $49.99 pBook + eBook
MEAP eBook $39.99 pdf + ePub + kindle
