Kubernetes in Action
Marko Lukša
  • MEAP began August 2016
  • Publication in October 2017 (estimated)
  • ISBN 9781617293726
  • 552 pages (estimated)
  • printed in black & white

Monolithic applications are becoming a thing of the past as we move toward smaller, independently running microservices that can be developed, deployed, updated, and scaled individually. But with many small components, it can be difficult to configure, manage, and keep the whole system running properly. This is where Kubernetes comes in. Think of Kubernetes as an operating system for your cluster: it organizes and schedules your application's components across a fleet of machines. With Kubernetes, you don't have to worry about which specific machine in your data center your application is running on. Even better, it provides primitives for health checking and replicating your application across those machines. Each layer of your application is decoupled from the others, so you can scale, update, and maintain them independently. With more and more big companies adopting the Kubernetes model as the best way to run apps, it is set to become the standard way of running distributed applications, both in the cloud and on local on-premises infrastructure.
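The replication and health-checking primitives mentioned above are expressed declaratively. As a rough illustration (the app name, image, and port here are hypothetical, and API versions have evolved since the book was written), a minimal Deployment manifest keeping three healthy replicas might look like:

```yaml
# Hypothetical sketch: a Deployment that keeps three replicas of a web app
# running and restarts any container whose liveness probe fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app            # hypothetical name
spec:
  replicas: 3                 # Kubernetes maintains three copies across the cluster
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
      - name: web
        image: example/my-web-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:        # health check: container is restarted if this fails
          httpGet:
            path: /healthz
            port: 8080
```

Kubernetes continuously reconciles the cluster's actual state with this declared state, regardless of which machines the pods land on.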

"Loved the examples. In my experience, they match closely with the type of problems I have been dealing with in production deployments over the past 15 years."

~ Antonio Magnaghi

"The book is an incredible resource for everyone who wants to start using Kubernetes."

~ Alessandro Campeis

"The most comprehensive book on Kubernetes in print!"

~ David Di Maria

Table of Contents

Part 1: The Overview

1. Introducing Kubernetes

1.1. Understanding the need for a system like Kubernetes

1.1.1. Providing a consistent environment to applications

1.1.2. Moving to continuous delivery and DevOps

1.2. Introducing container technologies

1.2.1. Understanding what containers are

1.2.2. Introducing the Docker container platform

1.2.3. Introducing rkt — an alternative to Docker

1.3. Introducing Kubernetes

1.3.1. Understanding its origins

1.3.2. Looking at Kubernetes from the top of a mountain

1.3.3. Understanding the architecture of a Kubernetes cluster

1.3.4. Running an application on Kubernetes

1.3.5. Understanding the benefits of using Kubernetes

1.4. Summary

2. First steps with Docker and Kubernetes

2.1. Creating, running and sharing a Docker image

2.1.1. Installing Docker and running a Hello World container

2.1.2. Creating a trivial Node.js app

2.1.3. Creating a Dockerfile for the Docker image

2.1.4. Building the Docker image

2.1.5. Running the Docker image

2.1.6. Exploring the inside of a running container

2.1.7. Stopping and removing a container

2.1.8. Pushing the image to a Docker registry

2.2. Setting up a Kubernetes cluster

2.2.1. Running a local single-node Kubernetes cluster with minikube

2.2.2. Using a hosted Google Container Engine Kubernetes cluster

2.2.3. Setting up an alias and command-line completion for kubectl

2.3. Running our first app on Kubernetes

2.3.1. Deploying our Node.js app

2.3.2. Accessing our web application

2.3.3. The logical parts of our system

2.3.4. Horizontally scaling the application

2.3.5. Examining what nodes our app is running on

2.3.6. Introducing the Kubernetes Dashboard

2.4. Summary

Part 2: Core concepts

3. Pods: running containers in Kubernetes

3.1. Introducing pods

3.1.1. Understanding why we need pods

3.1.2. Understanding pods

3.1.3. Organizing containers across pods properly

3.2. Creating pods from YAML or JSON descriptors

3.2.1. Examining a YAML descriptor of an existing pod

3.2.2. Creating a simple YAML descriptor for a pod

3.2.3. Using kubectl create to create the pod

3.2.4. Viewing application logs

3.2.5. Sending requests to the pod

3.3. Organizing pods with labels

3.3.1. Introducing labels

3.3.2. Specifying labels when creating a pod

3.3.3. Modifying labels of existing pods

3.4. Listing subsets of pods through label selectors

3.4.1. Listing pods using a label selector

3.4.2. Using multiple conditions in a label selector

3.5. Using labels and selectors to constrain pod scheduling

3.5.1. Using labels for categorizing worker nodes

3.5.2. Scheduling pods to specific nodes

3.5.3. Scheduling to one specific node

3.6. Annotating pods

3.6.1. Looking up an object's annotations

3.6.2. Adding and modifying annotations

3.7. Using namespaces to group resources

3.7.1. Understanding the need for namespaces

3.7.2. Discovering other namespaces and their pods

3.7.3. Creating a namespace

3.7.4. Managing objects in other namespaces

3.7.5. Understanding the isolation provided by namespaces

3.8. Stopping and removing pods

3.8.1. Deleting a pod by name

3.8.2. Deleting pods using label selectors

3.8.3. Deleting pods by deleting the whole namespace

3.8.4. Deleting all pods in a namespace, while keeping the namespace

3.8.5. Deleting (almost) all resources in a namespace

3.9. Summary

4. ReplicaSets & co.: deploying managed pods

4.1. Keeping pods healthy

4.1.1. Introducing liveness probes

4.1.2. Creating an HTTP-based liveness probe

4.1.3. Seeing a liveness probe in action

4.1.4. Configuring additional properties of the liveness probe

4.1.5. Creating good and effective liveness probes

4.2. Introducing Replication Controllers

4.2.1. The operation of a replication controller

4.2.2. Creating a replication controller

4.2.3. Seeing the replication controller in action

4.2.4. Moving pods in and out of the scope of a replication controller

4.2.5. Changing the pod template

4.2.6. Horizontally scaling pods

4.2.7. Deleting a replication controller

4.3. Using ReplicaSets instead of replication controllers

4.3.1. Comparing a ReplicaSet to a ReplicationController

4.3.2. Defining a ReplicaSet

4.3.3. Creating and examining a ReplicaSet

4.3.4. Using the ReplicaSet's more expressive label selectors

4.3.5. Wrapping up ReplicaSets

4.4. Running exactly one pod on each node with DaemonSets

4.4.1. Using a DaemonSet for running a pod on every node

4.4.2. Using a DaemonSet to run pods only on some nodes

4.5. Running pods that perform a single completable task

4.5.1. Introducing the Job resource

4.5.2. Defining a Job

4.5.3. Seeing a Job run a pod

4.5.4. Running multiple pod instances in a Job

4.5.5. Limiting the time allowed for a job pod to complete

4.6. Scheduling jobs to run periodically or once in the future

4.6.1. Creating a CronJob

4.6.2. Understanding how scheduled jobs are run

4.7. Summary

5. Services: enabling clients to discover and talk to pods

5.1. Introducing services

5.1.1. Creating services

5.1.2. Discovering services

5.2. Connecting to services living outside the cluster

5.2.1. Introducing service endpoints

5.2.2. Manually configuring service endpoints

5.2.3. Creating an alias for an external service

5.3. Exposing services to external clients

5.3.1. Using a NodePort service

5.3.2. Exposing a service through an external load balancer

5.3.3. Understanding the peculiarities of external connections

5.4. Exposing services externally through an Ingress resource

5.4.1. Creating an Ingress resource

5.4.2. Accessing the service through the ingress

5.4.3. Exposing multiple services through the same domain name

5.4.4. Configuring Ingress to handle TLS traffic

5.5. Signaling when a pod is ready to accept connections

5.5.1. Introducing readiness probes

5.5.2. Adding a readiness probe to a pod

5.5.3. Understanding how real-world readiness probes should work

5.6. Using a headless service for discovering individual pods

5.6.1. Creating a headless service

5.6.2. Discovering pods through DNS

5.6.3. Discovering all pods - including those that aren't ready

5.7. Troubleshooting services

5.8. Summary

6. Volumes: attaching disk storage to containers

6.1. Introducing Volumes

6.1.1. Explaining volumes in an example

6.1.2. Introducing available volume types

6.2. Using volumes to share data between containers

6.2.1. Using an emptyDir volume

6.2.2. Using a Git repository as the starting point for a volume

6.3. Accessing files on the worker node's filesystem

6.3.1. Introducing the hostPath volume

6.3.2. Examining system pods that use hostPath volumes

6.4. Using persistent storage

6.4.1. Using a GCE Persistent Disk in a pod volume

6.4.2. Using other types of volumes with underlying persistent storage

6.5. Decoupling pods from the underlying storage technology

6.5.1. Introducing PersistentVolumes and PersistentVolumeClaims

6.5.2. Creating a PersistentVolume

6.5.3. Claiming a PersistentVolume by creating a PersistentVolumeClaim

6.5.4. Using a PersistentVolumeClaim in a pod

6.5.5. Understanding the benefits of using persistent volumes and claims

6.5.6. Recycling persistent volumes

6.6. Dynamic provisioning of persistent volumes

6.6.1. Defining the available storage types through StorageClass resources

6.6.2. Requesting the storage class in a PersistentVolumeClaim

6.6.3. Dynamic provisioning without specifying a storage class

6.7. Summary

7. ConfigMaps & Secrets: configuring applications

7.1. Configuring apps in general

7.2. Passing command-line arguments to containers

7.2.1. Defining the command and arguments in Docker

7.3. Setting environment variables for a container

7.3.1. Specifying an environment variable in a container definition

7.4. Decoupling configuration with a ConfigMap

7.4.1. Creating a ConfigMap

7.4.2. Passing a ConfigMap entry to a container as an environment variable

7.4.3. Passing all entries of a ConfigMap as environment variables at once

7.4.4. Passing a ConfigMap entry as a command-line argument

7.4.5. Using a configMap volume to expose ConfigMap entries as files

7.4.6. Updating an app's config without having to restart the app

7.5. Using Secrets to pass sensitive data to containers

7.5.1. Exposing a secret as a set of files in a secret volume

7.5.2. Creating a Secret

7.5.3. Comparing ConfigMaps and Secrets

7.5.4. Understanding image pull secrets

7.6. Summary

8. Accessing pod metadata and other resources from applications

8.1. Passing metadata through the Downward API

8.1.1. Understanding the available metadata

8.1.2. Exposing metadata through environment variables

8.1.3. Passing metadata through files in a downwardAPI volume

8.2. Talking to the Kubernetes API server

8.2.1. Exploring the Kubernetes REST API

8.2.2. Talking to the API server from within a pod

8.2.3. Simplifying API server communication with ambassador containers

8.2.4. Using client libraries to talk to the API server

8.3. Summary

9. Deployments: updating applications declaratively

9.1. Replacing pods with a new version

9.1.1. Deleting old pods and replacing them with new ones afterwards

9.1.2. Spinning up new pods and then deleting the old ones

9.2. Performing an automatic rolling update with kubectl

9.3. Using Deployments for managing apps declaratively

9.3.1. Creating a Deployment

9.3.2. Updating a Deployment

9.3.3. Rolling back a deployment

9.3.4. Controlling the rate of the rollout

9.3.5. Blocking rollouts of bad versions

9.4. Summary

Part 3: Beyond the basics

10. StatefulSets: deploying replicated stateful applications

10.1. Running multiple replicas with separate storage and identity for each

10.2. Introducing StatefulSets

10.2.1. Providing a stable network identity

10.2.2. Providing stable dedicated storage to each pet

10.3. Creating and using a StatefulSet

10.3.1. Creating the app and container image

10.3.2. Deploying the app through a StatefulSet

10.3.3. Playing with our pets

10.4. Discovering other members of the StatefulSet

10.4.1. Implementing peer discovery through DNS

10.4.2. Updating a stateful set

10.4.3. Trying out our clustered data store

10.5. Summary

11. Understanding Kubernetes internals

11.1. Understanding the architecture

11.1.1. Understanding the distributed nature of Kubernetes components

11.1.2. Understanding how Kubernetes uses etcd

11.1.3. Understanding what the API server does

11.1.4. Understanding how the API server notifies clients of resource changes

11.1.5. Understanding the Scheduler

11.1.6. Introducing the controllers running in the Controller Manager

11.1.7. Understanding what the Kubelet does

11.1.8. Understanding the role of the Service Proxy

11.1.9. Introducing Kubernetes add-ons

11.1.10. Bringing it all together

11.2. Understanding how controllers cooperate

11.2.1. Understanding which components are involved

11.2.2. Understanding the chain of events

11.2.3. Observing cluster events

11.3. Exploring what a running pod is

11.4. Understanding inter-pod networking

11.4.1. Understanding what the network must be like

11.4.2. Diving deeper into how networking works

11.4.3. Introducing the Container Network Interface

11.5. Understanding how services are implemented

11.5.1. Introducing the kube-proxy

11.5.2. Understanding how kube-proxy uses iptables

11.6. Understanding highly-available clusters

11.6.1. Making your apps highly available

11.6.2. Making Kubernetes Control Plane components highly available

11.7. Summary

12. Securing clusters using authentication and authorization

12.1. Understanding authentication

12.1.1. Understanding users and groups

12.1.2. Introducing service accounts

12.1.3. Creating service accounts

12.1.4. Assigning a service account to a pod

12.2. Securing the cluster with Role Based Access Control

12.2.1. Introducing the RBAC authorization plugin

12.2.2. Introducing RBAC resources

12.2.3. Using Roles and RoleBindings

12.2.4. Using ClusterRoles and ClusterRoleBindings

12.2.5. Understanding default ClusterRoles and ClusterRoleBindings

12.2.6. Granting authorization permissions wisely

12.3. Summary

13. Securing cluster nodes and the network

13.1. Using the host node’s namespaces in a pod

13.1.1. Using the node’s network namespace in a pod

13.1.2. Binding to a host port without using the host’s network namespace

13.1.3. Using the node’s PID and IPC namespaces

13.2. Configuring the container’s security context

13.2.1. Running a container as a specific user

13.2.2. Preventing a container from running as root

13.2.3. Running pods in privileged mode

13.2.4. Adding individual Kernel capabilities to a container

13.2.5. Dropping capabilities from a container

13.2.6. Preventing processes from writing to the container’s filesystem

13.2.7. Sharing volumes when containers run as different users

13.3. Restricting the use of security-related features in pods

13.3.1. Introducing the PodSecurityPolicy resource

13.3.2. Understanding runAsUser, fsGroup and supplementalGroups policies

13.3.3. Configuring allowed, default and disallowed capabilities

13.3.4. Constraining the types of volumes pods can use

13.3.5. Assigning different PodSecurityPolicies to different users and groups

13.4. Isolating the pod network

13.4.1. Enabling network isolation in a namespace

13.4.2. Allowing only some pods in the namespace to connect to a server pod

13.4.3. Isolating the network between Kubernetes namespaces

13.5. Summary

14. Managing computational resources

14.1. Requesting resources for a pod’s containers

14.1.1. Creating pods with resource requests

14.1.2. Understanding how resource requests affect scheduling

14.1.3. Understanding how CPU requests affect CPU time sharing

14.1.4. Defining and requesting custom resources

14.2. Limiting resources available to a container

14.2.1. Setting a hard limit for the amount of resources a container can use

14.2.2. Exceeding the limits

14.2.3. Understanding how apps in containers see limits

14.3. Understanding PodQoS classes

14.3.1. Defining the QoS class for a pod

14.3.2. Understanding which process gets killed when memory is low

14.4. Setting default requests and limits for pods per namespace

14.4.1. Introducing the LimitRange resource

14.4.2. Creating a LimitRange object

14.4.3. Enforcing the limits

14.4.4. Applying default resource requests and limits

14.5. Limiting the total resources available in a namespace

14.5.1. Introducing the ResourceQuota resource

14.5.2. Specifying a quota for persistent storage

14.5.3. Limiting the number of objects that can be created

14.5.4. Specifying quotas for specific pod states and/or QoS classes

14.6. Monitoring pod resource usage

14.6.1. Collecting and retrieving actual resource usages

14.6.2. Storing and analyzing historical resource consumption statistics

14.7. Summary

15. Automatic scaling of pods & cluster nodes

15.1. Horizontal pod auto-scaling

15.1.1. Understanding the autoscaling process

15.1.2. Scaling based on CPU utilization

15.1.3. Scaling based on memory consumption

15.1.4. Scaling all the way down to zero replicas

15.2. Vertical pod auto-scaling

15.3. Horizontal scaling of cluster nodes

15.3.1. Preventing pods from being scheduled to a node

15.3.2. Introducing the Cluster Autoscaler

15.3.3. Enabling the cluster autoscaler

15.3.4. Limiting pod disruption during cluster scale-down

15.4. Summary

16. Advanced scheduling

16.1. Using taints and tolerations to repel pods from some nodes

16.1.1. Introducing taints and tolerations

16.1.2. Adding custom taints to a node

16.1.3. Adding tolerations to Pods

16.1.4. Understanding what taints and tolerations can be used for

16.2. Using node affinity to attract pods to certain nodes

16.2.1. Specifying hard node affinity rules

16.2.2. Prioritizing nodes when scheduling a Pod

16.3. Co-locating pods with pod affinity and anti-affinity

16.3.1. Using inter-pod affinity to deploy Pods on the same node

16.3.2. Deploying pods in the same rack, availability zone or geographic region

16.3.3. Expressing Pod affinity preferences instead of hard requirements

16.3.4. Scheduling pods away from each other with pod anti-affinity

16.4. Summary

17. Best practices for developing apps

17.1. Bringing everything together

17.2. Understanding the pod's lifecycle

17.2.1. Applications must expect to be killed and relocated

17.2.2. Rescheduling of dead or partially dead pods

17.2.3. Adding lifecycle hooks

17.2.4. Understanding pod shutdown

17.3. Making sure all client requests are handled properly

17.3.1. Preventing broken client connections when a pod is starting up

17.3.2. Preventing broken connections during pod shutdown

17.3.3. Making good container images

17.3.4. Properly tagging your images and using imagePullPolicy wisely

17.3.5. Using multi-dimensional instead of single-dimensional labels

17.3.6. Describing each resource through annotations

17.3.7. Providing information on why the process terminated

17.4. Best practices for development and testing

17.4.1. Running apps outside of Kubernetes during development

17.4.2. Using minikube as the next step in development

17.4.3. Versioning resource manifests

17.4.4. Employing Continuous Integration and Continuous Delivery (CI/CD)

17.5. Summary

18. Extending Kubernetes

18.1. Defining custom API objects

18.1.1. Introducing ThirdPartyResources

18.1.2. Automating ThirdPartyResources

18.1.3. Validating custom objects

18.2. Replacing Kubernetes components

18.2.1. Replacing Docker with rkt

18.2.2. Using other container runtimes through the CRI

18.3. Platforms built on top of Kubernetes

18.3.1. Red Hat OpenShift Container Platform

18.4. Summary

Appendixes

Appendix A: Using kubectl with multiple clusters

A.1. Switching between minikube and Google Container Engine

A.2. Using kubectl with multiple clusters or namespaces

A.2.1. Configuring the location of the kubeconfig file

A.2.2. Understanding the contents of the kubeconfig file

A.2.3. Listing, adding and modifying kubeconfig entries

A.2.4. Switching between contexts

A.2.5. Listing contexts and clusters

A.2.6. Cleaning up contexts and clusters

Appendix B: Setting up a multi-node cluster with kubeadm

B.1. Setting up the OS and required packages

B.1.1. Creating the virtual machine

B.1.2. Configuring the network adapter for the VM

B.1.3. Installing the operating system

B.1.4. Installing Docker and Kubernetes

B.1.5. Cloning the VM

B.2. Configuring the master with kubeadm

B.3. Configuring worker nodes with kubeadm

B.4. Setting up the container network

B.5. Using the cluster from our local machine

Appendix C: Using other Container Runtimes

C.1. Replacing Docker with rkt

C.1.1. Configuring Kubernetes to use rkt

C.1.2. Trying out rkt with Minikube

C.2. Using other container runtimes through the CRI

C.2.1. Introducing the CRI-O Container Runtime

C.2.2. Running apps in Virtual Machines instead of Containers

Appendix D: Cluster federation

D.1. Introducing Kubernetes Cluster Federation

D.2. Understanding the architecture

D.3. Understanding federated API objects

D.4. Achieving high availability through federated clusters

About the book

Kubernetes in Action teaches developers how to use Kubernetes to deploy self-healing, scalable distributed applications. This clearly written guide begins by looking at the problems system administrators and software developers face when running microservice-based applications, and how deploying onto Kubernetes solves them. Next, you'll get your feet wet by running your first simple containerized web application on a Kubernetes cluster in Google Container Engine. The second part of the book explains the main concepts developers need to understand to run multi-component applications in Kubernetes, while the last part explains what goes on inside Kubernetes and teaches you to tie together everything you've learned in the first two parts. By the end, you'll be able to build and deploy applications that take full advantage of the Kubernetes platform.

What's inside

  • Using Docker and Kubernetes
  • Deploying containers by creating Pods
  • Securely delivering sensitive information to containers
  • Understanding Kubernetes internals
  • Monitoring distributed apps
  • Automatically scaling your system

About the reader

The book is for both application developers and system administrators who want to learn about Kubernetes from the developer's perspective.

About the author

Marko Lukša is a software engineer at Red Hat on the Cloud Enablement Team, whose responsibility is to make Red Hat's Enterprise Middleware products run on OpenShift, the PaaS platform built on top of Kubernetes. He also has 15 years of experience teaching others, which has helped him understand the learner's perspective and present difficult topics in a clear and understandable way.

Manning Early Access Program (MEAP) Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.
MEAP combo $54.99 pBook + eBook + liveBook
MEAP eBook $43.99 pdf + ePub + kindle + liveBook

FREE domestic shipping on three or more pBooks