Exploring Kubernetes
With chapters selected by Marko Lukša
  • September 2018
  • ISBN 9781617295539
  • 151 pages
Kubernetes is Greek for "helmsman," the skilled pilot who guides a ship through challenging waters. Like its namesake, the Kubernetes container orchestration system safely manages the many components of a distributed application, organizing the structure and flow of containers and services for maximum efficiency. Kubernetes serves as an operating system for your container clusters, eliminating the need to factor the underlying network and server infrastructure into your designs. Kubernetes can be a powerful ally in your battle against application complexity, and this free eBook is the perfect place to get started!

Exploring Kubernetes is a collection of chapters from Kubernetes in Action by Marko Lukša, a member of Red Hat's Cloud Enablement Team. In it, you'll take your first steps in the world of Kubernetes and learn how to use it with Docker to control your clusters. You'll then discover everything you need to work with pods, the co-located groups of containers that Kubernetes manages, including how to manage them and keep them healthy. Finally, you'll see how to set up services for your pods and expose them to both internal and external clients. When you finish, you'll have a solid overview of Kubernetes and how to get started with it quickly, opening the door for you to continue your learning and bring it into your daily work!
Table of Contents

2 First steps with Docker and Kubernetes

2.1. Creating, running, and sharing a container image

2.1.1 Installing Docker and running a Hello World container

2.1.2 Creating a trivial Node.js app

2.1.3 Creating a Dockerfile for the image

2.1.4 Building the container image

2.1.5 Running the container image

2.1.6 Exploring the inside of a running container

2.1.7 Stopping and removing a container

2.1.8 Pushing the image to an image registry

2.2. Setting up a Kubernetes cluster

2.2.1 Running a local single-node Kubernetes cluster with Minikube

2.2.2 Using a hosted Kubernetes cluster with Google Kubernetes Engine

2.2.3 Setting up an alias and command-line completion for kubectl

2.3. Running your first app on Kubernetes

2.3.1 Deploying your Node.js app

2.3.2 Accessing your web application

2.3.3 The logical parts of your system

2.3.4 Horizontally scaling the application

2.3.5 Examining what nodes your app is running on

2.3.6 Introducing the Kubernetes dashboard

2.4. Summary
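
Chapter 2 takes you from a trivial Node.js app packaged as a Docker image to that image running on a Minikube or GKE cluster. As a rough sketch of where the chapter ends up (it deploys the app with imperative kubectl commands rather than a manifest, and the pod name and image name below are placeholders rather than the book's listings), a minimal pod definition for such an app might look like this:

apiVersion: v1
kind: Pod                    # the smallest deployable unit Kubernetes works with
metadata:
  name: kubia                # hypothetical pod name
spec:
  containers:
  - name: kubia
    image: yourrepo/kubia    # placeholder for the image built and pushed in section 2.1
    ports:
    - containerPort: 8080    # the port the Node.js app listens on

You would submit this with kubectl create -f; scaling out to multiple instances, as in section 2.3.4, is then the job of a controller (covered in chapter 4) rather than of the pod itself.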

3 Pods: running containers in Kubernetes

3.1. Introducing pods

3.1.1 Understanding why we need pods

3.1.2 Understanding pods

3.1.3 Organizing containers across pods properly

3.2. Creating pods from YAML or JSON descriptors

3.2.1 Examining a YAML descriptor of an existing pod

3.2.2 Creating a simple YAML descriptor for a pod

3.2.3 Using kubectl create to create the pod

3.2.4 Viewing application logs

3.2.5 Sending requests to the pod

3.3. Organizing pods with labels

3.3.1 Introducing labels

3.3.2 Specifying labels when creating a pod

3.3.3 Modifying labels of existing pods

3.4. Listing subsets of pods through label selectors

3.4.1 Listing pods using a label selector

3.4.2 Using multiple conditions in a label selector

3.5. Using labels and selectors to constrain pod scheduling

3.5.1 Using labels for categorizing worker nodes

3.5.2 Scheduling pods to specific nodes

3.5.3 Scheduling to one specific node

3.6. Annotating pods

3.6.1 Looking up an object’s annotations

3.6.2 Adding and modifying annotations

3.7. Using namespaces to group resources

3.7.1 Understanding the need for namespaces

3.7.2 Discovering other namespaces and their pods

3.7.3 Creating a namespace

3.7.4 Managing objects in other namespaces

3.7.5 Understanding the isolation provided by namespaces

3.8. Stopping and removing pods

3.8.1 Deleting a pod by name

3.8.2 Deleting pods using label selectors

3.8.3 Deleting pods by deleting the whole namespace

3.8.4 Deleting all pods in a namespace, while keeping the namespace

3.8.5 Deleting (almost) all resources in a namespace

3.9. Summary
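
To give a feel for the metadata that chapter 3 revolves around, here is a hedged sketch of a pod combining labels, a namespace, and a node selector; all names, labels, and the gpu=true node label are illustrative assumptions, not listings from the book:

apiVersion: v1
kind: Pod
metadata:
  name: kubia-gpu                # hypothetical pod name
  namespace: custom-namespace    # section 3.7: assumes this namespace has been created
  labels:                        # free-form key/value metadata (section 3.3)
    app: kubia
    env: prod
spec:
  nodeSelector:                  # section 3.5: schedule only onto nodes labeled gpu=true
    gpu: "true"
  containers:
  - name: kubia
    image: yourrepo/kubia        # placeholder image
    ports:
    - containerPort: 8080

Once pods carry labels, label selectors such as kubectl get pods -l app=kubia (section 3.4) or kubectl delete pods -l env=prod (section 3.8.2) let you list or delete whole subsets of pods in one command.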

4 Replication and other controllers: deploying managed pods

4.1. Keeping pods healthy

4.1.1 Introducing liveness probes

4.1.2 Creating an HTTP-based liveness probe

4.1.3 Seeing a liveness probe in action

4.1.4 Configuring additional properties of the liveness probe

4.1.5 Creating effective liveness probes

4.2. Introducing ReplicationControllers

4.2.1 The operation of a ReplicationController

4.2.2 Creating a ReplicationController

4.2.3 Seeing the ReplicationController in action

4.2.4 Moving pods in and out of the scope of a ReplicationController

4.2.5 Changing the pod template

4.2.6 Horizontally scaling pods

4.2.7 Deleting a ReplicationController

4.3. Using ReplicaSets instead of ReplicationControllers

4.3.1 Comparing a ReplicaSet to a ReplicationController

4.3.2 Defining a ReplicaSet

4.3.3 Creating and examining a ReplicaSet

4.3.4 Using the ReplicaSet’s more expressive label selectors

4.3.5 Wrapping up ReplicaSets

4.4. Running exactly one pod on each node with DaemonSets

4.4.1 Using a DaemonSet to run a pod on every node

4.4.2 Using a DaemonSet to run pods only on certain nodes

4.5. Running pods that perform a single completable task

4.5.1 Introducing the Job resource

4.5.2 Defining a Job resource

4.5.3 Seeing a Job run a pod

4.5.4 Running multiple pod instances in a Job

4.5.5 Limiting the time allowed for a Job pod to complete

4.6. Scheduling Jobs to run periodically or once in the future

4.6.1 Creating a CronJob

4.6.2 Understanding how scheduled jobs are run

4.7. Summary
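
Chapter 4's controllers all build on the same idea: a label selector plus a pod template, reconciled to a desired replica count. A minimal sketch combining a ReplicaSet (section 4.3) with an HTTP liveness probe (section 4.1) might look like the following; the names, image, and probe settings are illustrative assumptions, and the apps/v1 API group shown here is the current stable one, which may differ from the version used in the book's listings:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia                    # hypothetical name
spec:
  replicas: 3                    # desired number of pod copies (sections 4.2.6 and 4.3)
  selector:
    matchLabels:
      app: kubia                 # the pods this ReplicaSet is responsible for
  template:                      # pod template used whenever a new pod must be created
    metadata:
      labels:
        app: kubia               # must match the selector above
    spec:
      containers:
      - name: kubia
        image: yourrepo/kubia    # placeholder image
        livenessProbe:           # section 4.1: the container is restarted if this check keeps failing
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15   # give the app time to start before the first probe

A ReplicationController (section 4.2) is defined almost identically but supports only equality-based selectors, while DaemonSets, Jobs, and CronJobs reuse the same pod-template pattern with different scheduling and completion semantics.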

5 Services: enabling clients to discover and talk to pods

5.1. Introducing services

5.1.1 Explaining services with an example

5.1.2 Creating services

5.1.3 Discovering services

5.2. Connecting to services living outside the cluster

5.2.1 Introducing service endpoints

5.2.2 Manually configuring service endpoints

5.2.3 Creating an alias for an external service

5.3. Exposing services to external clients

5.3.1 Using a NodePort service

5.3.2 Exposing a service through an external load balancer

5.3.3 Understanding the peculiarities of external connections

5.4. Exposing services externally through an Ingress resource

5.4.1 Understanding why Ingresses are needed

5.4.2 Understanding that an Ingress controller is required

5.4.3 Creating an Ingress resource

5.4.4 Accessing the service through the Ingress

5.4.5 Exposing multiple services through the same Ingress

5.4.6 Configuring Ingress to handle TLS traffic

5.5. Signaling when a pod is ready to accept connections

5.5.1 Introducing readiness probes

5.5.2 Adding a readiness probe to a pod

5.5.3 Understanding what real-world readiness probes should do

5.6. Using a headless service for discovering individual pods

5.6.1 Creating a headless service

5.6.2 Discovering pods through DNS

5.6.3 Discovering all pods—even those that aren't ready

5.7. Troubleshooting services

5.8. Summary
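
Chapter 5 is about putting a stable, discoverable front in front of a constantly changing set of pods. As a hedged sketch (the service name, label, and node port are illustrative assumptions, not the book's listings), a NodePort service like the one in section 5.3.1 that forwards external traffic to pods labeled app=kubia could be defined like this:

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport           # hypothetical service name
spec:
  type: NodePort                 # reserve the same port on every cluster node (section 5.3.1)
  selector:
    app: kubia                   # forward traffic to pods carrying this label
  ports:
  - port: 80                     # port on the service's internal cluster IP
    targetPort: 8080             # port the container accepts traffic on
    nodePort: 30123              # externally reachable port on each node (must fall in 30000-32767)

Dropping type: NodePort leaves you with a regular ClusterIP service reachable only from inside the cluster, while type: LoadBalancer (section 5.3.2) and Ingress resources (section 5.4) provide progressively more capable ways to expose it externally.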

What's inside

This book contains the following chapters from Kubernetes in Action by Marko Lukša:
  • First steps with Docker and Kubernetes
  • Pods: running containers in Kubernetes
  • Replication and other controllers: deploying managed pods
  • Services: enabling clients to discover and talk to pods
The complete edition of Kubernetes in Action is available now in print, eBook, and liveBook formats. Use code learnkub50 at checkout to save 50% when you purchase it at manning.com.

About the author

Marko Lukša is a software engineer with more than 20 years of professional experience developing everything from simple web applications to full ERP systems, frameworks, and middleware software. He is also the author of Kubernetes in Action (Manning, 2017).
