Overview

17 Deploying Per-Node Workloads with DaemonSets

DaemonSets run exactly one Pod replica on each eligible node, making them ideal for per-node infrastructure such as log collectors, metrics agents, kube-proxy, and CNI plugins. Unlike Deployments, you don’t specify a replica count; the controller reconciles against the node list, creating one Pod per node and reacting to node joins/leaves and stray Pods. You can scope placement to certain nodes with a node selector, and, because control-plane nodes are usually tainted, add tolerations if the daemon must run there. This unified, Kubernetes-native approach replaces ad hoc node-level installation methods so you manage system daemons the same way as application workloads.
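The structure described above can be sketched as a minimal DaemonSet manifest. The `demo-agent` name and image are hypothetical placeholders, not from the chapter; note that there is no `replicas` field, and that the label selector must match the Pod template's labels.

```yaml
# Minimal DaemonSet sketch; name and image are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-agent
spec:
  selector:
    matchLabels:
      app: demo-agent            # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: demo-agent
    spec:
      containers:
      - name: agent
        image: example.com/demo-agent:1.0   # hypothetical image
```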

A DaemonSet defines a label selector and a Pod template; the controller injects nodeAffinity into each Pod so the scheduler targets a specific node. Its status reports node-oriented counts (desired, current, ready, available, updated, misscheduled), reflecting that updates may briefly run old/new Pods side-by-side on a node. You can dynamically move nodes into or out of scope by changing node labels, use standard labels (kubernetes.io/arch, kubernetes.io/os) for heterogeneous clusters, or split multi-arch images across multiple DaemonSets. Updates support RollingUpdate (defaults maxSurge=0, maxUnavailable=1, governed by minReadySeconds) for safe, one-node-at-a-time replacement, or OnDelete for manual, high-control rollouts of cluster-critical daemons; deleting the DaemonSet removes its Pods.
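The update behavior described above maps to the `updateStrategy` section of the DaemonSet spec. A sketch with the default `RollingUpdate` parameters (the `minReadySeconds` value here is illustrative, not a default):

```yaml
# Update strategy fragment of a DaemonSet spec.
spec:
  minReadySeconds: 15            # illustrative: Pod must stay ready this long before the rollout continues
  updateStrategy:
    type: RollingUpdate          # or OnDelete for manual, high-control rollouts
    rollingUpdate:
      maxSurge: 0                # default: never run old and new Pods together on a node
      maxUnavailable: 1          # default: replace Pods one node at a time
```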

Node agents often need elevated host access. You can grant full privileges (privileged: true) or minimal kernel access via Linux capabilities, mount hostPath volumes to reach host files (for example, kernel modules or locks), and optionally share host namespaces (hostNetwork, and if required hostIPC/hostPID). Critical daemons should use PriorityClasses (for example, system-node-critical) so they preempt lower-priority workloads when resources are tight. For node-local client communication, three patterns are shown: bind a container port to a hostPort and let clients discover the node IP via the Downward API; run with hostNetwork and bind directly to a host port; or, preferably, expose the daemon through a ClusterIP Service configured with internalTrafficPolicy=Local, which routes each client Pod to the agent on its own node without opening node ports to the outside.
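The preferred third pattern can be sketched as a ClusterIP Service with `internalTrafficPolicy: Local`; the Service name, label, and port are hypothetical, and the selector assumes the daemon Pods carry a matching label:

```yaml
# Routes each client Pod only to the daemon Pod on its own node.
apiVersion: v1
kind: Service
metadata:
  name: demo-agent          # hypothetical name
spec:
  selector:
    app: demo-agent         # assumes the daemon Pods carry this label
  internalTrafficPolicy: Local
  ports:
  - port: 9090              # illustrative port
    targetPort: 9090
```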

The chapter also illustrates these topics:

  • DaemonSets run a Pod replica on each node, whereas ReplicaSets scatter them around the cluster.
  • The DaemonSet controller’s reconciliation loop
  • A node selector is used to deploy DaemonSet Pods on a subset of cluster nodes.
  • How do we get client Pods to talk only to the locally running daemon Pod?
  • Exposing a daemon Pod via a host port
  • Exposing a daemon Pod by using the host node’s network namespace
  • Exposing daemon Pods via a Service with internal traffic policy set to Local

Summary

  • A DaemonSet object represents a set of daemon Pods distributed across the cluster Nodes so that exactly one daemon Pod instance runs on each node.
  • A DaemonSet is used to deploy daemons and agents that provide system-level services such as log collection, process monitoring, node configuration, and other services required by each cluster Node.
  • When you add a node selector to a DaemonSet, the daemon Pods are deployed only on a subset of all cluster Nodes.
  • A DaemonSet doesn't deploy Pods to control plane Nodes unless you configure the Pod to tolerate the Nodes' taints.
  • The DaemonSet controller ensures that a new daemon Pod is created when a new Node is added to the cluster, and that it’s removed when a Node is removed.
  • Daemon Pods are updated according to the update strategy specified in the DaemonSet. The RollingUpdate strategy updates Pods automatically and in a rolling fashion, whereas the OnDelete strategy requires you to manually delete each Pod for it to be updated.
  • If Pods deployed through a DaemonSet require extended access to the Node's resources, such as the file system, network environment, or privileged system calls, you configure this in the Pod template in the DaemonSet.
  • Daemon Pods should generally have a higher priority than Pods deployed via Deployments. This is achieved by setting a higher PriorityClass for the Pod.
  • Client Pods can communicate with local daemon Pods through a Service with internalTrafficPolicy set to Local, or through the Node's IP address if the daemon Pod is configured to use the node's network environment (hostNetwork) or a host port is forwarded to the Pod (hostPort).

FAQ

What is a DaemonSet and when should I use one?
A DaemonSet ensures exactly one Pod replica runs on each (eligible) node. It’s ideal for per-node system services such as log/metrics collectors, network/storage plugins, kube-proxy, or any agent that must run locally on every node or on a selected subset of nodes.
How does a DaemonSet decide where to run Pods and how is scheduling handled?
The DaemonSet controller reconciles nodes and Pods so that each eligible node has one matching Pod. It creates Pods with nodeAffinity targeting specific nodes and lets the Kubernetes Scheduler place them (instead of hard-setting nodeName). When nodes are added or removed, Pods are created or deleted accordingly; missing Pods are recreated and extra matching Pods are removed.
Why might I see fewer daemon Pods than the total number of nodes?
Control-plane nodes are typically tainted so normal workloads aren’t scheduled there. DaemonSets follow the same rule, so your controller might place Pods only on worker nodes. To include control-plane nodes, add tolerations (for example, a toleration with operator: Exists) to the Pod template so Pods can tolerate those node taints.
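A Pod template fragment with a blanket toleration might look like this sketch (tolerating every taint, including the control-plane taint):

```yaml
# Pod template fragment: tolerate all taints so the daemon
# also runs on control-plane nodes.
spec:
  template:
    spec:
      tolerations:
      - operator: Exists    # tolerates any taint, e.g. node-role.kubernetes.io/control-plane
```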
What’s the difference between a DaemonSet’s label selector and its node selector?
The label selector matches Pods to the DaemonSet and is immutable; it must match labels in the Pod template. The node selector filters which nodes are eligible to run the Pods and is mutable; you can change it to move nodes in or out of scope. Don’t confuse Pod label selectors with node selectors—they serve different purposes.
How can I run daemon Pods only on specific nodes?
Set spec.template.spec.nodeSelector in the DaemonSet to match desired node labels (for example, gpu: cuda or standard labels like kubernetes.io/arch=amd64, kubernetes.io/os=linux). You can dynamically include or exclude nodes by adding or removing labels. For heterogeneous architectures, either create multiple DaemonSets targeting different arch labels or use a single DaemonSet with a multi-arch image.
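Combining the custom label and the standard well-known labels from the answer above, a node selector fragment could look like this (the gpu: cuda label is a hypothetical custom label):

```yaml
# Pod template fragment: run only on linux/amd64 nodes
# that also carry a (hypothetical) gpu: cuda label.
spec:
  template:
    spec:
      nodeSelector:
        gpu: cuda
        kubernetes.io/arch: amd64
        kubernetes.io/os: linux
```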
How do I inspect a DaemonSet and interpret its status?
Use kubectl get ds (optionally -o wide) for a quick view and kubectl describe ds for details and events. In the status: desiredNumberScheduled is how many nodes should run a Pod; currentNumberScheduled is how many do; numberReady/numberAvailable indicate readiness/availability; updatedNumberScheduled shows how many nodes have Pods matching the current template; numberMisscheduled counts nodes running a Pod that shouldn’t be.
How are DaemonSets updated and what strategies exist?
DaemonSets support RollingUpdate (default) and OnDelete. RollingUpdate typically uses maxSurge=0 and maxUnavailable=1, replacing Pods one node at a time after readiness and minReadySeconds. Many daemons shouldn’t run two instances per node, so maxSurge stays 0. OnDelete is semi-automatic: you delete Pods manually, and the controller recreates them with the new template—useful for careful, stepwise updates of critical agents.
What special privileges or host access do daemon Pods often require?
Node agents may need elevated kernel or host access. Options include: running privileged (broad kernel access), adding specific Linux capabilities (e.g., NET_ADMIN, NET_RAW) instead of full privilege, mounting hostPath volumes for files like /run/xtables.lock or /lib/modules, and using host namespaces such as hostNetwork (and optionally hostPID/hostIPC) when needed. Grant only the minimum required privileges.
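The narrower capability-based approach plus a hostPath mount might be sketched like this (container and volume names are hypothetical):

```yaml
# Pod template fragment: specific capabilities instead of full
# privilege, plus a hostPath mount for host kernel modules.
spec:
  template:
    spec:
      containers:
      - name: agent
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]   # narrower than privileged: true
        volumeMounts:
        - name: modules
          mountPath: /lib/modules
          readOnly: true
      volumes:
      - name: modules
        hostPath:
          path: /lib/modules    # directory on the host node
```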
How can client Pods reliably talk to the local daemon on the same node?
Three common approaches: 1) hostPort: map a node port to the container’s port and have clients connect to the node’s IP (clients can discover the node IP via the Downward API). 2) hostNetwork: run the agent on the node’s network and bind to a known port. 3) A Service with internalTrafficPolicy=Local: exposes Pods via cluster networking but routes only to Pods on the same node—cleanest and least invasive; prefer this unless you need external node-level access.
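The first approach can be sketched as two fragments: the daemon binds a hostPort, and each client Pod learns its own node’s IP through the Downward API (the port is illustrative):

```yaml
# Daemon Pod container fragment: bind the agent's port on the node.
ports:
- containerPort: 9090
  hostPort: 9090           # reachable at <node IP>:9090
---
# Client Pod container fragment: inject the node IP via the Downward API.
env:
- name: NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
```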
How do I make daemon Pods higher priority and ensure they aren’t evicted first?
Assign a PriorityClass via spec.template.spec.priorityClassName (for example, system-node-critical or system-cluster-critical, or your own). Higher-priority Pods can preempt lower-priority ones on a full node, ensuring critical agents like kube-proxy keep running.
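Referencing one of the built-in high-priority classes from the Pod template is a one-line change:

```yaml
# Pod template fragment: use a built-in high-priority class.
spec:
  template:
    spec:
      priorityClassName: system-node-critical
```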
