Kubernetes Upgrade: The Definitive Guide to Do-It-Yourself (2024)

Overview

This article will cover:

  • Kubernetes Upgrade Paths
  • Upgrading Kubernetes: A Step-by-Step Guide
  • Etcd Upgrade Paths
  • Upgrading etcd

Operating an enterprise Kubernetes deployment is difficult. That is due in no small part to the fact that Kubernetes is not just one tool, but a collection of a dozen-odd components that provide functionality ranging from application deployments and upgrades, to logging and monitoring, to persistent data storage.

Kubernetes is one of the most active projects on GitHub to date, having amassed more than 80k commits and 550 releases. The process of installing an HA Kubernetes cluster on-premises or in the cloud is well documented and, in most cases, we don't have to perform many steps. There are additional tools like Kops or Kubespray that help to automate some of this process.

Every so often, though, we are required to upgrade the cluster to keep up with the latest security features and bug fixes, as well as benefit from new features released on an ongoing basis. This is especially important when we have installed a really outdated version (for example v1.9), or if we want to automate the process and always be on top of the latest supported version.

In general, when operating an HA Kubernetes cluster, the upgrade process involves two separate tasks that must not overlap or be performed simultaneously: upgrading the Kubernetes cluster, and, if needed, upgrading the etcd cluster, which is the distributed key-value backing store of Kubernetes. Let's see how we can perform those tasks with minimal disruptions.

Kubernetes Upgrade Paths

Note that this upgrade process is specifically for a manual Kubernetes installation in the cloud or on-premises. It does not cover managed Kubernetes environments (like our own, where upgrades are handled automatically by the platform), or Kubernetes services on public clouds (such as AWS EKS or Azure Kubernetes Service), which have their own upgrade processes.

For the purposes of this tutorial, we assume that healthy 3-node Kubernetes and etcd clusters have been provisioned. I've set up mine using six DigitalOcean Droplets (three for the Kubernetes masters and three for etcd), plus one for the worker node.

Let’s say that we have the following Kubernetes master nodes all running v1.13:

Name     Address      Hostname
kube-1   10.0.11.1    kube-1.example.com
kube-2   10.0.11.2    kube-2.example.com
kube-3   10.0.11.3    kube-3.example.com

Also, we have one worker node, named worker, running v1.13.

The process of upgrading the Kubernetes master nodes is documented on the Kubernetes documentation site, which also lists the currently supported upgrade paths.

Only one HA upgrade path is documented there, but we can reuse the steps for the other upgrade paths. In this example, we are going to follow the upgrade path from v1.13 to v1.14 HA. Skipping a version (for example, upgrading from v1.13 to v1.15) is not recommended.
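Because skipping a minor version is not recommended, any automation around cluster upgrades should validate the jump before invoking kubeadm. A minimal sketch in Python (the helper names here are illustrative, not part of any Kubernetes tooling):

```python
def minor(version: str) -> tuple:
    """Parse 'v1.13.2' into its (major, minor) pair, ignoring the patch level."""
    parts = version.lstrip("v").split(".")
    return int(parts[0]), int(parts[1])

def is_safe_upgrade(current: str, target: str) -> bool:
    """Allow staying on the same minor version (patch-level upgrades) or
    moving exactly one minor version forward; reject everything else."""
    cur = minor(current)
    tgt = minor(target)
    return tgt in {cur, (cur[0], cur[1] + 1)}
```

For example, an upgrade from v1.13 to v1.14 passes this check, while a jump straight to v1.15 is rejected.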

Before we start, we should always check the release notes of the version that we intend to upgrade to, in case they mention breaking changes.

Upgrading Kubernetes: A Step-by-Step Guide

Let’s follow the upgrade steps now:

1. Log in to the first node and upgrade the kubeadm tool only:

$ ssh admin@10.0.11.1
$ apt-mark unhold kubeadm && \
  apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
  apt-mark hold kubeadm

The reason we run apt-mark unhold and apt-mark hold is that upgrading kubeadm on its own would otherwise pull the other components, like kubelet, up to the latest available version (v1.15 at the time of writing) by default, which is not what we want. To prevent that, we use hold to mark a package as held back, which stops it from being automatically installed, upgraded, or removed.

2. Verify the upgrade plan:

$ kubeadm upgrade plan
...
COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.0   v1.14.0
Controller Manager   v1.13.0   v1.14.0
Scheduler            v1.13.0   v1.14.0
Kube Proxy           v1.13.0   v1.14.0
...

3. Apply the upgrade plan:

$ kubeadm upgrade apply v1.14.0

4. Update kubelet and restart the service:

$ apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet

5. Apply the upgrade plan to the other master nodes:

$ ssh admin@10.0.11.2
$ kubeadm upgrade node experimental-control-plane
$ ssh admin@10.0.11.3
$ kubeadm upgrade node experimental-control-plane

6. Upgrade kubectl on all master nodes:

$ apt-mark unhold kubectl && apt-get update && apt-get install -y kubectl=1.14.0-00 && apt-mark hold kubectl

7. Upgrade kubeadm on the first worker node:

$ ssh worker@10.0.12.1
$ apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm

8. Log in to a master node and drain the first worker node:

$ ssh admin@10.0.11.1
$ kubectl drain worker --ignore-daemonsets

9. Upgrade the kubelet config on the worker node:

$ ssh worker@10.0.12.1
$ kubeadm upgrade node config --kubelet-version v1.14.0

10. Upgrade kubelet on the worker node and restart the service:

$ apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet

11. Restore the worker node:

$ ssh admin@10.0.11.1
$ kubectl uncordon worker

12. Repeat steps 7-11 for the rest of the worker nodes.

13. Verify the health of the cluster:

$ kubectl get nodes
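When scripting steps 7-11 across a fleet of workers, the invariant to preserve is that each node is drained, upgraded, and uncordoned before the next one is touched, so only one worker is out of service at a time. A hypothetical sketch (the run callback stands in for the real ssh/kubectl/kubeadm invocations, which are assumptions here):

```python
def upgrade_workers(nodes, run):
    """Upgrade worker nodes one at a time: drain, upgrade, uncordon.

    `run(action, node)` is a caller-supplied function that performs the
    actual commands; this wrapper only enforces the ordering."""
    for node in nodes:
        run("drain", node)      # kubectl drain <node> --ignore-daemonsets
        run("upgrade", node)    # upgrade kubeadm/kubelet packages on the node
        run("uncordon", node)   # kubectl uncordon <node>
```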

Etcd Upgrade Paths

As you already know, etcd is the distributed key-value backing store for Kubernetes, and it is essentially the cluster's source of truth. When we are running an HA Kubernetes cluster, we also want to run an HA etcd cluster, so that we have a fallback in case some nodes fail.
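The usual minimum of three members comes down to quorum arithmetic: a cluster of n members needs a strict majority (floor(n/2) + 1) alive to commit writes, so it tolerates floor((n-1)/2) failures. A quick illustration:

```python
def fault_tolerance(members: int) -> int:
    """How many etcd members can fail while the cluster keeps quorum
    (quorum is a strict majority: members // 2 + 1)."""
    quorum = members // 2 + 1
    return members - quorum
```

Three members tolerate one failure and five tolerate two, while four members tolerate no more than three do, which is why odd cluster sizes are preferred.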

Typically, we would have a minimum of three etcd nodes running the latest supported version. The process of upgrading the etcd nodes is documented in the etcd repository, along with the currently supported upgrade paths.

When planning an etcd upgrade, you should always follow this plan:

  • Check which version you are using. For example:
    $ ./etcdctl endpoint status
  • Do not jump more than one minor version. For example, do not upgrade from 3.3 to 3.5. Instead, go from 3.3 to 3.4, and then from 3.4 to 3.5.
  • Use the bundled Kubernetes etcd image. The Kubernetes team bundles a custom etcd image which contains etcd and etcdctl binaries for multiple etcd versions, as well as a migration operator utility for upgrading and downgrading etcd. This will help you automate the process of migrating and upgrading etcd instances.
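The one-minor-version rule can be encoded when automating etcd upgrades, expanding a jump into the intermediate hops it actually requires. A minimal sketch, assuming simple major.minor version strings:

```python
def plan_etcd_path(current: str, target: str) -> list:
    """Expand an etcd upgrade into single-minor-version hops,
    e.g. 3.3 -> 3.5 becomes ['3.4', '3.5']."""
    cur_major, cur_minor = (int(x) for x in current.split("."))
    tgt_major, tgt_minor = (int(x) for x in target.split("."))
    if tgt_major != cur_major or tgt_minor < cur_minor:
        raise ValueError("only forward, same-major-version upgrades are planned here")
    return [f"{cur_major}.{m}" for m in range(cur_minor + 1, tgt_minor + 1)]
```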

Out of those paths, the most important change is the path from 2.3 to 3.0, as there is a major API change, which is documented in the etcd migration guide. You should also take note that:

  • etcd v3 is able to handle requests for both v2 and v3 data. For example, we can use the ETCDCTL_API environment variable to specify the API version:
    $ ETCDCTL_API=2 ./etcdctl endpoint status
  • Running etcd v3 against a v2 data dir does not automatically upgrade the data dir to the v3 format.
  • Using the v2 API against etcd v3 only updates the v2 data stored in etcd.

You may also wonder which versions of Kubernetes have support for each etcd version. There is a small section in the documentation which says:

  • Kubernetes v1.0: supports etcd2 only
  • Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
  • Kubernetes v1.6.0: new clusters created with kube-up.sh default to etcd3, and kube-apiserver defaults to etcd3
  • Kubernetes v1.9.0: deprecation of etcd2 storage backend announced
  • Kubernetes v1.13.0: etcd2 storage backend removed; kube-apiserver will refuse to start with --storage-backend=etcd2, with the message "etcd2 is no longer a supported storage backend"

So, based on that information, if you are running Kubernetes v1.12.0 with etcd2, then you are required to upgrade etcd to v3 when you upgrade Kubernetes to v1.13.0, as --storage-backend=etcd2 is no longer supported. If you have Kubernetes v1.12.0 or below, you can have both etcd2 and etcd3 running.
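The support matrix above can be summarized in code; a rough sketch (version granularity is simplified to (major, minor) tuples, so the v1.5.1 boundary is approximated as v1.5):

```python
def supported_etcd_backends(k8s_version: tuple) -> set:
    """Which etcd storage backends a given Kubernetes version can use,
    per the support notes listed above."""
    if k8s_version >= (1, 13):
        return {"etcd3"}           # etcd2 backend removed in v1.13.0
    if k8s_version >= (1, 5):
        return {"etcd2", "etcd3"}  # etcd3 support added in v1.5.1
    return {"etcd2"}               # v1.0 supported etcd2 only
```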

Before every step, we should always perform basic maintenance procedures, such as taking periodic snapshots and doing periodic smoke-test rollbacks. Make sure to check the health of the cluster first.

Let’s say we have the following etcd cluster nodes:

Name     Address      Hostname
etcd-1   10.0.11.1    etcd-1.example.com
etcd-2   10.0.11.2    etcd-2.example.com
etcd-3   10.0.11.3    etcd-3.example.com
$ ./etcdctl cluster-health
member 6e3bd23ae5f1eae2 is healthy: got healthy result from http://10.0.1.1:22379
member 924e2e83f93f2565 is healthy: got healthy result from http://10.0.1.2:22379
member 8211f1d0a64f3269 is healthy: got healthy result from http://10.0.1.3:22379
cluster is healthy
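In an automated pipeline, cluster-health output like the above can be parsed before proceeding with the upgrade; a small sketch, assuming the etcdctl v2 cluster-health text format shown here:

```python
def parse_cluster_health(output: str):
    """Parse `etcdctl cluster-health` output into the list of healthy
    member IDs and an overall cluster-healthy flag."""
    member_lines = [line for line in output.splitlines() if line.startswith("member ")]
    healthy = [line.split()[1] for line in member_lines if " is healthy" in line]
    return healthy, output.strip().splitlines()[-1] == "cluster is healthy"
```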

Upgrading etcd

Based on the above considerations, a typical etcd upgrade procedure consists of the following steps:

1. Log in to the first node and stop the existing etcd process:

$ ssh 10.0.1.1
$ kill `pgrep etcd`

2. Back up the etcd data directory to provide a downgrade path in case of errors:

$ ./etcdctl backup \
    --data-dir %data_dir% \
    [--wal-dir %wal_dir%] \
    --backup-dir %backup_data_dir% \
    [--backup-wal-dir %backup_wal_dir%]

3. Download the new binary from the etcd releases page and start the etcd server using the same configuration:

ETCD_VER=v3.3.15

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}

rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
rm -rf /usr/local/etcd && mkdir -p /usr/local/etcd

curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /usr/local/etcd --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

/usr/local/etcd/etcd --version
ETCDCTL_API=3 /usr/local/etcd/etcdctl version

# start etcd server
/usr/local/etcd/etcd -name etcd-1 \
  -listen-peer-urls http://10.0.1.1:2380 \
  -listen-client-urls http://10.0.1.1:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://10.0.1.1:2379,http://127.0.0.1:2379

4. Repeat steps 1 to 3 for all other members.

5. Verify that the cluster is healthy:

$ ./etcdctl endpoint health
10.0.1.1:12379 is healthy: successfully committed proposal: took =
10.0.1.2:12379 is healthy: successfully committed proposal: took =
10.0.1.3:12379 is healthy: successfully committed proposal: took =

Note: If you are having issues connecting to the cluster, you may need to provide HTTPS transport security certificates; for example:

$ ./etcdctl --ca-file=/etc/kubernetes/pki/etcd/ca.crt \
    --cert-file=/etc/kubernetes/pki/etcd/server.crt \
    --key-file=/etc/kubernetes/pki/etcd/server.key \
    endpoint health

For convenience, you can use the following environment variables:

ETCD_CA_FILE=/etc/kubernetes/pki/etcd/ca.crt
ETCD_CERT_FILE=/etc/kubernetes/pki/etcd/server.crt
ETCD_KEY_FILE=/etc/kubernetes/pki/etcd/server.key

Recommended Readings

  • Kubernetes Service Mesh: A Comparison of Istio, Linkerd and Consul
  • A Practical Guide to Kubernetes Service Discovery
  • Kubernetes Multi-Tenancy Best Practices
  • Kubernetes CI/CD Best Practices
  • Kubernetes Autoscaling
  • Kubernetes Stateful Applications

Final thoughts

In this article, we walked through step-by-step instructions for upgrading both Kubernetes and etcd clusters. These are important maintenance procedures for day-to-day operations in a typical business environment, and anyone who works with HA Kubernetes deployments should become familiar with them.

However, if you favor operational velocity and fewer maintenance tasks, you can consider moving to a fully managed Kubernetes solution that automates both deployments and Day-2 operations, including zero-touch upgrades.

Learn more about Platform9 Managed Kubernetes.

Try our Sandbox to experience remote Kubernetes upgrades – with no operational overhead or service downtime.
