In Kubernetes, persistent volumes were initially managed by in-tree plugins, but this approach hindered development and feature implementation because in-tree plugins were compiled and shipped as part of the Kubernetes source code. To address this, the Container Storage Interface (CSI) was introduced, standardizing how storage systems are exposed to containerized workloads. CSI drivers for standard volumes such as Google Cloud Persistent Disk were developed and are continuously evolving, and the implementation of the in-tree plugins is being transitioned to CSI drivers. If you have Google Kubernetes Engine (GKE) clusters that still use in-tree volumes, follow the instructions below to migrate to CSI-provisioned volumes.

Why migrate?

There are various benefits to using the gce-pd CSI driver, including improved deployment automation, customer-managed encryption keys (CMEK), volume snapshots, and more.

In GKE version 1.22 and later, CSI Migration is enabled: existing volumes that use the gce-pd provisioner are managed through CSI drivers via transparent migration in the Kubernetes controller backend, and no changes are required to any StorageClass. However, you must use the pd.csi.storage.gke.io provisioner in the StorageClass to enable features like CMEK or volume snapshots. Here is an example of a StorageClass with the in-tree storage plugin and with the CSI driver:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
...
provisioner: kubernetes.io/gce-pd       # <--- in-tree
provisioner: pd.csi.storage.gke.io      # <--- CSI provisioner
```

[Please perform the actions below in your test/dev environment first.]

Before you begin

To test the migration, create a GKE cluster. Once the cluster is ready, check the provisioner of your default storage class. If it is already the CSI provisioner (pd.csi.storage.gke.io), change it to gce-pd (in-tree) by following these instructions.

Refer to this page if you want to deploy a stateful PostgreSQL database application in a GKE cluster. We will refer to this sample application throughout this blog. Again, make sure that the volumes (PVCs) attached to the pods are created by a storage class (standard) with the gce-pd provisioner.

As a next step, we will back up this application using Backup for GKE (BfG) and restore it while changing the provisioner from gce-pd (the in-tree plugin) to pd.csi.storage.gke.io (the CSI driver).

Create a backup plan

Please follow this page to ensure you have BfG enabled on your cluster. When you enable the BfG agent in your GKE cluster, BfG provides a CustomResourceDefinition that introduces a new kind of Kubernetes resource: the ProtectedApplication. For more on ProtectedApplication, please visit this page. A sample manifest file:

```
kind: ProtectedApplication
apiVersion: gkebackup.gke.io/v1alpha2
metadata:
  name: postgresql
  namespace: blog
spec:
  resourceSelection:
    type: Selector
    selector:
      matchLabels:
        app.kubernetes.io/name: postgresql-ha
  components:
    - name: postgresql
      resourceKind: StatefulSet
      resourceNames: ["db-postgresql-ha-postgresql"]
      strategy:
        type: BackupAllRestoreAll
        backupAllRestoreAll: {}
```
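Applying the manifest is a standard kubectl apply; the filename below is illustrative and not from the original post:

```
# apply the ProtectedApplication manifest (it targets the blog namespace via metadata.namespace)
❯ kubectl apply -f protected-application.yaml
```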
If the Ready to backup status shows as true, your application is ready for backup:

```
❯ kubectl describe protectedapplication postgresql
......
Status:
  Ready To Backup: true
```

Now let's create a backup plan following these instructions. Up until now, we have only created a backup plan and haven't taken an actual backup. But before we start the backup process, we have to bring down the application.

Bring down the application

We have to bring down the application right before taking its backup (this is where the application downtime starts). We do this to prevent any data loss during the migration. The application is currently exposed via the service db-postgresql-ha-pgpool, whose selector matches the pgpool pods on their app.kubernetes.io/instance, app.kubernetes.io/name, and app.kubernetes.io/component labels. We'll patch this service by overriding those selectors with empty values so that no new requests can reach the database. Save the following file as patch.yaml and apply it using kubectl:

```
spec:
  selector:
    app.kubernetes.io/instance: ""
    app.kubernetes.io/name: ""
    app.kubernetes.io/component: ""
```

```
❯ kubectl patch service db-postgresql-ha-pgpool --patch-file patch.yaml
service/db-postgresql-ha-pgpool patched
```

You should no longer be able to connect to your app (i.e., the database).

Start a backup manually

Navigate to the GKE Console → Backup for GKE → Backup Plans and click Start a backup.

Restore from the backup

We will restore this backup to a target cluster. Note that you do have the option to select the same cluster as both source and target, but the recommendation is to use a new GKE cluster as your target cluster. The restore process completes in the following two steps:

1. Create a restore plan
2. Restore the backup using the restore plan

Create a restore plan

You can follow these instructions to create a restore plan. While adding the transformation rule(s), we will change the storage class from standard to standard-rwo: Add transformation rules → Add Rule (Rename a PVC's Storage Class). Please see this page for more details. Next, review the configuration and create the plan.

Restore the backup using the (previously created) restore plan

When a backup is restored, the Kubernetes resources are re-created in the target cluster. Navigate to the GKE Console → Backup for GKE → BACKUPS tab to see the latest backup(s). Select the backup you took before bringing down the application to view its details and click SET UP A RESTORE. Fill in all the mandatory fields and click RESTORE.

Once done, switch the context to the target cluster and see how BfG has restored the application in the same namespace. The data was restored into new PVCs (verify with kubectl -n blog get pvc). Their storage class is gce-pd-gkebackup-de, which is a special storage class used to provision volumes from the backup. Get the details of one of the restored volumes to confirm that BfG has successfully changed the provisioner from in-tree to CSI: the new volumes are created by the CSI provisioner. Great!

Bring up the application

Let's patch the service db-postgresql-ha-pgpool back with the original selectors to bring our application up. Save the patch file as new_patch.yaml and apply it using kubectl.
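The original post does not reproduce the contents of new_patch.yaml. As a sketch, assuming the sample application was installed from the postgresql-ha chart with a release named db (matching the db-postgresql-ha-* resource names above), the original pgpool selector labels would look roughly like this; confirm the actual values from your own service (for example with kubectl get service db-postgresql-ha-pgpool -o yaml) before patching:

```
# new_patch.yaml — restores the original selector labels on the pgpool service.
# The label values below are assumptions based on the postgresql-ha chart conventions;
# verify them against your own service before applying.
spec:
  selector:
    app.kubernetes.io/instance: db
    app.kubernetes.io/name: postgresql-ha
    app.kubernetes.io/component: pgpool
```

Applying it mirrors the earlier patch command:

```
❯ kubectl patch service db-postgresql-ha-pgpool --patch-file new_patch.yaml
```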
We are able to connect to our database application again.

Note: The downtime will depend on your application size. For more information, please see this link.

Use it today

Backup for GKE can help you reduce the overhead of this migration with minimal downtime. It can also help you prepare for disaster recovery.

View the full article