Showing results for tags 'oci'.
-
Author: Sascha Grunert

Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12. It can be used to sandbox the privileges of a process, restricting the calls it is able to make from userspace into the kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a node to your Pods and containers.

But distributing those seccomp profiles is a major challenge in Kubernetes, because the JSON files have to be available on all nodes where a workload can possibly run. Projects like the Security Profiles Operator solve that problem by running as a daemon within the cluster, which makes me wonder which part of that distribution could be done by the container runtime.

Runtimes usually apply the profiles from a local path, for example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: container
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: Localhost
          localhostProfile: nginx-1.25.3.json
```

The profile nginx-1.25.3.json has to be available in the seccomp subdirectory of the kubelet's root directory. This means the default location for the profile on disk would be /var/lib/kubelet/seccomp/nginx-1.25.3.json. If the profile is not available, then runtimes will fail on container creation like this:

```console
kubectl get pods
NAME   READY   STATUS                 RESTARTS   AGE
pod    0/1     CreateContainerError   0          38s

kubectl describe pod/pod | tail
Tolerations:  node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  117s                default-scheduler  Successfully assigned default/pod to 127.0.0.1
  Normal   Pulling    117s                kubelet            Pulling image "nginx:1.25.3"
  Normal   Pulled     111s                kubelet            Successfully pulled image "nginx:1.25.3" in 5.948s (5.948s including waiting)
  Warning  Failed     7s (x10 over 111s)  kubelet            Error: setup seccomp: unable to load local profile "/var/lib/kubelet/seccomp/nginx-1.25.3.json": open /var/lib/kubelet/seccomp/nginx-1.25.3.json: no such file or directory
  Normal   Pulled     7s (x9 over 111s)   kubelet            Container image "nginx:1.25.3" already present on machine
```

The major obstacle of having to manually distribute the Localhost profiles (for example, with a script like the sketch below) will lead many end users to fall back to RuntimeDefault, or even to running their workloads as Unconfined (with seccomp disabled).
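To make the manual distribution step concrete: before CRI-O's new annotations, getting a Localhost profile onto every node meant copying the JSON file there yourself. The following is a minimal, hypothetical sketch of that chore; the node names, SSH access, and the tiny example profile are assumptions for illustration and are not taken from the original post.

```bash
#!/usr/bin/env bash
# Hypothetical manual distribution of a seccomp profile to every node.
# Assumes SSH access to the nodes and the default kubelet root directory.
set -euo pipefail

PROFILE=nginx-1.25.3.json
NODES=(node-1 node-2 node-3)   # assumed node names

# A deliberately tiny example profile: deny everything except a few syscalls.
cat > "${PROFILE}" <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    { "names": ["read", "write", "exit_group"], "action": "SCMP_ACT_ALLOW" }
  ]
}
EOF

for node in "${NODES[@]}"; do
  ssh "${node}" sudo mkdir -p /var/lib/kubelet/seccomp
  scp "${PROFILE}" "${node}:/tmp/${PROFILE}"
  ssh "${node}" sudo mv "/tmp/${PROFILE}" /var/lib/kubelet/seccomp/
done
```

Every new profile, profile update, or newly joined node requires re-running something like this, which is exactly the operational burden the rest of the post removes.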
CRI-O to the rescue

The Kubernetes container runtime CRI-O provides various features using custom annotations. The v1.30 release adds support for a new set of annotations called seccomp-profile.kubernetes.cri-o.io/POD and seccomp-profile.kubernetes.cri-o.io/<CONTAINER>. Those annotations allow you to specify:

- a seccomp profile for a specific container, when used as seccomp-profile.kubernetes.cri-o.io/<CONTAINER> (example: seccomp-profile.kubernetes.cri-o.io/webserver: 'registry.example/example/webserver:v1')
- a seccomp profile for every container within a pod, when used without the container name suffix but with the reserved name POD: seccomp-profile.kubernetes.cri-o.io/POD
- a seccomp profile for a whole container image, if the image itself contains the annotation seccomp-profile.kubernetes.cri-o.io/POD or seccomp-profile.kubernetes.cri-o.io/<CONTAINER>

CRI-O will only respect the annotation if the runtime is configured to allow it, and only for workloads running as Unconfined. All other workloads will still use the value from the securityContext, which has a higher priority.

The annotations alone will not help much with the distribution of the profiles, but the way they can be referenced will! For example, you can now specify seccomp profiles like regular container images by using OCI artifacts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2
spec: …
```

The image quay.io/crio/seccomp:v2 contains a seccomp.json file, which contains the actual profile content. Tools like ORAS or Skopeo can be used to inspect the contents of the image:

```console
oras pull quay.io/crio/seccomp:v2
Downloading 92d8ebfa89aa seccomp.json
Downloaded  92d8ebfa89aa seccomp.json
Pulled [registry] quay.io/crio/seccomp:v2
Digest: sha256:f0205dac8a24394d9ddf4e48c7ac201ca7dcfea4c554f7ca27777a7f8c43ec1b

jq . seccomp.json | head
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "defaultErrno": "ENOSYS",
  "archMap": [
    {
      "architecture": "SCMP_ARCH_X86_64",
      "subArchitectures": [
        "SCMP_ARCH_X86",
        "SCMP_ARCH_X32"

# Inspect the plain manifest of the image
skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq .
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.cncf.seccomp-profile.config.v1+json",
    "digest": "sha256:ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356",
    "size": 3
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar",
      "digest": "sha256:92d8ebfa89aa6dd752c6443c27e412df1b568d62b4af129494d7364802b2d476",
      "size": 18853,
      "annotations": {
        "org.opencontainers.image.title": "seccomp.json"
      }
    }
  ],
  "annotations": {
    "org.opencontainers.image.created": "2024-02-26T09:03:30Z"
  }
}
```

The image manifest contains a reference to a specific required config media type (application/vnd.cncf.seccomp-profile.config.v1+json) and a single layer (application/vnd.oci.image.layer.v1.tar) pointing to the seccomp.json file. But now, let's give that new feature a try!

Using the annotation for a specific container or whole pod

CRI-O needs to be configured adequately before it can utilize the annotation. To do this, add the annotation to the allowed_annotations array for the runtime. This can be done by using a drop-in configuration /etc/crio/crio.conf.d/10-crun.conf like this:

```toml
[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
allowed_annotations = [
    "seccomp-profile.kubernetes.cri-o.io",
]
```

Now, let's run CRI-O from the latest main commit. This can be done by either building it from source, using the static binary bundles, or using the prerelease packages. To demonstrate this, I ran the crio binary from my command line using a single-node Kubernetes cluster via local-up-cluster.sh.
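The post does not show the exact invocation of local-up-cluster.sh. A common way to wire it up against a locally running CRI-O looks like the sketch below; the environment variables and paths are assumptions based on the usual CRI-O development setup, not values taken from the article.

```bash
# Hypothetical single-node setup: start the freshly built crio binary, then
# point local-up-cluster.sh (run from a Kubernetes source checkout) at its socket.
sudo ./bin/crio &    # assumes a crio binary built from the main branch

CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock' \
./hack/local-up-cluster.sh
```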
Now that the cluster is up and running, let's try a pod without the annotation, running as seccomp Unconfined:

```console
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: container
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: Unconfined

kubectl apply -f pod.yaml
```

The workload is up and running:

```console
kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          15s
```

And no seccomp profile got applied if I inspect the container using crictl:

```console
export CONTAINER_ID=$(sudo crictl ps --name container -q)
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp
null
```

Now, let's modify the pod to apply the profile quay.io/crio/seccomp:v2 to the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    seccomp-profile.kubernetes.cri-o.io/container: quay.io/crio/seccomp:v2
spec:
  containers:
    - name: container
      image: nginx:1.25.3
```

I have to delete and recreate the Pod, because only recreation will apply a new seccomp profile:

```console
kubectl delete pod/pod
pod "pod" deleted
kubectl apply -f pod.yaml
pod/pod created
```

The CRI-O logs will now indicate that the runtime pulled the artifact:

```console
WARN[…] Allowed annotations are specified for workload [seccomp-profile.kubernetes.cri-o.io]
INFO[…] Found container specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io/container=quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
```

And the container is finally using the profile:

```console
export CONTAINER_ID=$(sudo crictl ps --name container -q)
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
```

The same would work for every container in the pod, if users replace the /container suffix with the reserved name /POD, for example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2
spec:
  containers:
    - name: container
      image: nginx:1.25.3
```

Using the annotation for a container image

While specifying seccomp profiles as OCI artifacts on certain workloads is a cool feature, the majority of end users would like to link seccomp profiles to published container images. This can be done by using a container image annotation; instead of being applied to a Kubernetes Pod, the annotation is metadata applied to the container image itself. For example, Podman can be used to add the image annotation directly during the image build:

```console
podman build \
    --annotation seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 \
    -t quay.io/crio/nginx-seccomp:v2 .
```
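The build context for that image is not shown in the post. A minimal, hypothetical setup that would work with the command above could look like the following sketch; the nginx base image matches the one used throughout the post, while the directory name and the push step are assumptions.

```bash
# Hypothetical build context for quay.io/crio/nginx-seccomp:v2.
mkdir -p nginx-seccomp && cd nginx-seccomp

cat > Containerfile <<'EOF'
# Nothing special is required in the image itself; the link to the seccomp
# profile is carried purely by the image annotation added at build time.
FROM nginx:1.25.3
EOF

podman build \
    --annotation seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 \
    -t quay.io/crio/nginx-seccomp:v2 .

# Push to the registry so that CRI-O can evaluate the annotation at pull time.
podman push quay.io/crio/nginx-seccomp:v2
```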
The pushed image then contains the annotation:

```console
skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2 | jq '.annotations."seccomp-profile.kubernetes.cri-o.io"'
"quay.io/crio/seccomp:v2"
```

If I now use that image in a CRI-O test pod definition:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  # no Pod annotations set
spec:
  containers:
    - name: container
      image: quay.io/crio/nginx-seccomp:v2
```

Then the CRI-O logs will indicate that the image annotation got evaluated and the profile got applied:

```console
kubectl delete pod/pod
pod "pod" deleted
kubectl apply -f pod.yaml
pod/pod created

INFO[…] Found image specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Created container 116a316cd9a11fe861dd04c43b94f45046d1ff37e2ed05a4e4194fcaab29ee63: default/pod/container id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer

export CONTAINER_ID=$(sudo crictl ps --name container -q)
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
```

For container images, the annotation seccomp-profile.kubernetes.cri-o.io will be treated in the same way as seccomp-profile.kubernetes.cri-o.io/POD and applies to the whole pod. In addition to that, the whole feature also works when using the container-specific annotation on an image, for example if a container is named container1:

```console
skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2-container | jq '.annotations."seccomp-profile.kubernetes.cri-o.io/container1"'
"quay.io/crio/seccomp:v2"
```

The cool thing about this whole feature is that users can now create seccomp profiles for specific container images and store them side by side in the same registry. Linking the images to the profiles provides great flexibility to maintain them over the whole application life cycle.

Pushing profiles using ORAS

The actual creation of the OCI object that contains a seccomp profile requires a bit more work when using ORAS. I have the hope that tools like Podman will simplify the overall process in the future. Right now, the container registry needs to be OCI compatible, which is also the case for Quay.io. CRI-O expects the seccomp profile object to have a container image media type (application/vnd.cncf.seccomp-profile.config.v1+json), while ORAS uses application/vnd.oci.empty.v1+json by default. To achieve all of that, the following commands can be executed:

```console
echo "{}" > config.json
oras push \
    --config config.json:application/vnd.cncf.seccomp-profile.config.v1+json \
    quay.io/crio/seccomp:v2 seccomp.json
```

The resulting image contains the media type that CRI-O expects. ORAS pushes a single layer seccomp.json to the registry. The name of the profile does not matter much: CRI-O will pick the first layer and check whether it can act as a seccomp profile.
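To double-check that a profile pushed this way actually carries the config media type CRI-O looks for, the manifest can be inspected once more. This is a hedged verification step using the same tools and the manifest fields shown earlier, not something demonstrated in the original post.

```bash
# Verify that the pushed artifact uses the expected config media type.
skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq -r '.config.mediaType'
# Expected output: application/vnd.cncf.seccomp-profile.config.v1+json

# The profile itself travels as the first (and only) layer of the artifact.
skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq -r '.layers[0].mediaType'
# Expected output: application/vnd.oci.image.layer.v1.tar
```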
Future work

CRI-O internally manages the OCI artifacts like regular files. This provides the benefit of moving them around, removing them when they are no longer used, or having data other than seccomp profiles available. This enables future enhancements in CRI-O on top of OCI artifacts, but also allows thinking about stacking seccomp profiles as part of having multiple layers in an OCI artifact. The limitation that it only works for Unconfined workloads in the v1.30.x releases is also something CRI-O would like to address in the future. Simplifying the overall user experience without compromising security seems to be the key to a successful future for seccomp in container workloads.

The CRI-O maintainers will be happy to listen to any feedback or suggestions on the new feature! Thank you for reading this blog post, feel free to reach out to the maintainers via the Kubernetes Slack channel #crio or create an issue in the GitHub repository.

View the full article
-
One principle of GitOps is to have the desired state declarations be Versioned and Immutable, where Git repositories play an important role as the source of truth. But can you have an alternative to a Git repository for storing and deploying your Kubernetes manifests via GitOps? What if you could package your Kubernetes manifests into a container image instead? What if you could reuse the same authentication and authorization mechanisms as for your container images?

To answer these questions, an understanding of OCI registries and OCI artifacts is needed. Simply put, OCI registries are the registries typically used for container images, but they can be expanded to store other types of data (aka OCI artifacts) such as Helm charts, Kubernetes manifests, Kustomize overlays, scripts, etc. Using OCI registries and OCI artifacts provides you with the following advantages:

- Fewer tools to operate: a single artifact registry can store expanded data types apart from container images.
- Built-in release archival system: OCI registries give users two sets of URLs, mutable and immutable, namely tags and content-addressable digests (see the short example after this overview).
- Flourishing ecosystem: standardized and supported by dozens of providers, which helps users take advantage of new features and tools developed by the large Kubernetes community.

Given these benefits, and in addition to the support of files stored in Git repositories, we are thrilled to announce two new formats supported by Config Sync 1.13 to deploy OCI artifacts:

- Sync OCI artifacts from Artifact Registry
- Sync Helm charts from OCI registries

Config Sync is an open source tool that provides GitOps continuous delivery for Kubernetes clusters. The Open Container Initiative (OCI) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes. OCI artifacts give you the power of storing and distributing different types of data such as Kubernetes manifests, Helm charts, and Kustomize overlays, in addition to container images, via OCI registries.

Throughout this blog, you will see how you can leverage the two new formats (OCI artifacts and Helm charts) supported by Config Sync, by using:

- oras and helm to package and push OCI artifacts
- Artifact Registry as the OCI registry to store the OCI artifacts
- a GKE cluster to host the synced OCI artifacts
- Config Sync installed in that GKE cluster to sync the OCI artifacts
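To make the mutable vs. immutable reference distinction concrete, here is a small hedged example. The repository path and the digest value are placeholders for illustration, not values used later in this post.

```bash
# Placeholder repository path; substitute your own registry, project and repo.
ARTIFACT=us-east4-docker.pkg.dev/my-project/oci-artifacts/my-namespace-artifact

# A tag is a mutable pointer: pushing again with the same tag moves it.
oras pull "${ARTIFACT}:v1"

# A digest is content-addressable and immutable: it always resolves to the
# exact bytes that were pushed (the sha256 value here is only a placeholder).
oras pull "${ARTIFACT}@sha256:0000000000000000000000000000000000000000000000000000000000000000"
```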
Initial setup

First, you need a common setup for the two scenarios: configuring and securing the access from the GKE cluster running Config Sync to the Artifact Registry repository.

Initialize the Google Cloud project you will use throughout this blog:

```console
PROJECT=SET_YOUR_PROJECT_ID_HERE
gcloud config set project $PROJECT
```

Create a GKE cluster with Workload Identity, registered in a fleet to enable Config Management:

```console
CLUSTER_NAME=oci-artifacts-cluster
REGION=us-east4
gcloud services enable container.googleapis.com
gcloud container clusters create ${CLUSTER_NAME} \
    --workload-pool=${PROJECT}.svc.id.goog \
    --region ${REGION}
gcloud services enable gkehub.googleapis.com
gcloud container fleet memberships register ${CLUSTER_NAME} \
    --gke-cluster ${REGION}/${CLUSTER_NAME} \
    --enable-workload-identity
gcloud beta container fleet config-management enable
```

Install Config Sync in the GKE cluster:

```console
cat <<EOF > acm-config.yaml
applySpecVersion: 1
spec:
  configSync:
    enabled: true
EOF
gcloud beta container fleet config-management apply \
    --membership ${CLUSTER_NAME} \
    --config acm-config.yaml
```

Create an Artifact Registry repository to host OCI artifacts (--repository-format docker):

```console
CONTAINER_REGISTRY_NAME=oci-artifacts
gcloud services enable artifactregistry.googleapis.com
gcloud artifacts repositories create ${CONTAINER_REGISTRY_NAME} \
    --location ${REGION} \
    --repository-format docker
```

Create a dedicated Google Cloud service account with fine-grained access to that Artifact Registry repository via the roles/artifactregistry.reader role:

```console
GSA_NAME=oci-artifacts-reader
gcloud iam service-accounts create ${GSA_NAME} \
    --display-name ${GSA_NAME}
gcloud artifacts repositories add-iam-policy-binding ${CONTAINER_REGISTRY_NAME} \
    --location ${REGION} \
    --member "serviceAccount:${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
    --role roles/artifactregistry.reader
```

Allow Config Sync to synchronize resources for a specific RootSync:

```console
ROOT_SYNC_NAME=root-sync-oci
gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT}.svc.id.goog[config-management-system/root-reconciler-${ROOT_SYNC_NAME}]" \
    ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
```

Log in to Artifact Registry so you can push OCI artifacts to it in a later step:

```console
gcloud auth configure-docker ${REGION}-docker.pkg.dev
```
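Before moving on, it can be worth checking that the Config Sync components are actually running in the cluster. This verification step is not part of the original post and assumes the variables defined in the setup above.

```bash
# Fetch credentials for the new cluster so kubectl points at it.
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${REGION}

# Config Sync runs its controllers in the config-management-system namespace;
# they should be in the Running state before any RootSync is applied.
kubectl get pods -n config-management-system
```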
Build and sync an OCI artifact

Now that you have completed your setup, let's illustrate our first scenario, where you want to sync a Namespace resource as an OCI image.

Create a Namespace resource definition:

```console
cat <<EOF> test-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
EOF
```

Create an archive of that file:

```console
tar -cf test-namespace.tar test-namespace.yaml
```

Push that artifact to Artifact Registry. In this tutorial we use oras, but there are other tools that you can use, like crane:

```console
oras push \
    ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v1 \
    test-namespace.tar
```

Set up Config Sync to deploy this artifact from Artifact Registry:

```console
cat << EOF | kubectl apply -f -
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: ${ROOT_SYNC_NAME}
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  sourceType: oci
  oci:
    image: ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v1
    dir: .
    auth: gcpserviceaccount
    gcpServiceAccountEmail: ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
EOF
```

Check the status of the sync with the nomos tool:

```console
nomos status --contexts $(kubectl config current-context)
```

Verify that the Namespace test is synced:

```console
kubectl get ns test
```

And voilà! You just synced a Namespace resource as an OCI artifact with Config Sync.
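As a hedged next step that is not part of the original post, rolling out a change would follow the same pattern: publish a new immutable tag and point the RootSync at it. A short sketch, reusing the variables defined in the setup above:

```bash
# Hypothetical update: add a second Namespace and publish it as tag v2.
cat <<EOF > another-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test2
EOF

tar -cf manifests.tar test-namespace.yaml another-namespace.yaml
oras push \
    ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v2 \
    manifests.tar

# Point the existing RootSync at the new tag; Config Sync reconciles the rest.
NEW_IMAGE=${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v2
kubectl patch rootsync ${ROOT_SYNC_NAME} -n config-management-system --type merge \
    -p "{\"spec\":{\"oci\":{\"image\":\"${NEW_IMAGE}\"}}}"
```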
Build and sync a Helm chart

Now, let's see how you could deploy a Helm chart hosted in a private Artifact Registry repository.

Create a simple Helm chart:

```console
helm create test-chart
```

Package the Helm chart:

```console
helm package test-chart --version 0.1.0
```

Push the chart to Artifact Registry:

```console
helm push \
    test-chart-0.1.0.tgz \
    oci://${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}
```

Set up Config Sync to deploy this Helm chart from Artifact Registry:

```console
cat << EOF | kubectl apply -f -
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: ${ROOT_SYNC_NAME}
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  sourceType: helm
  helm:
    repo: oci://${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}
    chart: test-chart
    version: 0.1.0
    releaseName: test-chart
    namespace: default
    auth: gcpserviceaccount
    gcpServiceAccountEmail: ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
EOF
```

Check the status of the sync with the nomos tool:

```console
nomos status --contexts $(kubectl config current-context)
```

Verify that the chart's resources are synced into the default Namespace:

```console
kubectl get all -n default
```

And voilà! You just synced a Helm chart with Config Sync.

Towards more scalability and security

In this blog, you synced both an OCI artifact and a Helm chart with Config Sync. OCI registries and OCI artifacts are the new kids on the block, and they can also work alongside the Git option depending on your needs and use cases. In one such pattern, Git still acts as the source of truth for the declarative configs, in addition to the well-established developer workflow it provides: pull requests, code reviews, branch strategies, etc. Continuous integration pipelines, triggered by pull requests or merges, run tests against the declarative configs and eventually push the OCI artifacts to an OCI registry (a small sketch of such a pipeline step follows this section). Finally, the continuous reconciliation of GitOps takes it from there and reconciles the desired state, now stored in an OCI registry, with the actual state running in Kubernetes.

Your Kubernetes manifests as OCI artifacts are now seen just like any container images by your Kubernetes clusters, as they are pulled from OCI registries. This continuous reconciliation from OCI registries, without interacting with Git, has a lot of benefits in terms of scalability, performance, and security, as you are able to configure very fine-grained access to your OCI artifacts.

To get started, check out the Sync OCI artifacts from Artifact Registry and Sync Helm charts from OCI registries features today. You can also find another tutorial showing how to package and push a Helm chart to GitHub Container Registry with GitHub Actions, and then deploy that Helm chart with Config Sync.
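The CI step described in the section above could look roughly like the following in a pipeline job. The registry path, tagging scheme, and validation choice are assumptions for illustration; they are not taken from the Config Sync documentation or this post.

```bash
# Hypothetical CI job: validate manifests, package them, and push an OCI
# artifact tagged with the commit that triggered the pipeline.
set -euo pipefail

REGISTRY=us-east4-docker.pkg.dev/my-project/oci-artifacts   # placeholder path
GIT_SHA=$(git rev-parse --short HEAD)

# Basic validation of the manifests (assumes kubectl can reach a cluster for
# client-side dry runs; swap in your preferred offline validator if it cannot).
for manifest in manifests/*.yaml; do
  kubectl apply --dry-run=client -f "${manifest}"
done

# Package and push; one immutable tag per commit makes rollbacks trivial.
tar -cf manifests.tar -C manifests .
oras push "${REGISTRY}/my-app-config:${GIT_SHA}" manifests.tar
```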
Attending KubeCon + CloudNativeCon North America 2022 in October? Come check out our session "Build and Deploy Cloud Native (OCI) Artifacts, the GitOps Way" during the GitOpsCon North America 2022 co-located event on October 25th. Hope to see you there!

Config Sync is open source. We are open to contributions and bug fixes if you want to get involved in the development of Config Sync. You can also use the repository to track ongoing work, or build from source to try out bleeding-edge functionality.

Related article: Google Cloud at KubeCon EU: New projects, updated services, and how to connect. Engage with experts and learn more about Google Kubernetes Engine at KubeCon EU.
Tagged with: helm, helm charts (and 1 more)
-
I'm excited that Oracle Cloud Infrastructure (OCI) is sponsoring and participating in this year's KubeCon + CloudNativeCon EU in Valencia, Spain. There, the OCI team can answer any questions you have about modernizing your applications and demo how customers can use cloud services to build and manage modern applications in the most cost-performant way. […]

The post OCI at KubeCon + CloudNativeCon Europe appeared first on DevOps.com.

View the full article
-