Showing results for tags 'canonical kubernetes'.
Found 2 results

  1. Kubernetes revolutionised container orchestration, allowing faster and more reliable application deployment and management. But even though it transformed the world of DevOps, it introduced new challenges around security maintenance, networking and application lifecycle management. Canonical has a long history of providing production-grade Kubernetes distributions, which gave us great insights into Kubernetes' challenges and the unique experience of delivering K8s that matches the expectations of both developers and operations teams.

Unsurprisingly, there is a world of difference between them. Developers need a quick and reproducible way to set up an application environment on their workstations. Operations teams with clusters powering the edge need lightweight, high-availability setups with reliable upgrades. Cloud installations need intelligent cluster lifecycle automation to ensure applications can be integrated with each other and with the underlying infrastructure.

We provide two distributions, Charmed Kubernetes and MicroK8s, to meet those different expectations. Charmed Kubernetes wraps upstream K8s with software operators to provide lifecycle management and automation for large and complex environments. It is also the best choice if the Kubernetes cluster has to integrate with custom storage, networking or GPU components. MicroK8s has a thriving community of users; it is a production-grade, ZeroOps solution that powers laptops and edge environments. It is the simplest way to get Kubernetes anywhere and focus on software product development instead of infrastructure routines and operations.

After providing Kubernetes distributions for over seven years, we decided to consolidate our experience into a new distribution that combines the best of both worlds: ZeroOps for small clusters and intelligent automation for larger production environments that also want to benefit from the latest community innovations. Canonical Kubernetes will be our third distribution and an excellent foundation for future MicroK8s and Charmed Kubernetes releases. You can find its beta in our Snap Store under the simple name k8s. We based it on the latest upstream Kubernetes 1.30 beta, which officially came out on 12 March. It will be a CNCF-conformant distribution with an enhanced security posture and best-in-class open source components for the most demanding user needs: network, DNS, metrics server, local storage, ingress, gateway, and load balancer.

ZeroOps with the most essential features built-in

Canonical Kubernetes is easy to install and easy to maintain. Like MicroK8s, Canonical Kubernetes is installed as a snap, giving developers a great installation experience and advanced security features such as automated patch upgrades. Adding new nodes to your cluster comes with minimal hassle (a join sketch follows below), and it also provides a quick way to set up high availability. You need two commands to get a single-node cluster: one for installation and another for cluster bootstrap. You can try it out now on your console by installing the k8s snap from the beta channel:

    sudo snap install k8s --channel=1.30-classic/beta --classic
    sudo k8s bootstrap

If you look at the status of your cluster just after bootstrap – with the help of the k8s status command – you might immediately spot that network, dns, and metrics-server are already running. In addition to those three, Canonical Kubernetes also provides local-storage, ingress, gateway, and load-balancer, which you can easily enable.
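A quick sketch of what that looks like, assuming the snap's enable subcommand for the built-in functionalities listed above (exact names and output may differ in the beta):

    sudo k8s status                  # network, dns and metrics-server report as running
    sudo k8s enable local-storage    # switch on the built-in local storage
    sudo k8s enable ingress          # switch on the built-in ingress
    sudo k8s status                  # confirm the newly enabled features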
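As for adding nodes, a hedged sketch of the join workflow (command names follow the Canonical Kubernetes documentation and may change before GA):

    # On an existing control-plane node, generate a token for the new machine:
    sudo k8s get-join-token worker-1 --worker
    # On the new machine, install the snap and join using that token:
    sudo snap install k8s --channel=1.30-classic/beta --classic
    sudo k8s join-cluster <token-from-previous-step>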
Under the hood, these built-in features are powered by Cilium, CoreDNS, OpenEBS, and Metrics Server. We bundle them as built-in features to ensure tight integration and a seamless experience. We want to emphasise standard Kubernetes APIs and abstractions to minimise disruption during upgrades while enabling the platform to evolve. All our built-in features come with default configurations that make sense for the most popular use cases, but you can easily change them to suit your needs.

Same Kubernetes for developer workstations, edge, cloud and data centres

Typical application development flows start on the developer workstation and go through CI/CD pipelines to end up in the production environment. These software delivery stages, spanning various environments, should be closely aligned to enhance the developer experience and avoid infrastructure configuration surprises as your software progresses through the pipeline. When done right, you can deploy applications faster. You also get better security assurance, as everyone can use the same K8s binary offered by the same vendor across the entire infrastructure software stack.

When you scale up from the workstation to a production environment, you will inevitably be exposed to a different class of problems inherent to large-scale infrastructure. For instance, managing and upgrading cluster nodes becomes complicated and time-consuming as the number of nodes and applications grows. To provide the smooth automation administrators need, we offer Kubernetes lifecycle management through Juju, Canonical's open source orchestration engine for software operators. If you already have Juju installed on your machine, a Canonical Kubernetes cluster is only a single command away:

    juju deploy k8s --channel edge

By letting the Juju charm automate your lifecycle management, you can benefit from its rich integration ecosystem, including the Canonical Observability Stack.
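Once the charm is deployed, day-2 operations go through standard Juju commands; a brief sketch, with the application name taken from the deploy command above:

    juju status k8s          # watch the deployment settle
    juju add-unit k8s -n 2   # grow the cluster to three units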
Enhanced security posture

Security is critical to any Kubernetes cluster, and we have addressed it from the beginning. Canonical Kubernetes 1.30 installs as a snap with the classic confinement level, enabling automatic patch upgrades to protect your infrastructure against known vulnerabilities. In the future, Canonical Kubernetes will be shipped as a strict snap, which means it will run in complete isolation with minimal access to the underlying system's resources. Additionally, Canonical Kubernetes will comply with security standards like FIPS, CIS and DISA-STIG.

Critical functionalities we have built into Canonical Kubernetes, such as networking or DNS, are shipped as secure container images maintained by our team. Those images are built with Ubuntu as their base OS and benefit from the same security commitments we make on the distribution. While it is necessary to contain core Kubernetes processes, we must also ensure that user- or operator-provided workloads running on top get a secure, adequately controlled environment. Future versions of Canonical Kubernetes will provide AppArmor profiles for containers that do not inherit the enhanced features of the underlying container runtime. We will also work on creating an allowlist of kernel modules that can be loaded using Kubernetes DaemonSets. It will contain a default list of the most popular modules, such as the GPU modules needed by AI workloads. Operators will be able to edit the allowlist to suit their needs.

Try out Canonical Kubernetes 1.30 beta

We would love for you to try all the latest features in upstream Kubernetes through our beta. Get started by visiting http://documentation.ubuntu.com/canonical-kubernetes

Besides getting a taste of the features I outlined above, you'll be able to try exciting changes that will soon be included in the upcoming upstream GA release on 17 April 2024. Among others, CEL for admission controls will become stable, and the drop-in directory for kubelet configuration files will go to the beta stage. Additionally, contextual logging and CRDValidationRatcheting will graduate to beta and be enabled by default. There are also new metrics, such as image_pull_duration_seconds, which can tell you how much time the node spent waiting for an image.

We want Canonical Kubernetes to be a great K8s for everyone, from developers to large-scale cluster administrators. Try it out and let us know what you think. We would love your feedback! You can find contact information on our community page. We'll also be available at KubeCon in Paris, at booth E25 – if you are there, come and say hi.

View the full article
  2. A new upstream Kubernetes release, 1.29, is generally available, with significant new features and bugfixes. Canonical closely follows upstream development, harmonising our releases to deliver timely and up-to-date enhancements backed by our commitment to security and support – which means that MicroK8s 1.29 is now generally available as well, and Charmed Kubernetes 1.29 will join shortly.

What's new in Canonical Kubernetes 1.29

Canonical Kubernetes distributions, MicroK8s and Charmed Kubernetes, provide all the features available in upstream Kubernetes 1.29. We've also added a number of new capabilities. For the complete list of changes and enhancements, please refer to the MicroK8s and Charmed Kubernetes release notes.

MicroK8s 1.29 highlights

AI/ML at scale with NVIDIA integrations

We have included the NVIDIA GPU and network operators in the new nvidia addon. The NVIDIA GPU Operator automates the management of all NVIDIA software components needed to provision GPUs, such as kernel drivers or the NVIDIA Container Toolkit. The Network Operator works in tandem with the GPU Operator and enables GPUDirect RDMA on compatible systems. For more information, please read the following blog post: Canonical Kubernetes enhances AI/ML development capabilities with NVIDIA integrations
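Turning this on is a single addon call; a hedged sketch (addon name from the release notes, and the namespaces the operators create may differ by version):

    sudo microk8s enable nvidia
    # Watch the operators roll out their components (drivers, toolkit, plugins):
    sudo microk8s kubectl get pods -A | grep -i nvidia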
Usability and performance improvements for Dqlite

Much of the recent focus of the MicroK8s team has been on improving the stability and efficiency of the default datastore shipped with our Kubernetes distribution. Among others, you can find the following changes in this MicroK8s version:

  • Dqlite node role reassignment in case of failure domain availability/changes
  • Optional admission control to protect the performance of the datastore
  • Handling the out-of-disk-storage case
  • Performance improvements related to static linking of Dqlite and SQL query preparation

Growing community and partner ecosystem

We welcome the addition of three new addons offered by Canonical partners and community members:

  • Falco: the cloud-native security tool that employs custom rules on kernel events to provide real-time alerts
  • CloudNativePG Operator: leveraging cloud native Postgres, EDB Postgres for Kubernetes adds speed, efficiency and protection for your infrastructure modernisation
  • ngrok: an Ingress Controller which instantly adds connectivity, load balancing, authentication, and observability to your services

Charmed Kubernetes 1.29 highlights

Charmed Operator Framework (Ops)

We're pleased to announce the completion of the Charmed Kubernetes refactor that began earlier this year. Charms have moved from the reactive and pod-spec styles to the Ops framework in order to enable access to common charm libraries, better Juju support, and a more consistent charming experience for community engagement.

Out-of-the-box monitoring enhancements

The Canonical Observability Stack (COS) gathers, processes, visualises and alerts on telemetry signals generated by workloads running both within and outside of Juju. COS provides an out-of-the-box observability suite relying on best-in-class open source observability tools. This release expands our COS integration so that it includes rich monitoring for the control plane and worker node components of Charmed Kubernetes.

Container networking enhancements

Kube-OVN 1.12

Charmed Kubernetes continues its commitment to advanced container networking with support for the Kube-OVN CNI. This release includes a Kube-OVN upgrade to v1.12. You can find more information about features and fixes in the upstream release notes.

Tigera Calico Enterprise

The calico-enterprise charm debuts as a new container networking option for Charmed Kubernetes in this release. This charm brings advanced Calico networking and network policy support and is offered as an alternative to the default Calico CNI.

Component upgrades and fixes

For a full list of component upgrades, features, and bug fixes in the Charmed Kubernetes 1.29 release, go to the Launchpad milestone page.

Notable changes in upstream Kubernetes 1.29

You can read the full changelog for details regarding features, deprecations and bug fixes included in the 1.29 release. Here are the most significant changes; hedged configuration sketches for several of them follow at the end of this article.

Sidecar containers go beta and are enabled by default

The hugely popular pattern of running sidecar containers goes beta and is slowly but surely making its way to first-class citizenship. With explicitly defined sidecar containers, you can, among other things, start your log-grabbing sidecar before your main application or init container. No need to worry about service mesh availability on app startup, or about pod termination for your Job – sidecar containers have got you covered. This feature enters the beta stage and, starting with 1.29, is enabled by default.

Common Expression Language (CEL) for Admission Control improvements

Admission validation policies use the Common Expression Language (CEL) to declare admission policies for Kubernetes resources through simple expressions (for example, do not allow creating pods without a required label, or pods with privileged host path mounts). They are highly configurable and enable policy authors to define policies that can be parameterised and scoped to resources as needed by cluster administrators. CEL for Admission Control has been available since 1.26. It is disabled by default and available behind a ValidatingAdmissionPolicy feature flag.

CRI-full Container and Pod stats go to alpha

The monitoring of workloads is one of the most crucial aspects of running your cluster in production. After all, how else can you know what your containers' and pods' resource usage is? Right now, this information comes from both the CRI and cAdvisor, which leads to duplicated work and a sometimes unclear origin of metrics. The goal of this enhancement is to extend the CRI API and its implementations so they can provide all the metrics needed for proper observability of containers and pods. You can enable this feature with the PodAndContainerStatsFromCRI flag.

Improvements for supporting user namespaces in pods

Currently, the container process user ID (UID) and group ID (GID) are the same inside the pod and on the host. As a result, this creates a particular security challenge when such a process is able to break out of the pod onto the host – it still uses the same UID/GID. If there is any other container running with the same UID/GID, a rogue process could interfere with it. In the worst-case scenario, such a process running as root inside the pod would still run as root on the host. This enhancement proposes supporting user namespaces, which enable running containers inside pods with different user and group IDs than on the host. If you would like to enable user namespaces support, it is still alpha in K8s 1.29 and is available behind a UserNamespacesSupport feature flag.
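For the sidecar pattern described above, a minimal manifest sketch (names and images are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-log-sidecar
    spec:
      initContainers:
        - name: log-shipper
          image: fluent/fluent-bit:2.2   # placeholder log-collection image
          restartPolicy: Always          # this field marks the init container as a sidecar
      containers:
        - name: app
          image: nginx:1.25              # placeholder application image

The log-shipper starts before the app container and keeps running for the pod's whole lifetime, instead of having to complete first like an ordinary init container.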
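For CEL admission control, a sketch of a ValidatingAdmissionPolicy with its binding, assuming the ValidatingAdmissionPolicy feature flag is enabled (the policy name and the required label are illustrative):

    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: ValidatingAdmissionPolicy
    metadata:
      name: require-team-label
    spec:
      failurePolicy: Fail
      matchConstraints:
        resourceRules:
          - apiGroups: ["apps"]
            apiVersions: ["v1"]
            operations: ["CREATE", "UPDATE"]
            resources: ["deployments"]
      validations:
        - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
          message: "every Deployment must carry a 'team' label"
    ---
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: ValidatingAdmissionPolicyBinding
    metadata:
      name: require-team-label-binding
    spec:
      policyName: require-team-label
      validationActions: ["Deny"]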
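The CRI stats and user namespaces features are both alpha and sit behind feature gates; enabling them on a kubelet command line might look like this (a sketch only, not a supported production setup):

    # UserNamespacesSupport typically also needs enabling on the kube-apiserver.
    kubelet --feature-gates=PodAndContainerStatsFromCRI=true,UserNamespacesSupport=true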
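With the gate enabled, opting a pod out of the host's user namespace is a single field (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: userns-demo
    spec:
      hostUsers: false            # give the pod its own user namespace
      containers:
        - name: app
          image: busybox:1.36     # placeholder image
          command: ["sleep", "3600"]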
Learn more about Canonical Kubernetes or talk to our team:

  • ubuntu.com/kubernetes
  • microk8s.io
  • #canonical-kubernetes and #microk8s on the Kubernetes Slack
  • Discourse
  • Matrix
  • Twitter – @canonical, @ubuntu

View the full article