Showing results for tags 'gitops'.

  1. What is GitOps?

     GitOps is a modern approach to managing infrastructure and applications by treating their desired state as code stored in Git repositories. It leverages Git’s strengths of version control, collaboration, and auditability to ensure consistent and reliable deployments.

     Key Aspects of GitOps:
     • Declarative configuration: You define the desired state of your system in manifests (e.g., YAML files) within a Git repository.
     • Git as the single source of truth: All configuration changes happen through pull requests in Git, ensuring collaboration and traceability.
     • Automated reconciliation: Tools like ArgoCD continuously compare the desired state with the actual state and automatically reconcile any discrepancies.
     • Self-healing infrastructure: If the system deviates from the desired state, tools automatically bring it back into compliance.

     Top 10 Use Cases of GitOps:
     • Simplified deployments: Streamline deployments and configuration changes through declarative manifests and automated workflows.
     • Consistent configurations: Ensure consistency across development, testing, and production environments by managing the desired state centrally.
     • Rollback capabilities: Easily revert to previous deployments if issues arise by leveraging Git’s version control features.
     • Rollout strategies: Implement safe and controlled deployments with canary deployments, blue-green deployments, and rollbacks.
     • Multi-cluster management: Manage applications across multiple Kubernetes clusters centrally for large-scale deployments.
     • Self-service deployments: Empower developers with self-service capabilities to deploy and manage their applications through Git.
     • CI/CD integration: Integrate GitOps with your CI/CD pipeline for automated deployments triggered by code changes.
     • Enhanced security: Utilize Git’s access control and auditing features for secure configuration management.
     • Declarative infrastructure management: Manage infrastructure alongside application configurations using GitOps principles.
     • Customizable workflows: Tailor GitOps tools to your specific needs through plugins and extensions for advanced functionality.

     Benefits of GitOps:
     • Efficiency: Streamlined deployments and fewer manual configuration errors.
     • Reliability: Consistent configurations and self-healing capabilities.
     • Flexibility: Customizable deployments and workflows that adapt to diverse needs.
     • Security: Improved security through Git’s access control and auditability.
     • Scalability: Supports large-scale deployments and multi-cluster management.

     What are the features of GitOps?

     GitOps isn’t software itself but a methodology, implemented by leveraging tools that follow its principles. The exact features therefore depend on the specific tool you choose, but here are some commonly associated ones:

     Core Features:
     • Declarative configuration: Define the desired state of your system (infrastructure, applications) in manifests (e.g., YAML files) stored in Git repositories.
     • Git as the single source of truth: Manage all configuration changes through Git, ensuring collaboration, version control, and auditability.
     • Automated reconciliation: Tools like ArgoCD, Flux, or Jenkins X continuously compare the desired state with the actual state and automatically apply the changes needed to reconcile any discrepancies.
     • Self-healing infrastructure: If the system deviates from the desired state, the tools automatically bring it back into compliance through automated deployments or configuration updates.

     Advanced Features:
     • Rollout strategies: Implement safe and controlled deployments with canary deployments, blue-green deployments, and rollbacks.
     • Multi-cluster management: Manage applications and infrastructure across multiple clusters for large-scale deployments.
     • Self-service deployments: Empower developers to deploy and manage their applications through Git-based approvals and workflows.
     • CI/CD integration: Integrate GitOps with your CI/CD pipeline for automated deployments triggered by code changes.
     • Infrastructure as Code (IaC): Manage both application configurations and infrastructure with GitOps principles for unified configuration.
     • Security: Leverage Git’s access control and auditing features for secure configuration management and an improved security posture.
     • Monitoring and visualization: Tools often offer dashboards and integrations for monitoring deployments, infrastructure health, and audit logs.

     Additional Noteworthy Features:
     • Extensibility and flexibility: Some tools offer plugins and extensions to customize workflows and adapt GitOps to specific needs.
     • Community-driven: Many GitOps tools are open source with active communities, promoting continuous development and support.

     This is not an exhaustive list, and features will vary depending on the chosen tool and its version.

     How does GitOps work, and what is its architecture?

     GitOps operates on a core principle: treat the desired state of your infrastructure and applications as code stored in Git repositories. Tools then automate the reconciliation of this desired state with the actual state of your environment, ensuring consistency and reliability. Here’s a deeper dive into its workings and architecture:

     Workflow:
     1. Desired State Change: You modify configuration files (manifests) defining the desired state of your system within a Git repository.
     2. Trigger and Detection: GitOps tools like ArgoCD, Flux, or Jenkins X monitor the repository for changes and detect new commits.
     3. Manifest Parsing: The tool parses the manifests to understand the desired state of your infrastructure and applications.
     4. State Comparison: The tool compares the desired state with the actual state of your system, often by querying Kubernetes clusters or cloud providers.
     5. Discrepancy Reconciliation: If differences exist, the tool triggers actions to reconcile the states. This might involve deploying new resources, updating existing ones, or deleting outdated configurations.
     6. Automated Actions: Deployments, configuration changes, and even rollbacks can be automated based on pre-defined rules and workflows.
     7. Continuous Monitoring: The tool continuously monitors both the Git repository and the system state, ensuring ongoing reconciliation and self-healing.

     Architecture:
     • Git Repository: Serves as the single source of truth for desired-state definitions, ensuring version control and collaboration.
     • GitOps Tool (ArgoCD, Flux, Jenkins X): Acts as the central hub, monitoring Git repositories, parsing manifests, comparing states, and triggering reconciliation actions.
     • Kubernetes Cluster (or other target environment): Where your infrastructure and applications reside, managed by the GitOps tool to match the desired state.
     • Additional Components (optional): Depending on the tool and setup, there may be external Git servers, API servers, controllers, or integrations with CI/CD pipelines.

     Key Concepts:
     • Declarative Configuration: You describe what you want, not how to achieve it, relying on the tool to handle the execution.
     • Reconciliation Loop: The core mechanism that continuously checks for and corrects any discrepancies between desired and actual state.
     • Self-Healing: Deviations from the desired state automatically trigger corrective actions for robust management.
     • Version Control and Auditability: Git provides version control and audit trails for all configuration changes.

     How to Install GitOps?

     GitOps itself isn’t software you install; it is a methodology implemented with specific tools. The installation process therefore depends on which tool you choose.
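     Before looking at installation options, the reconciliation loop described in the workflow above can be sketched in a few lines of Python. This is a hypothetical illustration of the compare-and-converge idea, not code from ArgoCD or Flux; the resource names and specs are invented for the example.

```python
# Hypothetical sketch of a GitOps reconciliation loop.
# "desired" mimics manifests read from Git; "actual" mimics the live cluster state.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual state and return the corrective actions."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")        # declared in Git, missing live
        elif actual[name] != spec:
            actions.append(f"update {name}")        # live state has drifted
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")        # live but no longer declared
    return actions

desired = {"nginx": {"replicas": 3}, "svc": {"port": 80}}
actual = {"nginx": {"replicas": 2}, "old-job": {"done": True}}
print(reconcile(desired, actual))  # → ['update nginx', 'create svc', 'delete old-job']
```

     Real controllers run this comparison continuously, which is what gives GitOps its self-healing property: any manual drift in the cluster shows up as a discrepancy and is corrected on the next loop.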
     Here’s a general overview for some popular options:

     1. ArgoCD
     • Self-hosting:
       • Manual installation: Requires technical expertise and involves installing the various ArgoCD components on your Kubernetes cluster.
       • Helm charts: A simpler method using pre-configured Helm charts, but with less flexibility.
     • Managed platforms:
       • Hosted ArgoCD services: Easy to set up, but typically require a subscription.
       • Managed Kubernetes platforms: Cloud providers and third-party services offer managed ArgoCD installations within their platforms. Explore options from AWS, Azure, Google Cloud, DigitalOcean, etc.

     2. Flux
     • Self-hosting:
       • Manual installation: Like ArgoCD, requires technical expertise for manual installation on your Kubernetes cluster.
       • Helm charts: Available for simpler installation.
     • Managed platforms: Flux is primarily focused on self-hosting, but some cloud providers offer limited managed options or integrations.

     3. Jenkins X
     • Self-hosting: Requires installing Jenkins X on your Kubernetes cluster.
     • Managed platforms: Limited options are available due to its self-hosting focus. Explore community resources or cloud provider offerings for potential integrations.

     Choosing the right tool:
     • Technical expertise: Self-hosting requires more technical knowledge, while managed platforms are easier to set up.
     • Control and flexibility: Self-hosting offers more control and customization, while managed platforms prioritize ease of use.
     • Cost: Self-hosting can be more cost-effective in certain scenarios, while managed platforms typically involve subscription fees.
     • Infrastructure: Choose a tool compatible with your existing Kubernetes cluster and infrastructure setup.

     Basic Tutorials of GitOps: Getting Started

     GitOps is a methodology for deploying and managing infrastructure and applications declaratively, using Git as the source of truth.
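     As a concrete illustration of the Helm-chart route mentioned above, an install might look like the following. The commands require a running Kubernetes cluster with helm and the flux CLI installed; the repository URL and chart names match the projects’ public documentation at the time of writing, but verify them against the current ArgoCD and Flux docs, and note that the owner/repository values are placeholders.

```shell
# Install Argo CD from its community Helm chart into a dedicated namespace.
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd --namespace argocd --create-namespace

# Alternatively, bootstrap Flux against a GitHub repository.
# <owner> and <repo> are placeholders for your organization and repository.
flux bootstrap github --owner=<owner> --repository=<repo> --path=clusters/my-cluster
```

     The Flux bootstrap step is notable because it is itself GitOps: it commits the Flux controllers’ own manifests to the repository, so even the tool’s installation is version-controlled.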
     Here are some step-by-step basic tutorials to get you started.

     Prerequisites:
     • A basic understanding of Git and Kubernetes
     • A Kubernetes cluster (local or managed)
     • kubectl configured to access your cluster
     • A chosen GitOps tool (e.g., ArgoCD, Flux, Jenkins X)

     Step 1: Simple Web Application Deployment with ArgoCD
     1. Set up your Git repository: Create a new Git repository to store your application manifests (e.g., deployments, services). Add manifest files for your desired web application (e.g., a simple Nginx deployment).
     2. Install and configure ArgoCD: Follow the instructions in the tool’s documentation (e.g., the Helm chart for ArgoCD). Configure ArgoCD to connect to your Git repository.
     3. Connect your application to ArgoCD: In the ArgoCD web UI, add a new application and specify the URL of your Git repository. Configure other settings like namespace and environment (optional).
     4. Sync your application: Click the “Sync” button in ArgoCD to deploy your application based on the manifests in your Git repository. Verify that your web application is running in your Kubernetes cluster (e.g., using kubectl get pods).

     Step 2: Infrastructure Management with Flux
     1. Set up your Git repository: Create a Git repository to store your infrastructure configuration files (e.g., Kubernetes manifests, Helm charts). Add manifest files for your desired infrastructure resources (e.g., deployments, services, namespaces).
     2. Install and configure Flux: Follow the instructions for your chosen platform (e.g., the Helm chart for Flux on Kubernetes). Configure Flux to connect to your Git repository and target your Kubernetes cluster.
     3. Deploy your infrastructure: Flux automatically monitors your Git repository and applies changes to your Kubernetes cluster whenever you commit new configurations. Monitor the Flux logs and cluster resources to verify successful deployment.

     Note: These are simplified examples, and the specific steps may vary depending on your chosen tools and configurations.
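     To make the ArgoCD tutorial concrete, here is a sketch of what the repository contents and the ArgoCD Application object might look like. The repository URL, names, and namespaces are illustrative placeholders, not real endpoints; the field names follow the publicly documented ArgoCD Application CRD.

```yaml
# A minimal Nginx Deployment that would live in the Git repo (e.g., under manifests/):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# An Argo CD Application pointing at that repository (repoURL is a placeholder):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-demo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # remove resources that were deleted from Git
      selfHeal: true  # revert manual drift in the cluster
```

     With syncPolicy.automated set, the “Sync” button in Step 1 becomes unnecessary: ArgoCD applies new commits on its own, which is the fully automated GitOps workflow described earlier.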
     Consider security aspects carefully when implementing GitOps in production environments.

     The post What is GitOps and use cases of GitOps? appeared first on DevOpsSchool.com. View the full article
  2. GitOps pioneer Weaveworks unravels after funding fabric frays. View the full article
  3. Quiz #17 was: You’re working in a GitOps environment where developers use Helm charts to manage Kubernetes deployments. One day, a developer makes a change to a Helm chart to adjust the replica count of a deployment. However, after the change is applied, you notice that the deployment’s pod template has also been unexpectedly modified, […] View the full article
  4. Software development methodologies and practices are constantly shaping the way teams collaborate and deliver value. Two terms that have gained significant traction in recent years are GitOps and DevOps. While both are geared toward enhancing collaboration and efficiency in software development, they differ in their approaches and key principles. In this article, we'll unravel these major differences. Let’s dive into our discussion of DevOps vs. GitOps. Understanding DevOps DevOps, short for Development and Operations, brings together development, IT operations, and QA teams to streamline and automate the software delivery process. DevOps services aim to break down silos, foster continuous communication, and automate manual processes to achieve faster and more reliable releases. View the full article
  5. Achieving agility, scalability, efficiency, and security is paramount in modern software development. While several cultural methodologies, tools, and approaches are sought after to achieve the above-mentioned, GitOps, Kubernetes, and Platform Engineering are keystones of this transformation. In this comprehensive guide, you will learn what GitOps, Kubernetes, and Platform Engineering are, unraveling their significance, working principles, and what makes this trio the powerhouse of modern DevOps. Revolutionizing Infrastructure Management With GitOps Understanding GitOps GitOps is a methodology that centers around the use of version control systems, with Git being the primary choice as the singular source of truth for both application code and infrastructure configurations. GitOps encourages the declaration of the desired state of applications and infrastructure within Git repositories. This approach makes it effortless to track changes, maintain version history, and foster seamless collaboration among team members. Furthermore, the use of pull requests and code reviews in GitOps ensures high code quality and security. Whenever changes are made to the Git repositories, automated processes ensure that the system's state remains aligned with the declared configuration. View the full article
  6. Most developers would be more familiar with DevOps and less familiar with GitOps despite using Git, the source of truth of GitOps, for managing repositories of the software development life cycle (SDLC). Hence, there is a need to study GitOps vs. DevOps to ascertain which is suitable for your software development, delivery, and deployment. While both are designed to improve software processes, the two concepts, GitOps and DevOps, are distinct in their approaches, guiding principles, and toolsets. Whether you are choosing GitOps or DevOps for your software processes, GitOps and DevOps are designed for various purposes and are best at their designated use cases. This article will guide you beyond the fundamental differences and similarities between GitOps vs. DevOps to provide a thorough understanding of how DevOps and GitOps methodology operate and what makes them uniquely valuable in modern software engineering. You would also learn how platform engineering portals like Atmosly make DevOps and GitOps better. View the full article
  7. In the dynamic world of software development, Developer Experience (DevEx or DX) is becoming increasingly vital. It's all about creating an environment where developers can thrive, blending ease of use, efficiency, and overall job satisfaction. Good DevEx is more than just a convenience; it's a strategic asset that leads to higher productivity, quicker development timelines, and better-quality software. By fine-tuning the tools and methods used by developers and fostering a supportive work culture, DevEx directly influences a company's ability to innovate and stay ahead in the competitive tech industry.

     At Weaveworks, we know how a poor developer experience can cause frustration, misalignment between Development and Ops teams, and duplication of work. This is why we have built Weave GitOps Enterprise: a state-of-the-art GitOps platform powered by Flux CD and Flagger. It empowers developers and platform operators to easily build and manage an internal development platform. In this blog, we round up all the important resources, assets, and videos on DX and Weave GitOps. From whitepapers to blogs and on-demand webinars, we’re confident you’ll learn something new here.

     Expand Your Knowledge with Our Detailed Whitepapers

     Why Self-Service is Key to Developer Productivity: Organizations can enable a great developer experience with a self-service platform that serves developers’ needs and eliminates causes of friction between Developers and Ops teams. You’ll learn the essential elements of good DX, the root causes of developer frustration, and how to leverage GitOps to implement a self-service developer platform. Download Whitepaper

     The GitOps Guide to Building & Managing Internal Platforms: The way to foster an outstanding developer experience is to leverage the internal platform approach, because it helps get resources to developers on demand and in a secure way. But adopting the platform approach alone is not enough.
     Organizations need to think strategically about how they will build and maintain the platform and, looking ahead, how to build a new development culture around the platform that makes software delivery seamless. This paper explores how the GitOps framework can be used for building internal platforms at scale and how it enables continuous application delivery. Download Whitepaper

     The Weave GitOps Blueprint for Exceptional DX

     Weave GitOps Enterprise is a continuous operations solution that makes it easy to deploy and manage Kubernetes clusters and applications at scale in any environment. It boasts a number of distinct features that elevate DX and empower platform teams to build self-service platforms. These include:

     • Platform Engineering & Self-Service Platforms: Platform engineering and internal developer platforms automate operational overhead and abstract infrastructure complexity for software teams. Learn more about the benefits of platform engineering and how Weave GitOps serves as an effective solution for building internal developer platforms.
     • Progressive Application Delivery: Progressive delivery is a modern set of practices for application deployment, including canary releasing, blue-green deployments, A/B testing, and feature flagging. These strategies allow for the gradual rollout of new application features to a subset of users, enabling developers to smoothly release new features with minimal risk. Learn More.
     • Security and Compliance Policy Guardrails: Weave Policy Engine enables platform operators to enforce security and compliance policies, ensuring resilient infrastructure and a consistent compliance framework across all Kubernetes deployments. Consequently, developers can concentrate on their primary tasks, free from the complexities of security and compliance. Read More.
     • Self-Service Templates: Self-Service Templates and Profiles empower developers to create reusable configurations for applications.
     By saving configurations, this feature not only saves time but also mitigates potential errors during the deployment process. Read More.
     • CI/CD Automation with GitOps: While CI tools like Selenium and Jenkins have gone a long way toward automating tests and builds, GitOps lets you automate every step of the CI/CD pipeline, including the deployment stage. Read More.

     View the full article
  8. Platform engineering is key to today's cloud-native world, as it ushers DevOps into a new era. The discipline revolves around an Internal Developer Platform (IDP), a unified toolset covering every operational need from coding to deployment. The focus is to lessen cognitive friction for developers while giving operations a structured way to manage technology. This blog post will cut through the jargon, highlight platform engineering's advantages over traditional methods, and offer actionable insights for integrating platform engineering into your tech strategy. Ready to dive in?

     Understanding Platform Engineering

     Platform engineering represents an advanced stage of DevOps methodologies, developed to manage the escalating intricacies of IT operations. Its core objective is to establish a resilient internal platform that streamlines workflows and furnishes a self-service interface for developers. Imagine an operational environment where developers can independently provision resources and deploy applications, eliminating the delays typically incurred through Ops team coordination. With platform engineering, this operational efficiency is no longer a distant aspiration but a milestone within reach.

     Conventional IT operations come with complications, including manual interventions, fragmented team structures, and scalability issues. Platform engineering alleviates all of these by bringing together elements of DevOps, Infrastructure as Code (IaC), and GitOps into a cohesive strategy. This amalgamated approach expedites the deployment process, minimizes errors, and enhances the agility of the software development lifecycle.

     The Payoffs of Platform Engineering

     Platform engineering does more than radically alleviate some of the most persistent issues in traditional IT. Here are a few major benefits it offers:

     • Self-service DX: Self-service Developer Experience (DX) helps mitigate some of the most persistent challenges in traditional IT settings.
     Automating tasks through a self-service developer portal eliminates delays and errors while promoting developer autonomy and agility.
     • Automation with GitOps: By weaving GitOps into its automation fabric, platform engineering dissolves team silos, fuels collaboration, and speeds up deployments.
     • Version control: A hallmark of the GitOps approach, version control is indispensable for modern declarative ops. Incorporating version control into platform engineering means every change is tracked in a centralized repository. This is crucial for rollback scenarios and provides a historical context for all changes, making it easier to understand the system’s state at any given time.
     • Consistency: Platform engineering standardizes deployment and operational procedures, ensuring that every team within the organization is aligned in its practices. This consistency reduces errors and streamlines troubleshooting, making it easier to maintain high-quality service delivery.
     • Scalability: Scalability in platform engineering is not just about handling more resources; it's about doing more with less and without manual intervention. As your organization grows, platform engineering allows you to adapt easily by automating the integration of new technologies and methodologies, thereby reducing operational overhead.
     • Auditability: Platform engineering, especially when integrated with GitOps, provides a transparent environment where every change is logged and can be traced back to an individual. This makes it easier to comply with regulatory requirements and internal policies.
     • Portability: The modular and containerized nature of platform engineering allows for high portability. Applications and services can easily be moved across different cloud environments or even back to an on-premises setup, providing flexibility in deployment choices.

     The cumulative impact of these benefits fosters a culture of operational excellence.
Teams gain agility and efficiency while their objectives increasingly align with the broader organizational goals. View the full article
  9. Open-source software (OSS) is now a staple in nearly every company's technology stack. Recent trends show a significant surge in enterprise OSS adoption; the 2023 State of Open Source Report reveals that 80% of organizations have ramped up their use of OSS. However, this increasing reliance on OSS is not without its challenges. According to the report, the top 3 support challenges for companies using open-source software are security related. Some of these challenges include maintaining security policies and compliance, overcoming skill shortages, keeping abreast of frequent updates, and addressing the gap in technical support. Figure: OSS Support Challenges - Source Each of these challenges can have costly consequences, be it time to develop, time to secure or upgrade a new patch, or worse, an exposed system for hackers. This blog outlines OSS's top 4 security risks and how Weave GitOps Assured can help organizations mitigate them. First things first, let’s explain what Weave GitOps Assured is. What is Weave GitOps Assured? Weave GitOps Assured is a comprehensive solution designed for managing Kubernetes workloads, continuous and progressive delivery and policy. The subscription is a blend of 24/7/365 enterprise support and GitOps open-source software, including Flux CD, Flagger, Observability UI, Terraform Controller, Flamingo (Flux CD subsystem for Argo), Weave Policy Agent, and VSCode Plugin. The solution offers features such as assured builds of Flux CD, a Flux CD GUI for full cluster and deployment observations, alerts, and notifications, and further Flux CD extensions like Policy agent and Terraform controller. Teams will also have access to a catalog of supported templates, tools, and plugins like GitOps for Visual Studio. Weave GitOps Assured helps fortify the security of the GitOps toolkit components so that companies can confidently use OSS without full support from Weaveworks and minimal community reliance. 
     Top 4 Security Risks for OSS

     Now let’s explore the top 4 security risks for open-source software and how Weave GitOps Assured can help fortify your products and services.

     Security Risk #1: Vulnerabilities in open-source dependencies

     A key risk is the existence of security flaws in an open-source project and its external dependencies, the other open-source elements it relies on. Such vulnerabilities have the potential to create severe problems in numerous major commercial software systems, as the unassuming Apache Log4j library demonstrated (CVE-2021-44228).

     Weave GitOps Assured Safeguards: Enhanced & Proactive Security. Weaveworks actively engages in the proactive remediation of CVEs and other security vulnerabilities. The Weave GitOps Assured package includes a certified distribution of Flux CD, plus extensions and patches. Customers receive timely alerts for necessary system updates, facilitating the maintenance of current builds with the latest patches and updates across the entire Flux CD ecosystem.

     Security Risk #2: License compliance risks

     The second significant security risk lies in the license compliance challenges associated with open-source applications and packages. Each comes with its own usage license, which can present compatibility issues: there could be a mismatch between the license and the intended application use, or conflicting licenses among different components of the application. This becomes particularly problematic if a component violates legal or regulatory standards that the company must adhere to.

     Weave GitOps Assured Safeguards: Centralized Policy Enforcement. With the Assured subscriptions, users can access the Weave Policy Engine, enabling automated security and compliance with organizational policies. This feature allows organizations to set and enforce policies governing access control, resource allocation, and other deployment aspects.
Such centralized governance is instrumental in ensuring compliance, reducing the risk of errors, and preventing security breaches. View the full article
  10. Welcome to our recap of KubeCon Chicago 2023, where GitOps and cloud-native tech took center stage. The Flux team attended this year’s KubeCon and presented several talks. From mastering multi-tenancy in Kubernetes with Flux to navigating the complexities of large-scale operations with Argo CD & Flux, there’s something for everyone. We’re also including links to the videos for you to watch and learn at your own pace.

      We are thrilled to share two monumental achievements from this year's KubeCon that underscore our commitment and influence in the cloud-native sphere. Firstly, our team was honored with the prestigious “Small but Mighty” award, a testament to our significant impact within the cloud-native community. This recognition is not just an award; it's a symbol of our dedication, innovation, and the tangible difference we've made in this dynamic field. We have more on that below. Equally exhilarating is the widespread adoption of GitOps, a revolutionary term and framework pioneered by Weaveworks. This year, GitOps has “crossed the chasm and cleared the adoption threshold.” This marks a pivotal moment in our journey: seeing our brainchild evolve into a cornerstone technology reshaping global cloud-native practices.

      GitOps Goes Mainstream

      Coinciding with KubeCon Chicago was the release of the CNCF's 2023 GitOps Microsurvey report, titled “Learning on the Job as GitOps Goes Mainstream.” The report provided insightful revelations about the state and adoption of GitOps in the cloud-native community. It confirmed that 100% of respondents plan to embrace GitOps within the next 6 months to 2 years. Additionally, 60% of respondents have been seriously using GitOps for over a year, demonstrating its increasing significance in operating cloud-native applications and Kubernetes environments.
The survey explored the reasons behind adopting this methodology, the benefits it offers, and the challenges faced by the community, indicating a significant shift towards GitOps in the cloud-native ecosystem. Explore the results in depth and read the commentary by Alexis Richardson, who coined the term ‘GitOps’. GitOps Automation with Flux CD Backstage Plugin Weaveworks and the Flux CD Backstage plugin were featured in the new Backstage Marketplace launch sponsored by Spotify. Created to enhance the developer experience within Backstage, this plugin offers several features for application developers and platform teams including understanding Flux CD resources, tracking Flux CD deployments, and viewing Flux CD resources. Learn all about it here. This latest integration is a testament to Flux CD’s large and growing GitOps ecosystem. Many people use the popular GitOps tool without knowing because it’s embedded in other tools such as GitLab, AKS and Azure Arc, Tanzu and EKS Anywhere. Flux CD KubeCon Sessions Orchestrating Multi-Tenancy Kubernetes Environments with Flux Speaker: Priyanka (Pinky) Ravi, Developer Experience Developer, Weaveworks In the realm of modern software development, where quick and seamless delivery is paramount, Flux CD has become a pivotal GitOps open-source toolkit within the Kubernetes framework, aimed at streamlining and controlling deployments. The talk presented by Priyanka Ravi at the conference delved into how Flux CD plays a vital role in managing and scaling multi-tenant Kubernetes environments, a key factor in handling intricate application networks. The presentation offered a thorough exploration of the functionalities of Flux CD, focusing on its solutions for the complexities associated with multi-tenant environments. The audience was enriched with practical examples and insights, gaining a comprehensive perspective on leveraging Flux CD for secure, efficient, and consistent delivery of software across varied tenant workloads. 
Harnessing Argo & Flux: The Quest to Scale Add-Ons Beyond 10k Clusters Speakers: Joaquin Rodriguez, Microsoft and Priyanka "Pinky" Ravi, Weaveworks Flamingo, the Flux Subsystem for Argo (FSA), integrates Flux CD with Argo CD, two prominent tools in the GitOps space. This integration allows for unified and efficient management of GitOps workflows. The session tackled the complexities of managing cluster add-ons across diverse environments such as private clouds, public clouds, and edge computing. It highlighted the challenges faced in large-scale operations, such as inefficiencies, increased costs, and security risks. The speakers delved into how leveraging Argo CD, Flux CD, and Flamingo can effectively scale operations beyond 10,000 clusters. This scaling addresses critical aspects like enhanced scale, efficient logging, and comprehensive monitoring. The discussion also covered how Flux and Flamingo play a role in the lifecycle management of cluster add-ons at this scale, and the integration of the Argo CD API into a cluster lifecycle management solution. View the full article
  11. It’s with great excitement that we announce the release of the Weave AI controllers for Large Language Models (LLMs). Weave AI controllers ease the adoption of open-source LLMs like Llama-2, Mistral, Zephyr, and Falcon in enterprise environments through GitOps automation. We made it simple and efficient for machine learning teams to deploy, manage, and fine-tune LLMs on any Kubernetes infrastructure while ensuring strong security and governance. Recent CNCF surveys show GitOps is the standard operating model for Kubernetes-based workloads. LLMs are on the rise but so are complexities in operating them The usage of LLMs has grown dramatically over the past several years, and so has their rate of adoption. According to this survey, nearly one in ten (8.3%) machine learning teams have already deployed an LLM application into production and nearly half (43.3%) have production deployment plans within the next 12 months. However, data privacy and the need to protect proprietary data are the largest roadblocks to production deployment. Many organizations are struggling with manual, ad-hoc methods of deploying downloaded LLMs that can lead to security risks and a lack of governance and compliance. Versioning and updates, as well as integration into existing infrastructure, are also hurdles that need to be overcome. Weaveworks addresses these challenges with GitOps workflows, Flux-based AI Controllers, and a signing and verification process that enhances security and compliance even in regulated industries. The Weave AI Controllers will be shipped beginning December 2023 with our standard subscriptions in Weave GitOps Assured and Enterprise. Streamline ops and free up development time Weave AI controllers were designed to address two main use cases: enabling a self-service platform of AI models, tools, and applications for developers, and facilitating fine-tuning of models with sensitive data for enterprise-grade efficiency, security, and reliability. 
Many ML teams have been exploring these models and their capabilities in development or on small-scale production clusters; with the Weave AI Controllers, teams can now move to enterprise scale with the necessary security guardrails and deploy to production quickly. A Kubernetes-based infrastructure and deployment pipeline using CRDs, YAML, and GitOps can easily remedy most deployment challenges while provisioning monitoring and rollbacks. Weave GitOps and our AI controllers leverage cloud-native technologies and the declarative management approach to build automated and streamlined workflows on-prem, in hybrid environments, or in the cloud. We want data scientists to focus on the application and stop worrying about infrastructure tasks. View the full article
  12. The factory-precaching-cli tool is a containerized Go binary publicly available in the Telco RAN tools container image. This blog shows how the factory-precaching-cli tool can drastically reduce the OpenShift installation time when using the Red Hat GitOps Zero Touch Provisioning (ZTP) workflow. This approach becomes very significant when dealing with low bandwidth networks, either when connecting to a public or disconnected registry. View the full article
  13. In the vibrant atmosphere of PromCon during the last week of September, attendees were treated to a plethora of exciting updates from the Prometheus universe. A significant highlight of the event was the unveiling of the Perses project. With its innovative approach of dashboard as code, GitOps, and Kubernetes native features, Perses promises a […]View the full article
  14.
    cdCon + GitOpsCon will foster collaboration, discussion, and knowledge sharing by bringing communities, vendors, and end users together to meet, discuss, collaborate, and start shaping the future of GitOps and CD. Details https://events.linuxfoundation.org/cdcon-gitopscon/ Event Schedule https://events.linuxfoundation.org/cdcon-gitopscon/program/schedule/
  15. On the west coast of Canada, you will find Vancouver, British Columbia, home to the Canucks, breathtaking scenery, and the Granville Walk of Fame. You will also find the Vancouver Convention Center, which offers some of the best views of any event space in the world. It was in this picturesque setting that the CD Foundation and OpenGitOps communities came together for a co-located event, cdCon + GitOpsCon 2023. These two communities are distinct but have aligned goals and visions for how DevOps needs to evolve. The CD Foundation acts as a host and incubator for open-source projects like Spinnaker and Jenkins, the newly graduated project Tekton, and the completely new cdEvents. They have a mission of defining continuous delivery best practices. OpenGitOps was started as a Cloud Native Computing Foundation working group with the goal of clearly defining a vendor-neutral, principle-led meaning of GitOps. View the full article
  16. GitOps has continued in its popularity and has become the standard way to manage Kubernetes cluster configuration and applications. Red Hat continues to see the widespread adoption of the GitOps methodology across our portfolio as customers look for ways to bring increased efficiency to their operations and development teams. View the full article
  17. At some point during the OpenShift deployment phase, a question about project onboarding comes up, "How can a new customer or tenant be onboarded so they can deploy their own workload onto the cluster(s)?" While there are different ways from a process perspective (ServiceNow, Jira, etc.), I focus on the Kubernetes objects that must be created on each cluster. In A Guide to GitOps and Argo CD with RBAC, I described setting up GitOps RBAC rules so tenants can work with their (and only their) projects. This article demonstrates another possibility for deploying per tenant and per cluster ... View the full article
  18. A GitOps tool like Argo CD can help centralize the automation, installation, and configuration of services onto multiple Kubernetes clusters. Rather than applying changes with a Kubernetes CLI or CI/CD, a GitOps workflow detects changes in version control and applies them automatically in the cluster. You can use a GitOps workflow to deploy and manage changes to a Consul cluster, while orchestrating the configuration of the Consul service mesh for peering, network policy, and gateways. This approach to managing your Consul cluster and configuration has two benefits. First, a GitOps tool handles the order of operations, automating cluster updates before configuration updates. Second, your Consul configuration uses version control as a source of truth that GitOps enforces across multiple Kubernetes clusters. This post demonstrates a GitOps workflow for deploying a Consul cluster, configuring its service mesh, and upgrading its server with Argo CD. Argo CD annotations for sync waves and resource hooks enable orchestration of Consul cluster deployment followed by service mesh configuration with Custom Resource Definitions (CRDs). Updating a Consul cluster on Kubernetes involves opening a pull request with changes to Helm chart values or CRDs and merging it. Argo CD synchronizes the configuration to match version control and handles the order of operations when applying the changes... View the full article
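The ordering described in that post hinges on Argo CD's sync-wave annotations: resources with a lower wave number are synced, and must become healthy, before resources in higher waves. A minimal sketch, assuming illustrative resource names (not the post's actual manifests):

```yaml
# Wave 0: the Consul server workload syncs and becomes healthy first.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul-server
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  serviceName: consul-server
  selector:
    matchLabels:
      app: consul-server
  template:
    metadata:
      labels:
        app: consul-server
    spec:
      containers:
        - name: consul
          image: hashicorp/consul
---
# Wave 1: service mesh configuration CRs sync only after wave 0 completes,
# so the CRD controllers have a running Consul cluster to apply them against.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: backend
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  protocol: http
```

Resource hooks (e.g. `argocd.argoproj.io/hook: PostSync`) complement sync waves for one-off jobs that must run after a sync finishes, such as validation checks.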
  19. Recently, I published the blog Provisioning OpenShift clusters using GitOps with ACM, explaining how to create OpenShift clusters with RHACM using GitOps with ArgoCD. The OpenShift installation type was IPI and valid for most platforms: Azure, AWS, GCP, vSphere, etc., but not for bare metal. If you've ever installed an OpenShift cluster on bare metal in a disconnected environment, you know how different it is from any other installation. View the full article
  20. VMware today unveiled an orchestration tool, dubbed VMware Edge, that promises to simplify the management of edge computing services at scale.View the full article
  21. The fusion of DevSecOps and trusted application delivery can extend the GitOps pipeline and add business value.View the full article
  22. Cycloid's Bootstrap Stacks capability provides DevOps teams with the ability to build Git templates to define cloud computing environments.View the full article
  23. In this article, I will demonstrate the use of MicroShift and GitOps in a homelab environment and explore some of my learnings from this exercise. While this article is written as a "here's what I did" rather than as step-by-step instructions, I thought it'd be useful to list the software versions that were used. At the time of this writing, MicroShift is only supported on hardware that supports Red Hat Enterprise Linux (RHEL) 8.7... View the full article
  24. Introduction This is Part 3 in a series of blogs that demonstrates how to build an extensible and flexible GitOps system, based on a hub-and-spoke model to manage the lifecycles of Amazon Elastic Kubernetes Service (Amazon EKS) clusters, applications deployed to these clusters, as well as their dependencies on other AWS managed resources. It’s recommended that you read Part 1 and Part 2 before proceeding. In Part 2, we discussed the mechanics of how Crossplane and Flux are used to implement a GitOps-based strategy to provision and bootstrap a fleet of Amazon EKS workload clusters. The focus of discussion in this post is how to onboard applications to workload clusters, which involves deploying applications to the target cluster as well as any AWS managed resources that they depend on. For applications deployed on clusters that need to make API requests to AWS services, the recommended security practice is to use AWS Identity and Access Management (AWS IAM) Roles for Service Accounts (IRSA) for granting the needed AWS IAM permissions. IRSA provides the benefits of least privilege, credential isolation and auditability. Tools such as monitoring agents and ingress controllers may also need to be installed on the clusters; IRSA should be set up for any such tools that make API requests to AWS services. In this post, we also discuss how to automate the setup of IRSA in a multi-cluster GitOps system. Background Deploying an application to a workload cluster involves applying manifests for Kubernetes resources (e.g., Deployment, Service, and ConfigMap) and Crossplane Managed Resources that correspond to AWS services that the application depends on. In a fully decentralized model, each application is deployed from a separate repository, maintained by the respective application team. To fully automate this process using GitOps workflow, Flux on the target workload cluster must be configured to reconcile cluster state from the application repository. 
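In Flux terms, pointing a workload cluster at an application repository starts with a GitRepository source. A minimal sketch, assuming hypothetical team, org, and secret names:

```yaml
# Source definition for the application team's own repository.
# Names and URL are placeholders, not the series' actual layout.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-team-1
  namespace: app-team-1
spec:
  interval: 1m
  url: ssh://git@github.com/example-org/app-team-1.git
  ref:
    branch: main
  secretRef:
    name: app-team-1-ssh  # SSH deploy key for the private repository
```

A Flux Kustomization referencing this source then reconciles the repository's manifests into the cluster on the same interval-driven loop.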
To configure an application to use IRSA, we must first create an AWS IAM OpenID Connect (OIDC) identity provider for the workload cluster. Second, to associate the AWS IAM role with a Kubernetes service account, the trust policy of the AWS IAM role must be defined. Both of these steps require the OIDC identity provider ID of a workload cluster, which is not known in advance. Automating the GitOps workflow requires a solution to address this issue. Let’s get into the specifics of how the application onboarding is implemented, and how IRSA requirements are satisfied. Solution overview Application onboarding In Part 2, we discussed how to bootstrap a workload cluster with Flux and configure it to reconcile initial cluster state from the gitops-workloads repository. In order for Flux to reconcile application-specific artifacts from a separate application repository, a GitRepository resource that references the application repository must first be applied to the cluster. Next, SSH keys to connect to this repository have to be provided in the form of a SealedSecret resource. Finally, a Flux Kustomization that references the GitRepository resource has to be applied to the cluster. The manifests for these resources are added to a cluster-specific folder under the gitops-workloads repository and synced to the workload cluster. Once Flux is configured to reconcile the new application repository, the application team has full control over what gets deployed to the workload cluster. The governance team, which owns the gitops-workloads repository, is only involved at onboarding time. This approach improves agility by taking the governance team out of the loop for application deployment activities. But how can we make sure that the application team does not change resources that belong to other applications on the same cluster, or make system-level changes on the workload cluster? 
On multi-tenant clusters, Flux support for role-based access control can be used to ensure that an application team deploys their application artifacts only to specific namespaces in the workload cluster. In the application onboarding flow, an application team creates a namespace and a service account with a role binding that grants access only to that namespace. To enforce tenant isolation in the workload cluster, the Flux Kustomization responsible for reconciling the application repository is configured to impersonate this service account. This ensures that the reconciliation fails if an application team attempts to make any changes to objects in namespaces other than the one created for their application. The namespace, RBAC, and service account manifests are applied to the cluster using the familiar Git workflow of the application team creating a PR and the governance team reviewing and then merging it. The following diagram depicts the application onboarding flow: Figure 1. Onboarding a new application whose manifests exist in a separate Git repo The flow starts with the application team creating a Pull Request (PR) on the gitops-workloads repository with the manifests needed to onboard the application — a template directory that comprises the complete set of manifests required for application onboarding is added to the gitops-workloads repository to help the onboarding process. An application team can clone this directory using the helper script, specifying the application name, the target cluster name, the branch/directory in the application repository that needs to be reconciled into the workload cluster, and the SSH keys for connecting to the application repository, and create a PR. The governance team reviews the manifests in the PR, then approves and merges it. 
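The impersonation described above maps to the serviceAccountName field of the Flux Kustomization. A minimal sketch, assuming hypothetical tenant and path names:

```yaml
# Kustomization reconciling the tenant's application repository.
# Names and path are placeholders, not the series' actual layout.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-team-1
  namespace: app-team-1
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-team-1
  path: ./deploy
  prune: true
  # Reconcile as the tenant's service account: applying objects outside the
  # namespaces this account's role binding allows will fail, enforcing isolation.
  serviceAccountName: app-team-1
```

Because the controller applies manifests with the tenant's RBAC identity rather than its own cluster-admin identity, a PR that tries to touch another tenant's namespace simply fails reconciliation.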
This triggers Flux to pull the application onboarding manifests from the cluster/application-specific folder under the gitops-workloads repository (step 1), and apply them to the workload cluster (step 2). Next, Flux pulls the manifests from the application repository (step 3) and applies them to the workload cluster (step 4). Automating AWS IAM roles for service accounts (IRSA) setup in GitOps At a high level, IRSA configuration consists of two parts: IRSA prerequisite — an AWS IAM OIDC identity provider must be created for the workload cluster using its OpenID Connect (OIDC) issuer URL. This is a one-time setup carried out as part of the workload cluster provisioning and bootstrapping flow. IRSA setup for an app or a tool — this involves creating an AWS IAM role with the required permissions and configuring a Kubernetes service account to assume the IAM role. With regard to the IRSA prerequisites, the Crossplane Managed Resource (MR) OpenIDConnectProvider is used to create the AWS IAM OIDC provider. The Crossplane Composition used to provision the workload cluster encapsulates an instance of this MR. Crossplane allows you to patch from one composed resource in a Composition to another, by using a Composite Resource (XR) as an intermediary. This feature is used to extract the OIDC issuer URL from the Cluster MR that’s mapped to the workload cluster and use it to instantiate the OpenIDConnectProvider MR. To set up IRSA for an application or a tool, the AWS IAM roles and policies are created using the Crossplane MRs, namely, Role and Policy, respectively. 
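On the Kubernetes side, the second part of IRSA comes down to a single annotation that associates a service account with the IAM role. A minimal sketch, with a placeholder account ID and role name (in the workflow described here, those values are not hard-coded but substituted at reconcile time):

```yaml
# Service account assuming an AWS IAM role via IRSA.
# The account ID and role name below are illustrative placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-team-1
  namespace: app-team-1
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/app-team-1-role
```

Pods using this service account receive a projected OIDC token that AWS STS exchanges for temporary credentials scoped to that role.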
To implement this step with a generalized approach, we must dynamically resolve the following parameters: the account ID referenced by the Kubernetes annotation on the service account that associates it with an AWS IAM role; the OIDC provider URL of the cluster, referenced by the trust policy of the AWS IAM role; and the account ID referenced by the trust policy of the AWS IAM role. To fully automate the creation and configuration of the service account and AWS IAM trust policy, the above parameters are exposed using a Kubernetes ConfigMap in the workload cluster. To create this ConfigMap, we make use of the Crossplane Kubernetes Provider, which is designed to enable deployment and management of arbitrary Kubernetes resources in clusters. This ConfigMap is one of the composed resources within the Composition used to provision the workload cluster. Crossplane patches are used to populate this ConfigMap with values from other composed resources. These values are then used to replace placeholder variables such as ACCOUNT_ID and OIDC_PROVIDER used in the manifests for IRSA-related artifacts. This can be done during the GitOps workflow using Flux variable substitution, a feature that enables basic templating for Kubernetes manifests, providing a map of key/value pairs holding the placeholder variables to be substituted in the final YAML manifest. IRSA prerequisites The following diagram depicts how IRSA prerequisites are fulfilled as part of the workload cluster provisioning and bootstrapping flow: Figure 2. Setting up IRSA prerequisites First, Flux in the management cluster pulls the manifests from the gitops-system repository (step 1), which include the XR for provisioning a workload Amazon EKS cluster. Flux applies these manifests to the management cluster (step 2). This triggers the Crossplane AWS provider to create the workload cluster (steps 3–4). 
Post cluster-creation, this provider creates the OpenIDConnectProvider MR, which sets up the AWS IAM OIDC identity provider for the workload cluster (step 5). Finally, the Crossplane Kubernetes provider creates the ConfigMap in the workload cluster, exposing parameters such as OIDC URL and AWS account ID (step 6). This ConfigMap is used to configure IRSA for applications deployed to the workload cluster. IRSA setup for applications Now, let’s see how the app-specific part of IRSA is taken care of in the application onboarding flow. The diagram below depicts the onboarding flow when the application has dependencies on cloud resources running outside the Amazon EKS cluster (e.g., Amazon DynamoDB table, Amazon SQS queue, etc.). Figure 3. Onboarding a new application that has dependencies on cloud resources running outside the Amazon EKS cluster Crossplane is installed on the workload cluster as part of the cluster provisioning and bootstrapping flow as discussed in Part 2. For creating the AWS IAM artifacts required to configure IRSA for an application, the corresponding Crossplane MRs are included in the application onboarding manifests, pulled by Flux (step 1), and applied to the workload cluster (step 2). During the reconciliation step, Flux substitutes the placeholders in the manifests such as ACCOUNT_ID and OIDC_PROVIDER with values from the ConfigMap. Subsequently, Crossplane in the workload cluster creates the AWS IAM artifacts. To provision any AWS-managed resources that the application depends on, the manifests of the corresponding Crossplane MRs are added by the application team to the application repository, pulled by Flux (step 3), and applied to the workload cluster, along with the standard Kubernetes resources like Deployment, Service, etc. (step 4). Subsequently, Crossplane in the workload cluster provisions the cloud resources. 
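The placeholder substitution described above uses Flux's postBuild variable substitution on the Kustomization that reconciles the onboarding manifests. A minimal sketch, assuming hypothetical repository, path, and ConfigMap names (the series' actual layout may differ):

```yaml
# Kustomization reconciling the app's onboarding manifests into the workload
# cluster. Names and path below are illustrative placeholders.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-team-1-onboarding
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: gitops-workloads
  path: ./clusters/cluster-1/app-team-1
  prune: true
  postBuild:
    # Replace ${ACCOUNT_ID} and ${OIDC_PROVIDER} in the manifests with values
    # from the ConfigMap created by the Crossplane Kubernetes provider.
    substituteFrom:
      - kind: ConfigMap
        name: cluster-info
```

In the IRSA manifests themselves the placeholders appear in Flux's `${VAR}` syntax, for example in the IAM role's trust policy and the service account's role-arn annotation.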
IRSA setup for tools The IRSA configuration for Crossplane running on the workload cluster is a special case, as Crossplane is not yet available to create the needed AWS IAM resources (i.e., a chicken-or-egg problem). To solve this, Crossplane in the management cluster is used to create the AWS IAM resources needed for Crossplane running in the workload cluster. Conclusion In this post, we showed you how application onboarding can be addressed in a multi-cluster GitOps system with support for a fully decentralized model, where each application team can bring their own repository and have it reconciled into a workload cluster. We showed how governance teams can control which application gets onboarded into which workload cluster through the Git PR process. We have also demonstrated how to align with Amazon EKS security best practices when accessing AWS APIs by configuring IRSA using Flux and Crossplane. In this 3-part series, we demonstrated how to build an extensible and flexible multi-cluster GitOps system based on a hub-and-spoke model that addresses the platform and application teams’ requirements. The series covered use cases, such as managing the lifecycle of Amazon EKS clusters, bootstrapping them with various tools, deploying applications to the provisioned clusters, and managing the lifecycle of associated managed resources such as Amazon SQS queues and Amazon DynamoDB tables while implementing security best practices. We highly recommend that you try the Amazon EKS Multi-cluster GitOps workshop for hands-on experience. The full implementation of the solution outlined in this blog series is available in this GitHub repository. View the full article