Showing results for tags 'hcp'.

Found 9 results

  1. Introduction

     Since its first appearance on AWS in 2015, Red Hat OpenShift on AWS has used largely the same architecture, whether running OpenShift 3 or OpenShift 4, self-managed OpenShift Container Platform (OCP), or managed Red Hat OpenShift Service on AWS (ROSA). Throughout that time, customers have asked why the Control Plane lives in their AWS account and have looked for ways to maximize return on investment (ROI) and offset some of the related costs. Red Hat has now released Hosted Control Planes (HCP) for OpenShift. In this post, we delve into the benefits of Hosted Control Planes for OpenShift, examine the recent changes and compare them to the conventional architecture, and highlight the advantages these changes bring to users.

     ROSA classic architecture

     OpenShift on AWS has always combined the resilience model of AWS with that of Red Hat OpenShift itself. Three master nodes, Amazon Elastic Compute Cloud (Amazon EC2) instances, host the Control Plane and the OpenShift API; three infrastructure nodes host the OpenShift routing layer and other cluster-related functions; and a compute layer provides the worker nodes. All of this exists in the customer account and is spread across multiple Availability Zones (AZs). Because ROSA is a managed service, the Red Hat Site Reliability Engineering (SRE) team maintains and manages the OpenShift environment for the customer via an AWS PrivateLink connection to OpenShift within the customer account.

     Figure: Traditional OpenShift on AWS architecture

     Common questions asked by customers exploring ROSA include:

     - Why is the Control Plane in my account, when for other AWS services I only have the compute nodes?
     - Which incentive programs and cost-control options are best to reduce the resource cost of the Control Plane?
     - What is causing inter-AZ data transfer costs?

     ROSA Hosted Control Plane

     Red Hat has launched the OpenShift Hosted Control Plane offering, which provides several customer benefits. With Hosted Control Planes, the OpenShift control plane nodes (i.e., the master nodes) move out of the customer account and into a Red Hat service team account, similar to other AWS service offerings. This change reduces AWS service costs within the customer account, notably for Amazon EC2 and Amazon Elastic Block Store (Amazon EBS). Because the etcd database for the Kubernetes layer of OpenShift lives on the control plane (master) nodes, any inter-AZ data transfer costs related to etcd replication for OpenShift resilience are also removed from the customer account. Red Hat SREs manage and maintain the OpenShift cluster directly from the service team account, and the AWS PrivateLink endpoint in the customer account is now used to connect the Compute (i.e., Worker) nodes to the Control Plane in the service team account. The three OpenShift infrastructure nodes are also removed from the customer account, and their services move either to the Control Plane (master) nodes or to the Compute (Worker) nodes; note that the OpenShift router layer moves to the compute nodes. This results in lower AWS service costs for Amazon EC2 and Amazon EBS for the master and infrastructure nodes, and eliminates the inter-AZ data transfer related to etcd. Hosted Control Planes has the added benefit of reduced provisioning times, because the Control Plane nodes and Compute nodes are provisioned in parallel rather than sequentially.
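     As a quick illustration of the new topology (a hedged example; the cluster name my-hcp-cluster is hypothetical and assumes you are logged in with both the rosa and oc CLIs), describing an HCP cluster reports the hosted Control Plane configuration, and listing nodes returns only the Compute (Worker) nodes, because the Control Plane no longer runs in the customer account:

     rosa describe cluster --cluster my-hcp-cluster
     oc get nodes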
     Deploying a ROSA with HCP cluster

     Since it isn't possible to upgrade or convert existing ROSA clusters to the Hosted Control Planes architecture, you must create a new cluster to use ROSA with HCP functionality.

     Prerequisites

     Deploying a ROSA with HCP cluster is straightforward when you use the default settings and let the AWS Identity and Access Management (AWS IAM) resources be created automatically. You can initiate the cluster deployment using the ROSA CLI (rosa). Before using rosa to create an HCP cluster, ensure you have established the necessary account-wide roles and policies, including the operator roles. Let's look at the important components required to create a ROSA with HCP cluster. You must have the following items:

     - A configured virtual private cloud (VPC)
     - Account-wide roles
     - An OIDC configuration
     - Operator roles

     Virtual Private Cloud (VPC)

     Hosted Control Planes must be deployed into an existing Virtual Private Cloud (VPC). Most customers manage their existing VPC through some form of infrastructure as code; please refer to this document for additional details. You can create the VPC manually or by using the Terraform template. Terraform is a tool that allows you to create various resources from a template; see the detailed documentation on how to use the Terraform template to build out the VPC.

     Account-wide STS roles and policies

     On the security side, there are changes to the AWS IAM policies attached to the different components of ROSA. This is a step towards the principle of least privilege, with an even tighter scope for the AWS IAM policies: the account-wide roles need to be created again, and each OpenShift operator receives a more granular role. When using a ROSA with HCP cluster, you must create the AWS IAM roles designed specifically for ROSA with HCP deployments. The cluster operators use these operator roles to acquire temporary permissions for performing cluster operations. ROSA with HCP clusters support only AWS Security Token Service (AWS STS) authentication. AWS STS is a web service that enables you to request temporary, limited-privilege credentials for users.

     AWS PrivateLink

     In the ROSA classic architecture, AWS PrivateLink enabled the Red Hat SRE teams to connect to the OpenShift cluster and manage the environment on behalf of the customer. Now the Red Hat SRE teams manage the customer environment from the service account, and AWS PrivateLink is instead used by the Worker nodes to communicate with the Control Plane.

     Provisioning

     The commands below create the account roles, operator roles, and OpenID Connect (OIDC) configuration, and deploy the cluster. To explore this in greater detail, read the "Creating ROSA with HCP clusters using the default options" section of the ROSA documentation.

     Account-wide roles: The following command creates the required AWS IAM account roles and policies.

     rosa create account-roles --force-policy-creation

     Operator roles: To create the operator roles, run the following command.

     rosa create operator-roles --hosted-cp --prefix <prefix-name> --oidc-config-id <oidc-config-id>
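     As an optional sanity check before provisioning (a hedged sketch; the exact sub-commands and output format depend on your rosa CLI version), you can list the roles created by the previous commands to confirm they exist in your account:

     rosa list account-roles
     rosa list operator-roles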
     OpenID configuration: The following command creates your OIDC configuration alongside the AWS resources.

     rosa create oidc-config --mode=auto --yes

     Deploy cluster: Create your ROSA with HCP cluster with one of the following commands:

     rosa create cluster --private --cluster-name=<cluster_name> --sts --mode=auto --hosted-cp --subnet-ids=<private-subnet-id>

     export REGION=<region_name>
     export ROSA_VERSION=<rosa_version>
     rosa create cluster --cluster-name <cluster_name> --multi-az --hosted-cp --mode=auto --sts --region $REGION --version $ROSA_VERSION --enable-autoscaling --min-replicas <minimum_replicas> --max-replicas <maximum_replicas> --compute-machine-type <instance_type> --host-prefix <host_prefix> --private-link --subnet-ids <subnet_id_1>,<subnet_id_2>,<subnet_id_3>

     For example:

     export REGION=eu-west-1
     export ROSA_VERSION=4.13.13
     rosa create cluster --cluster-name esdp-rosa --multi-az --hosted-cp --mode=auto --sts --region $REGION --version $ROSA_VERSION --enable-autoscaling --min-replicas 3 --max-replicas 3 --compute-machine-type m5.2xlarge --host-prefix 23 --private-link --subnet-ids subnet-1234566789ab,subnet-123456789cd,subnet-12345678

     Cleaning up

     This documentation provides steps to delete a ROSA cluster and the AWS STS resources.

     Migration paths for existing ROSA customers

     At this stage, it isn't possible to perform an in-place upgrade from ROSA classic to ROSA with HCP; instead, you provision a new ROSA Hosted Control Planes cluster and then migrate the application workloads. Customers considering this option should explore the Red Hat Migration Toolkit for Containers, which is based on the upstream Konveyor project.

     Conclusion

     In this post, we discussed the benefits of Hosted Control Planes for OpenShift and examined the recent changes compared to the conventional architecture. Hosted Control Planes (HCP) heralds a new era in the deployment and management of OpenShift on AWS: it reduces costs, enhances operational resiliency and security, and improves cluster provisioning times. We encourage you to explore the new possibilities and benefits of ROSA with HCP within your business.

     Additional resources

     - For more information on how to install ROSA with HCP clusters, please refer to Install ROSA with HCP clusters.
     - For a quick start, please refer to the ROSA quick start guide.
     - For more information on architecture and networking, please refer to ROSA: Architecture and Networking.
     - Ask an OpenShift administrator (E68) | (Migration toolkit for containers).

     View the full article
  2. Today at HashiConf, we are pleased to announce the alpha program for HashiCorp Cloud Platform (HCP) Vault Radar, HCP Vault Secrets general availability, secrets sync beta for Vault Enterprise, and HashiCorp Vault 1.15. These new capabilities help organizations secure their applications and services as they leverage a cloud operating model to power their shift to the cloud. Enabling a cloud operating model helps organizations cut costs, reduce risks, and increase the speed at which developers build and deploy secure applications. The new capabilities boost Vault’s focus on helping organizations use identity to achieve their security goals by:

     - Centrally managing and enforcing access to secrets and systems based on trusted sources of application and user identity.
     - Eliminating credential sprawl by identifying static secrets hardcoded throughout complex systems and tooling across your entire cloud estate.
     - Reducing manual overhead and risk associated with managing access to infrastructure resources like SSH and VPNs, as well as applications and services.
     - Automatically implementing authentication and authorization mechanisms to ensure only authorized services can communicate with one another.

     View the full article
  3. Today at HashiConf, we are pleased to announce the general availability of HCP Vault Secrets, a new software-as-a-service (SaaS) offering of HashiCorp Vault that focuses on secrets management. Released in beta earlier this year, HCP Vault Secrets lets users onboard quickly and is free to get started. The general availability release of HCP Vault Secrets builds on the beta release with production-ready secrets management capabilities, additional secrets sync destinations, and multiple consumption tiers. During the public beta period, we worked on improvements and additions to HCP Vault Secrets. Focused on secrets management for developers, these additions will help our users to:

     - Boost security across clouds and machines: Centralize where secrets are stored and minimize context switching between multiple solutions to reduce the risk of breaches caused by human error.
     - Increase productivity: Improve security posture without expending additional time and effort.
     - Enhance visibility of secrets activity across teams: Understand when secrets are modified or accessed — including by whom, when, and from where — with advanced filtering and storage.
     - Comply with security best practices: Eliminate manual upgrade requirements with fully managed deployment to keep your instance up to date and in line with security best practices.
     - Provide last-mile secrets availability for developers: Centralize secrets in HCP Vault Secrets while syncing secrets to existing platforms and tools so developers can access secrets when and where they need them.

     View the full article
  4. Today at HashiConf, we are introducing a number of significant enhancements for HashiCorp Consul, our service networking solution that helps users discover and securely connect any application. We're also formally introducing HCP Consul Central, previously known as the management plane for HCP Consul. These new capabilities help organizations enhance workflow management, increase reliability and scale, and bolster security for operators as they leverage a cloud operating model for service networking. Some of the notable updates include:

     - Multi-port support (beta): a new, simplified way to onboard modern distributed applications that require different ports for various traffic types in intricate client-server communication.
     - Locality-aware service mesh routing within a Consul datacenter: optimizes traffic routing within datacenters, prioritizing local instances for lower latency and reduced costs.
     - Sameness groups (GA): simplifies multi-cluster operations, enhancing service reliability for enterprises.
     - HCP Consul Central: introduces observability features for HashiCorp-managed and linked self-managed clusters, enhancing cluster health monitoring. Additionally, a global API simplifies integration with HCP Consul Central, allowing platform operators to streamline workflows and access cluster details.

     View the full article
  5. Why are enterprises and practitioners alike consuming more and more cloud services? This answer from the 2021 HashiCorp State of Cloud Strategy Survey sums it up well: “Cloud services offer better ROI than running it ourselves.” That sentiment is fueling demand for cloud services and it’s a big reason why we featured so many new developments for the HashiCorp Cloud Platform (HCP) at HashiConf Europe this week. This blog post highlights some of HCP’s newest enhancements, including new beta services for HCP Boundary, HCP Waypoint, and HCP Consul on Microsoft Azure. It also summarizes new features for HashiCorp Terraform Cloud, like Drift Detection and Run Tasks. View the full article
  6. Today at HashiConf Europe, we introduced a number of significant enhancements for HashiCorp Consul, our service networking solution that helps users discover and securely connect any application. These updates include HashiCorp Cloud Platform (HCP) Consul becoming generally available on Microsoft Azure, general availability of Consul API Gateway version 0.3, and tech previews of the upcoming Consul 1.13 and AWS Lambda support, scheduled for release later this year. Here’s a closer look at all three announcements… View the full article
  7. We are pleased to announce the general availability of Consul-Terraform-Sync (CTS) 0.6. This release marks another step in the maturity of our larger Network Infrastructure Automation (NIA) solution. CTS combines the functionality of HashiCorp Terraform and HashiCorp Consul to eliminate manual ticket-based systems across on-premises and cloud environments. Its capabilities can be broken down into two parts: For Day 0 and Day 1, teams use Terraform to quickly deploy network devices and infrastructure in a consistent and reproducible manner. Once established, teams manage Day 2 networking tasks by integrating Consul’s catalog to register services into the system via CTS. Whenever a change is recorded to the service catalog, CTS triggers a Terraform run that uses partner ecosystem integrations to automate updates and deployments for load balancers, firewall policies, and other service-defined networking components. This post covers the evolution of CTS and highlights the new features in CTS 0.6… View the full article
  8. We are pleased to announce that our first HashiCorp Cloud Platform (HCP) service — HCP Consul — is now in public beta. HCP Consul enables a team to provision HashiCorp-managed Consul clusters directly through the HCP portal and easily leverage Consul’s multi-platform service mesh capabilities within their Amazon EKS, ECS, and EC2 application environments. To learn more about HashiCorp Cloud Platform, please visit our web page. If you are new to HashiCorp Consul, please visit the Consul Learn documentation for an introduction. View the full article
  9. We are excited to announce the private beta for HashiCorp Vault running on the HashiCorp Cloud Platform (HCP), which is a fully managed cloud offering to automate the deployment of HashiCorp products. HCP Vault allows organizations to get up and running quickly, providing immediate access to Vault’s best-in-class secrets management and encryption capabilities, with the platform providing the resilience and operational excellence needed so you do not have to manage Vault yourself... View the full article