Showing results for tags 'red hat'.

Found 3 results

  1. On Friday, March 29, Andres Freund, a Microsoft engineer, shared that he had found odd symptoms in the xz package on Debian installations. Freund noticed that SSH logins were consuming an unusual amount of CPU and decided to investigate, leading to the discovery. The vulnerability has received the maximum security ratings: a CVSS score of 10 and a Red Hat Product Security critical impact rating. Red Hat assigned the issue CVE-2024-3094, but given the severity, and in the tradition of a previous major bug being named Heartbleed, the community has cheekily given the vulnerability a more vulgar name and inverted the Heartbleed logo.

     Luckily, the vulnerability was caught early. Red Hat wrote: "Malicious code was discovered in the upstream tarballs of xz, starting with version 5.6.0. Through a series of complex obfuscations, the liblzma build process extracts a prebuilt object file from a disguised test file existing in the source code, which is then used to modify specific functions in the liblzma code. This results in a modified liblzma library that can be used by any software linked against this library, intercepting and modifying the data interaction with this library."

     The malicious injection is present only in the downloadable tarballs of xz versions 5.6.0 and 5.6.1. The Git repository does not include the M4 macro that triggers the injection; the second-stage artifacts used during the build are present in the repository, but without the malicious M4 macro merged into the build they are innocuous.

     You are advised to check whether your systems are running xz version 5.6.0 or 5.6.1 and downgrade to 5.4.6 (a quick check is sketched after this article). If you cannot downgrade, you should disable public-facing SSH servers.

     View the full article
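
     Below is a minimal, hedged sketch of how you might check an installed system. Package names vary by distribution, and on some systems sshd pulls in liblzma only indirectly (for example via libsystemd), so the ldd check is suggestive rather than conclusive.

     # Print the installed xz version; 5.6.0 and 5.6.1 are the affected releases
     $ xz --version

     # On RPM-based distributions, query the package database directly
     $ rpm -q xz xz-libs

     # On Debian-based distributions
     $ dpkg -l | grep -E 'xz-utils|liblzma'

     # See whether the sshd binary links against liblzma at all
     $ ldd "$(command -v sshd)" | grep liblzma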
  2. Introduction

     Since OpenShift first appeared on AWS in 2015, its architecture has remained largely the same, whether running OpenShift 3 or OpenShift 4, self-managed OpenShift Container Platform (OCP), or managed Red Hat OpenShift Service on AWS (ROSA). Throughout this time, customers have questioned why the Control Plane lives in their AWS account and have looked for ways to maximize return on investment (ROI) and offset the associated costs. Red Hat has now released Hosted Control Planes (HCP) for OpenShift. In this post, we delve into the benefits of Hosted Control Planes for OpenShift, examine the recent changes and compare them to the conventional architecture, and highlight the advantages these changes bring to users.

     ROSA classic architecture

     OpenShift on AWS has always combined the resilience model of AWS with that of Red Hat OpenShift itself. Three master nodes, Amazon Elastic Compute Cloud (Amazon EC2) instances, provide the Control Plane and OpenShift API; three infrastructure nodes provide the OpenShift routing layer and other cluster-related functions; and a compute layer supplies the worker nodes. All of this exists in the customer account, spread across multiple Availability Zones (AZs). Because ROSA is a managed service, the Red Hat Site Reliability Engineering (SRE) team maintains and manages the OpenShift environment for the customer via an AWS PrivateLink connection into the customer account.

     Traditional OpenShift on AWS architecture

     Common questions asked by customers exploring ROSA include:

     • Why is the Control Plane in my account, when for other AWS services I only have the compute nodes?
     • Which incentive programs and cost control options are best to reduce the resource cost of the Control Plane?
     • What is causing inter-AZ data transfer costs?

     ROSA Hosted Control Plane

     Red Hat has launched OpenShift Hosted Control Planes, which provides several customer benefits. With Hosted Control Planes, the OpenShift control plane nodes (i.e., master nodes) move out of the customer account and into a service team account, similar to other AWS service offerings. This reduces AWS service costs within the customer account, notably for Amazon EC2 and Amazon Elastic Block Store (Amazon EBS). Because the etcd database for the Kubernetes layer of OpenShift lives on the control plane (master) nodes, any inter-AZ data transfer costs related to etcd replication for OpenShift resilience are also removed from the customer account. Red Hat Site Reliability Engineers (SRE) manage and maintain the OpenShift cluster directly from the service team account, and the AWS PrivateLink endpoint in the customer account is now used to connect the compute (worker) nodes to the Control Plane in the service team account.

     The three OpenShift infrastructure nodes are also removed from the customer account, with their services moved either to the control plane (master) nodes or to the compute (worker) nodes; note that the OpenShift router layer moves to the compute nodes. Altogether, this reduces AWS service costs for Amazon EC2 and Amazon EBS across the master and infrastructure nodes, and removes inter-AZ data transfer related to etcd. Hosted Control Planes has the added benefit of reduced provisioning times, because the Control Plane nodes and compute nodes are provisioned in parallel rather than sequentially.
     Deploying ROSA with an HCP cluster

     Since it isn't possible to upgrade or convert an existing ROSA cluster to the Hosted Control Planes architecture, you must create a new cluster to use ROSA with HCP functionality.

     Prerequisites

     Deploying a ROSA with HCP cluster is straightforward when you use the default settings and let the AWS Identity and Access Management (AWS IAM) resources be created automatically. You can initiate the cluster deployment using the ROSA CLI, rosa. Before using rosa to create an HCP cluster, ensure you have established the necessary account-wide roles and policies, including the operator roles. To create a ROSA with HCP cluster, you must have the following items:

     • A configured virtual private cloud (VPC)
     • Account-wide roles
     • An OIDC configuration
     • Operator roles

     Virtual Private Cloud (VPC)

     Hosted Control Planes must be deployed into an existing VPC, and most customers manage their existing VPC through some form of infrastructure as code; please refer to this document for additional details. You can create the VPC manually or by using the Terraform template. Terraform is a tool that allows you to create various resources from a template, and there is detailed documentation on how to use the Terraform template to build out the VPC. A minimal sketch of the manual route follows.
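
     Below is a minimal, hedged sketch of creating such a VPC manually with the AWS CLI, assuming the example region and CIDR blocks shown. A production ROSA VPC also needs egress (for example a NAT gateway) and route tables as described in the ROSA documentation, which the Terraform template handles for you.

     # Hypothetical values; adjust the region and CIDR blocks to your environment
     $ REGION=eu-west-1
     $ VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
         --region "$REGION" --query 'Vpc.VpcId' --output text)

     # DNS support and hostnames are needed for the PrivateLink endpoint
     $ aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-support '{"Value":true}'
     $ aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames '{"Value":true}'

     # One private subnet per Availability Zone for a multi-AZ cluster
     $ aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone "${REGION}a" --cidr-block 10.0.0.0/20
     $ aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone "${REGION}b" --cidr-block 10.0.16.0/20
     $ aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone "${REGION}c" --cidr-block 10.0.32.0/20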
     Account-wide STS roles and policies

     On the security side, there are changes to the AWS IAM policies attached to the different components of ROSA. HCP takes a step toward the principle of least privilege, with an even tighter scope of AWS IAM policies: the account-wide roles must be created again for HCP, and each OpenShift operator receives a more granular role. When using a ROSA HCP cluster, you must establish the AWS IAM roles specifically designed for ROSA with HCP deployments; the cluster operators use these operator roles to acquire temporary permissions for performing cluster operations. ROSA with HCP clusters support only AWS Security Token Service (AWS STS) authentication. AWS STS is a web service that enables you to request temporary, limited-privilege credentials for users.

     AWS PrivateLink

     In the ROSA classic architecture, AWS PrivateLink enabled the Red Hat SRE teams to connect to the OpenShift cluster and manage the environment on behalf of the customer. Now the Red Hat SRE teams manage the customer environment from the service account, and AWS PrivateLink is instead used by the worker nodes to interact with the Control Plane.

     Provisioning

     The following commands create the account roles, operator roles, and OpenID Connect (OIDC) configuration, and then deploy the cluster. To explore this in greater detail, read the "Creating ROSA with HCP clusters using the default options" section of the ROSA documentation. Note that the OIDC configuration must exist before the operator roles, since its ID is passed to the operator-roles command. A verification sketch follows this article.

     Account-wide roles: the following command creates the required AWS IAM account roles and policies.

     $ rosa create account-roles --force-policy-creation

     OIDC configuration: the following command creates your OIDC configuration alongside the AWS resources.

     $ rosa create oidc-config --mode=auto --yes

     Operator roles: to create the operator roles, run the following command.

     $ rosa create operator-roles --hosted-cp --prefix <prefix-name> --oidc-config-id <oidc-config-id>

     Deploy cluster: create your ROSA with HCP cluster with one of the following commands.

     $ rosa create cluster --private --cluster-name=<cluster_name> --sts --mode=auto --hosted-cp --subnet-ids=<private-subnet-id>

     $ export REGION=<region_name>
     $ export ROSA_VERSION=<rosa_version>
     $ rosa create cluster --cluster-name <cluster_name> --multi-az --hosted-cp --mode=auto --sts --region $REGION --version $ROSA_VERSION --enable-autoscaling --min-replicas <minimum_replicas> --max-replicas <maximum_replicas> --compute-machine-type <instance_type> --host-prefix <host_prefix> --private-link --subnet-ids <subnet_id_1>,<subnet_id_2>,<subnet_id_3>

     For example:

     $ export REGION=eu-west-1
     $ export ROSA_VERSION=4.13.13
     $ rosa create cluster --cluster-name esdp-rosa --multi-az --hosted-cp --mode=auto --sts --region $REGION --version $ROSA_VERSION --enable-autoscaling --min-replicas 3 --max-replicas 3 --compute-machine-type m5.2xlarge --host-prefix 23 --private-link --subnet-ids subnet-1234566789ab,subnet-123456789cd,subnet-12345678

     Cleaning up

     This documentation provides the steps to delete a ROSA cluster and its AWS STS resources.

     Migration paths for existing ROSA customers

     At this stage, it isn't possible to perform an in-place upgrade from ROSA classic to ROSA HCP: you must provision a ROSA Hosted Control Planes cluster and then migrate the application workloads. Customers considering this option should explore the Red Hat Migration Toolkit for Containers, which is based on the upstream Konveyor project.

     Conclusion

     In this post, we discussed the benefits of Hosted Control Planes for OpenShift and examined the recent changes, comparing them to the conventional architecture. Hosted Control Planes (HCP) heralds a new era in the deployment and management of OpenShift on AWS: this shift reduces costs, enhances operational resiliency and security, and improves cluster provisioning times. We encourage you to explore the new possibilities and benefits of ROSA with HCP within your business.

     Additional resources

     • For more information on how to install ROSA with HCP clusters, refer to Install ROSA with HCP clusters.
     • For a quick start, refer to the ROSA quick start guide.
     • For more information on architecture and networking, refer to ROSA: Architecture and Networking.
     • Ask an OpenShift administrator (E68) | Migration toolkit for containers.

     View the full article
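
     As referenced in the Provisioning section above, here is a minimal, hedged sketch of confirming the new cluster is healthy once the create command returns, using the esdp-rosa example name from the article; the flags are the commonly documented rosa CLI options and should be checked against your CLI version.

     # Follow the installation logs until the cluster reports ready
     $ rosa logs install --cluster esdp-rosa --watch

     # Inspect cluster state, API URL, and console URL
     $ rosa describe cluster --cluster esdp-rosa

     # Create a temporary cluster-admin user for a first login
     $ rosa create admin --cluster esdp-rosa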
  3. The Five Pillars of Red Hat OpenShift Observability

     It is with great pleasure that we announce additional Observability features arriving as part of the OpenShift Monitoring 4.14, Logging 5.8, and Distributed Tracing 2.9 releases. Red Hat OpenShift Observability's plan continues to move forward as our teams tackle key data collection, storage, delivery, visualization, and analytics features, with the goal of turning your data into answers.

     View the full article