Showing results for tags 'openshift'.

  1. What is OpenShift?

     OpenShift is a family of containerization software products made by Red Hat. The most popular offering is OpenShift Container Platform, a hybrid cloud platform as a service (PaaS) built around Linux containers. The platform uses Kubernetes for container orchestration and management, with Red Hat Enterprise Linux as the foundation.

     Key features of OpenShift include:
     - Automated deployment and scaling: streamlines application development and deployment across different environments.
     - Integrated security: provides built-in security features for workloads and infrastructure.
     - Multi-cloud and on-premises support: deploy applications on various cloud platforms (AWS, Azure, GCP) or on-premises infrastructure.
     - Developer-friendly tools: offers various tools for development, CI/CD pipelines, and application monitoring.
     - Large ecosystem of partners and integrations: extends functionality with numerous tools and technologies.

     Top 10 Use Cases of OpenShift:
     1. Modernizing legacy applications: refactor and containerize existing applications for improved scalability and portability.
     2. Building cloud-native microservices: develop and deploy applications composed of interconnected, independent services.
     3. Continuous integration and continuous delivery (CI/CD): automate build, test, and deployment processes for faster development cycles.
     4. Edge computing: deploy applications closer to data sources for faster processing and reduced latency.
     5. Data science and machine learning: develop and manage data pipelines and machine learning models.
     6. Internet of Things (IoT): build and manage applications for connected devices and sensors.
     7. High-performance computing (HPC): run resource-intensive scientific and engineering applications.
     8. Internal developer platforms: create centralized platforms for internal application development within organizations.
     9. Software supply chain management: securely manage and track software builds and deployments.
     10. Containerized DevOps environments: establish consistent and secure environments for development and operations teams.

     These are just some of the many use cases for OpenShift. It's a versatile platform that can be adapted to various needs and industries.

     What are the features of OpenShift?

     OpenShift boasts a wide range of features that cater to developers, operators, and businesses alike. Here are some of the key capabilities:

     Developer-centric features:
     - Integrated CI/CD pipelines: seamlessly automate building, testing, and deploying applications with Tekton and other CI/CD tools.
     - Multi-language support: develop with various languages like Java, Python, Node.js, Go, and Ruby.
     - Command-line and IDE integrations: work comfortably with tools like Git, VS Code, and Red Hat CodeReady Studio.
     - Source-to-Image builds: simplify container image creation directly from your application code.
     - Built-in monitoring and logging: gain insights into application performance and health with pre-configured monitoring and logging tools.

     Operational features:
     - Automated installation and upgrades: streamline infrastructure management with automated setups and updates.
     - Centralized policy management: enforce consistent security and governance across application deployments.
     - Multi-cluster management: efficiently manage deployments across multiple OpenShift clusters.
     - Self-service environments: empower developers with on-demand access to approved resources.
     - Operator Framework: extend functionality with pre-built operators for databases, networking, and more.

     Security and compliance features:
     - Integrated security scanning: scan container images for vulnerabilities before deployment.
     - Role-based access control (RBAC): granularly control user access to resources.
     - Network policies and Security Context Constraints: enforce specific security configurations on applications.
     - Compliance support: align deployments with compliance frameworks like HIPAA, PCI DSS, and SOC 2.
     - Red Hat support: benefit from industry-leading support for deployments.

     Additional features:
     - Scalability: easily scale applications up or down based on demand.
     - High availability: ensure application uptime with disaster recovery and failover mechanisms.
     - Portability: deploy applications across diverse environments, including public clouds, private clouds, and on-premises infrastructure.
     - Large ecosystem: leverage a vast ecosystem of tools, integrations, and partner solutions.

     This list is not exhaustive, and the specific features available may vary depending on the OpenShift version you choose.

     How does OpenShift work, and what is its architecture?

     An OpenShift cluster follows the standard Kubernetes split between a control plane and a compute layer: control plane nodes run the Kubernetes API server, scheduler, and the etcd datastore, while worker nodes run application workloads. On top of this, OpenShift layers its own components, such as the integrated image registry, the router (ingress) layer, and the Operator Framework used to install and manage cluster services.

     How to install OpenShift?

     Installing OpenShift can be done in several ways, depending on your needs and environment. Here are the three main options:

     1. OpenShift Local:
        - Pros: quick and easy to set up; ideal for individual developers and learning.
        - Cons: not suitable for production use; limited resources.
        - Installation steps:
          1. Download the crc tool: go to the Red Hat Console site, create a free Red Hat Developer account, and download the crc tool for your system.
          2. Set up the virtual machine: run crc setup and follow the instructions. This downloads and configures a virtual machine that will host your OpenShift cluster.
          3. Start the cluster: run crc start.
          4. Access the cluster: open the OpenShift web console at https://127.0.0.1:8443/console.

     2. User-Provisioned Infrastructure:
        - Pros: more control over the infrastructure; suitable for small-scale production use.
        - Cons: requires technical expertise to manage the infrastructure.
        - Installation steps:
          1. Prepare your infrastructure: set up servers with the required operating system and network configuration.
          2. Download the installation program: get the appropriate installer from the OpenShift Cluster Manager site.
          3. Generate installation manifests: run the installer with options specific to your infrastructure and desired configuration.
          4. Deploy the cluster: follow the generated instructions to provision and deploy the OpenShift cluster on your infrastructure.

     3. Managed OpenShift:
        - Pros: no infrastructure management required; easiest and quickest to set up.
        - Cons: less control over the environment; potential costs involved.
        - Options:
          - OpenShift Online: managed OpenShift service from Red Hat.
          - Red Hat OpenShift Service on AWS (ROSA): managed OpenShift service on AWS.
          - Other cloud providers: many cloud providers offer similar managed OpenShift services (e.g., Microsoft Azure Red Hat OpenShift).
        - Installation steps:
          1. Choose a provider: select the managed OpenShift service that fits your needs and budget.
          2. Create an account: register for an account with the chosen provider.
          3. Provision the cluster: follow the provider's specific instructions to create a new OpenShift cluster.
          4. Access the cluster: the provider supplies access details for your managed OpenShift cluster.

     Notes: the specific installation steps and options may vary depending on your chosen platform and version of OpenShift. Consider your technical expertise, project requirements, and budget when choosing an installation method.

     Basic Tutorials of OpenShift: Getting Started

     OpenShift offers various installation methods, and the approach you choose will depend on your needs and technical expertise. Here are some different options with step-by-step tutorials:

     1. OpenShift Local (developer sandbox):
        - Pros: quick and easy setup; ideal for learning and individual developers.
        - Cons: not suitable for production use; limited resources.
        - Steps:
          1. Setup: create a free Red Hat Developer account on the Red Hat site, then download and install the crc tool for your operating system.
          2. Start the cluster: run crc setup and follow the instructions to download and configure a virtual machine for your OpenShift cluster, then run crc start to launch the cluster.
          3. Access the cluster: open the OpenShift web console at https://127.0.0.1:8443/console.

     2. Minishift (local Kubernetes for OpenShift development):
        - Pros: lightweight; good for learning OpenShift development workflows.
        - Cons: simulates OpenShift on a single node; not suitable for production.
        - Steps:
          1. Setup: install Minishift for your operating system and configure it with the desired memory and storage allocations.
          2. Start the cluster: run minishift start to create and start a local Kubernetes cluster with OpenShift features.
          3. Access the cluster: open the console at https://localhost:8443/console.

     3. User-Provisioned Infrastructure:
        - Pros: more control over the infrastructure; suitable for small-scale production use.
        - Cons: requires technical expertise to manage the infrastructure.
        - Steps:
          1. Prepare your infrastructure: set up servers with the required operating system and network configuration.
          2. Download the installation program: get the installer from the OpenShift Cluster Manager site.
          3. Generate installation manifests: run the installer with options specific to your infrastructure and desired configuration.
          4. Deploy the cluster: follow the generated instructions to provision and deploy the OpenShift cluster on your infrastructure.

     4. Managed OpenShift:
        - Pros: no infrastructure management required; easiest and quickest to set up.
        - Cons: less control over the environment; potential costs involved.
        - Options: many cloud providers offer managed OpenShift services (e.g., Microsoft Azure Red Hat OpenShift).
        - Steps:
          1. Choose a provider: select the managed OpenShift service that fits your needs and budget.
          2. Create an account: register for an account with the chosen provider.
          3. Provision the cluster: follow the provider's specific instructions to create a new OpenShift cluster.
          4. Access the cluster: the provider supplies access details for your managed OpenShift cluster.

     The post What is OpenShift and use cases of OpenShift? appeared first on DevOpsSchool.com. View the full article
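     The OpenShift Local flow above can be sketched as a small script. This is only a sketch: it dry-runs by default (each command is printed rather than executed), since crc setup and crc start require the crc binary and a Red Hat Developer pull secret.

     ```shell
     #!/bin/sh
     # Sketch of the OpenShift Local (crc) tutorial steps above.
     # Dry-runs by default: each command is printed, not executed.
     # Set DRY_RUN= (empty) to actually run the commands.
     DRY_RUN="${DRY_RUN-echo}"

     $DRY_RUN crc setup          # prepare the host and download the VM bundle
     $DRY_RUN crc start          # boot the single-node OpenShift cluster
     $DRY_RUN crc console --url  # print the web console URL
     ```

     Run as-is to see the command sequence; run with DRY_RUN= on a machine with crc installed to create the cluster.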
  2. Introduction

     Since its first appearance on AWS in 2015, Red Hat OpenShift on AWS has had a similar architecture, regardless of whether it was OpenShift 3 or OpenShift 4, self-managed OpenShift Container Platform (OCP), or managed Red Hat OpenShift Service on AWS (ROSA). All this time, customers have questioned why the Control Plane exists within their AWS account and have explored how to get the most return on investment (ROI) to offset some of the related costs. Red Hat has now released Hosted Control Planes (HCP) for OpenShift. In this post, we delve into the benefits of Hosted Control Planes for OpenShift, examine the recent changes and compare them to the conventional architecture, and highlight the advantages these changes bring to users.

     ROSA classic architecture

     OpenShift on AWS has always combined the resilience model of AWS with that of Red Hat OpenShift itself. There are three master nodes, Amazon Elastic Compute Cloud (Amazon EC2) instances that serve the Control Plane and OpenShift API; three infrastructure nodes that serve the OpenShift routing layer and other cluster-related functions; and a compute layer of worker nodes. All of this exists in the customer account and is spread across multiple Availability Zones (AZs). With ROSA being a managed service, the Red Hat Site Reliability Engineering (SRE) team maintains and manages the OpenShift environment for the customer via an AWS PrivateLink connection to OpenShift within the customer account.

     Traditional OpenShift on AWS architecture

     Common questions asked by customers exploring ROSA include:
     - Why is the Control Plane in my account, when for other AWS services I only have the compute nodes?
     - Which incentive programs and cost control options are best to reduce the resource cost of the Control Plane?
     - What is causing inter-AZ data transfer costs?

     ROSA Hosted Control Plane

     Red Hat has launched OpenShift Hosted Control Planes, which provide several customer benefits.
     With Hosted Control Planes, the OpenShift control plane nodes (i.e., master nodes) move out of the customer account and into a service team account, similar to other AWS service offerings. This change reduces AWS service costs within the customer account, notably for Amazon EC2 and Amazon Elastic Block Store (Amazon EBS). Because the etcd database for the Kubernetes layer of OpenShift lives on the control plane (master) nodes, the inter-AZ data transfer costs of etcd replication for OpenShift resilience are also removed from the customer account. Red Hat Site Reliability Engineers (SRE) manage and maintain the OpenShift cluster directly from the service team account, and the AWS PrivateLink endpoint in the customer account is now used to connect the compute (worker) nodes to the Control Plane in the service team account. The three OpenShift infrastructure nodes are also removed from the customer account, with their services moved either to the control plane (master) nodes or to the compute (worker) nodes; note that the OpenShift router layer moves to the compute nodes. Altogether, this reduces AWS service costs for Amazon EC2 and Amazon EBS for the master and infrastructure nodes, and removes the inter-AZ data transfer costs related to etcd. The Hosted Control Plane has the added benefit of reduced provisioning times, because the control plane nodes and compute nodes are provisioned in parallel rather than sequentially.

     Deploying a ROSA with HCP cluster

     Since it isn't possible to upgrade or convert existing ROSA clusters to the Hosted Control Planes architecture, you must create a new cluster to use ROSA with HCP functionality.

     Prerequisites

     Deploying a ROSA with HCP cluster is straightforward when you use the default settings and let the AWS Identity and Access Management (AWS IAM) resources be created automatically. You can initiate the cluster deployment using the ROSA CLI (rosa).
     Before using rosa to create a cluster with an HCP, ensure you have established the necessary account-wide roles and policies, including operator roles. Let's discuss the important components required to create a ROSA with HCP cluster. You must have the following items:
     - A configured virtual private cloud (VPC)
     - Account-wide roles
     - An OIDC configuration
     - Operator roles

     Virtual Private Cloud (VPC)

     Hosted Control Planes must be deployed into an existing Virtual Private Cloud (VPC). Most customers manage their existing VPC through some form of infrastructure-as-code; please refer to this document for additional details. You can create a VPC manually or by using the Terraform template. Terraform is a tool that allows you to create various resources using a template; there is detailed documentation on how to use the Terraform template to build out the VPC.

     Account-wide STS roles and policies

     On the security side, there are changes to the AWS IAM policies attached to the different components of ROSA: a step toward the principle of least privilege, with an even tighter scope of AWS IAM policies, account-wide roles that need to be created again, and more granular roles for each OpenShift operator. When using a ROSA HCP cluster, you must create the AWS IAM roles specifically designed for ROSA with HCP deployments. The cluster operators use these operator roles to acquire temporary permissions for performing cluster operations. ROSA with HCP clusters support only AWS Security Token Service (AWS STS) authentication. AWS STS is a web service that enables you to request temporary, limited-privilege credentials for users.
     AWS PrivateLink

     In the ROSA classic architecture, AWS PrivateLink enabled the Red Hat SRE teams to connect to the OpenShift cluster and manage the environment on behalf of the customer. Now the Red Hat SRE teams manage the customer environment from the service account, and AWS PrivateLink is used by the worker nodes to communicate with the Control Plane.

     Provisioning

     The commands below create the account roles, operator roles, and OpenID Connect (OIDC) configuration, and deploy the cluster. To explore this in greater detail, read the "Creating ROSA with HCP clusters using the default options" section of the ROSA documentation.

     Account-wide roles: the following command creates the required AWS IAM account roles and policies.

     rosa create account-roles --force-policy-creation

     Operator roles: to create operator roles, run the following command.

     rosa create operator-roles --hosted-cp --prefix <prefix-name> --oidc-config-id <oidc-config-id>

     OIDC configuration: the following command creates your OIDC configuration alongside the AWS resources.
     $ rosa create oidc-config --mode=auto --yes

     Deploy cluster: create your ROSA with HCP cluster with one of the following commands:

     rosa create cluster --private --cluster-name=<cluster_name> --sts --mode=auto --hosted-cp --subnet-ids=<private-subnet-id>

     export REGION=<region_name>
     export ROSA_VERSION=<rosa_version>
     rosa create cluster --cluster-name <cluster_name> --multi-az --hosted-cp --mode=auto --sts --region $REGION --version $ROSA_VERSION --enable-autoscaling --min-replicas <minimum_replicas> --max-replicas <maximum_replicas> --compute-machine-type <instance_type> --host-prefix <host_prefix> --private-link --subnet-ids <subnet_id_1>,<subnet_id_2>,<subnet_id_3>

     For example:

     export REGION=eu-west-1
     export ROSA_VERSION=4.13.13
     rosa create cluster --cluster-name esdp-rosa --multi-az --hosted-cp --mode=auto --sts --region $REGION --version $ROSA_VERSION --enable-autoscaling --min-replicas 3 --max-replicas 3 --compute-machine-type m5.2xlarge --host-prefix 23 --private-link --subnet-ids subnet-1234566789ab,subnet-123456789cd,subnet-12345678

     Cleaning up

     This documentation provides steps to delete a ROSA cluster and its AWS STS resources.

     Migration paths for existing ROSA customers

     At this stage, it isn't possible to perform an in-place upgrade from ROSA classic to ROSA HCP: you must provision a ROSA Hosted Control Planes cluster and then migrate the application workloads. Customers considering this option should explore the Red Hat Migration Toolkit for Containers, which is based on the upstream Konveyor project.

     Conclusion

     In this post, we discussed the benefits of Hosted Control Planes for OpenShift, examined the recent changes, and compared them to the conventional architecture. Hosted Control Planes (HCP) heralds a new era in the deployment and management of OpenShift on AWS. This shift reduces costs, enhances operational resiliency and security, and improves cluster provisioning times.
     We encourage you to explore the new possibilities and benefits of ROSA with HCP within your business.

     Additional resources
     - For more information on how to install ROSA with HCP clusters, please refer to Install ROSA with HCP clusters.
     - For a quick start, please refer to the ROSA quick start guide.
     - For more information on architecture and networking, please refer to ROSA: Architecture and Networking.
     - Ask an OpenShift administrator (E68) | Migration toolkit for containers.

     View the full article
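     To tie the provisioning steps together, the sketch below assembles the documented rosa create cluster flags into a single command string from variables. The cluster name, region, version, and subnet IDs are placeholder values, and the script only prints the command; on a machine with the real rosa CLI and AWS credentials, you could run the printed command to deploy.

     ```shell
     #!/bin/sh
     # Assemble the ROSA with HCP creation command from variables.
     # All values below are placeholders; this script only prints the command.
     REGION="eu-west-1"
     ROSA_VERSION="4.13.13"
     CLUSTER_NAME="demo-hcp"
     SUBNET_IDS="subnet-aaa,subnet-bbb,subnet-ccc"

     CMD="rosa create cluster --cluster-name ${CLUSTER_NAME} --hosted-cp --sts --mode=auto \
     --multi-az --region ${REGION} --version ${ROSA_VERSION} \
     --private-link --subnet-ids ${SUBNET_IDS}"

     echo "${CMD}"
     ```

     Keeping the flags in one place like this also makes it easy to drop the command into a CI pipeline or wrap it with environment-specific values.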
  3. I have a Tekton pipeline that will be posting to a REST API. I need to pass credentials to it, but as far as I can tell from the documentation, the only authentication options for a pipeline are Git and Docker. How would I securely store username/password credentials that I can pass into a pipeline and ultimately convert to Basic Auth for the REST request?
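     One common pattern (separate from Tekton's built-in Git/Docker authentication) is to store the credentials in an ordinary Kubernetes Secret, expose them to a pipeline step as environment variables via secretKeyRef, and build the Basic Auth header inside the step. A sketch, where the secret name api-creds and the variable names are hypothetical and the secret values are simulated with literals:

     ```shell
     #!/bin/sh
     # Hypothetical one-time setup, outside the pipeline:
     #   oc create secret generic api-creds \
     #     --from-literal=username=alice --from-literal=password=s3cret
     # In a Tekton Task step, expose the keys with env + secretKeyRef so they
     # arrive as API_USER / API_PASS. Simulated here with literal values:
     API_USER="alice"
     API_PASS="s3cret"

     # RFC 7617 Basic Auth: base64 of "user:password"
     AUTH_HEADER="Authorization: Basic $(printf '%s:%s' "${API_USER}" "${API_PASS}" | base64)"
     echo "${AUTH_HEADER}"
     # e.g. curl -H "${AUTH_HEADER}" https://api.example.com/endpoint
     ```

     Because the values come from a Secret rather than pipeline parameters, they stay out of PipelineRun definitions and logs (as long as the step does not echo them, unlike this demo).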
  4. The Five Pillars of Red Hat OpenShift Observability It is with great pleasure that we announce additional Observability features coming as part of the OpenShift Monitoring 4.14, Logging 5.8, and Distributed Tracing 2.9 releases. Red Hat OpenShift Observability's plan continues to move forward as our teams tackle key data collection, storage, delivery, visualization, and analytics features with the goal of turning your data into answers. View the full article
  5. We’re pleased to announce Red Hat OpenShift 4.14 is now generally available. Based on Kubernetes 1.27 and CRI-O 1.27, this latest version accelerates modern application development and delivery across the hybrid cloud while keeping security, flexibility, and scalability at the forefront. View the full article
  6. Red Hat OpenShift Cluster Manager (OCM) is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization’s clusters across the hybrid cloud landscape from a single dashboard... View the full article
  7. Being able to enforce airtight application security at the cluster-wide level has been a popular ask from cluster administrators. Key admin user stories include ... View the full article
  8. With the recent release of the official Red Hat Cloud Services Provider for Terraform, customers can now automate the provisioning of Red Hat OpenShift Service on AWS (ROSA) clusters with Terraform. Previously, automating the creation of a ROSA cluster required using the OpenShift Command Line Interface (CLI), either wrapping it in code or using additional tools to automate the necessary CLI commands. Now customers using Terraform can integrate ROSA cluster creation into their existing pipelines. In addition to the Red Hat Cloud Services (RHCS) Provider, Red Hat has made available the ROSA STS Terraform module. This gives customers the option to automate ROSA prerequisites, like operator IAM roles, policies, and identity providers, as a distinct step... View the full article
  9. The Flink Operator is a control plane that deploys and manages the entire lifecycle of Apache Flink applications. The goal of the Flink Operator is to manage applications as a human operator would. It handles cluster startup, deploys jobs, updates apps, and resolves prevalent problems. It can automate operational tasks and comprehensively manage Apache Flink applications. View the full article
  10. Red Hat OpenShift Service on AWS (ROSA) is a fully managed turnkey application platform. It is jointly engineered and supported by Red Hat and AWS through Site Reliability Engineers so customers don’t have to worry about the complexity of infrastructure management. As an application platform running on AWS, a common use case is to connect an application to an AWS managed database. View the full article
  11. Red Hat OpenShift Data Science is an open source machine learning (ML) platform for the hybrid cloud. As you can probably guess from the title of this post, that’s not what I will be discussing here. OpenShift Container Platform supports a rich ecosystem of AI/ML solutions from very simple to complex, including both free open source as well as vendor-supported options. View the full article
  12. GitOps has continued in its popularity and has become the standard way to manage Kubernetes cluster configuration and applications. Red Hat continues to see the widespread adoption of the GitOps methodology across our portfolio as customers look for ways to bring increased efficiency to their operations and development teams. View the full article
  13. In this post, I will discuss how to utilize Azure Key Vault (AKV) with Azure Red Hat OpenShift (ARO) cluster. I will explain the relevant terms and their definitions from the architectural standpoint and how the flow works at a glance, and I will give an example of how to deploy this in the ARO cluster. The objective of this article is to enable you to store and retrieve secrets stored in AKV from your ARO cluster. View the full article
  14. Backup is defined as the process of creating copies of data and storing them in separate locations or mediums, while restore is defined as the process of retrieving the backed-up data and returning it to its original location or system or to a new one. In other words, backup is akin to data preservation, and restore is in essence data retrieval. View the full article
  15. In this article, I will demonstrate how to monitor Ansible Automation Platform (AAP) running on OpenShift, using user-workload monitoring with Prometheus and Grafana... View the full article
  16. At some point during the OpenShift deployment phase, a question about project onboarding comes up, "How can a new customer or tenant be onboarded so they can deploy their own workload onto the cluster(s)?" While there are different ways from a process perspective (Service Now, Jira, etc.), I focus on the Kubernetes objects that must be created on each cluster. In A Guide to GitOps and Argo CD with RBAC, I described setting up GitOps RBAC rules so tenants can work with their (and only their) projects. This article demonstrates another possibility for deploying per tenant and per cluster ... View the full article
  17. Recently, I published the blog Provisioning OpenShift clusters using GitOps with ACM, explaining how to create OpenShift clusters with RHACM using GitOps with ArgoCD. The OpenShift installation type was IPI and valid for most platforms: Azure, AWS, GCP, vSphere, etc., but not for baremetal. If you've ever installed an OpenShift cluster in baremetal and disconnected, you know how different it is from any other installation. View the full article
  18. OpenShift Virtualization is Red Hat's solution for companies trending toward modernization by adopting a containerized architecture for their applications, but find virtualization remains a necessary part of their data center deployment strategy. View the full article
  19. Creating virtual machines (VMs) from golden images is a common practice. It minimizes the deployment time for new VMs and provides a familiar environment for the VM's owner. The admin benefits from creating golden images in an automated manner because it reflects the current configuration. View the full article
  20. In today’s fast-paced digital landscape, containerization has become the norm, and Kubernetes has emerged as the de facto standard for container orchestration. However, with the increasing complexity of Kubernetes deployments, it has become more critical than ever to monitor and secure those environments. View the full article