Private integration of your Kubernetes Cluster with Amazon S3 File Gateway


Introduction

As increasingly complex applications are migrated to the cloud, they often end up hosted across different platforms and require many integration points. One such example is a Windows-based application that needs to share its storage with services hosted on a Kubernetes platform.

Sharing storage privately between your Kubernetes clusters and Windows Amazon Elastic Compute Cloud (Amazon EC2) instances can be a challenge, especially when you need to keep the connectivity private and ensure services remain highly available for critical applications.

In this post, we look at the challenges presented by this requirement, discuss options to consider, and provide a solution that uses AWS Storage Gateway, specifically a highly available Amazon Simple Storage Service (Amazon S3) File Gateway, to integrate with Red Hat OpenShift Service on AWS (ROSA).

We walk through the required configuration and provide step-by-step guidance on creating Network File System (NFS) storage using a private Amazon S3 File Gateway in High Availability (HA) mode and mounting it to a ROSA container for read-write access.

Solution overview

Sharing storage across services or platforms on AWS can be approached in different ways, including Amazon FSx (Windows) and Amazon Elastic File System (Amazon EFS) (Linux). However, the case of a Windows-based application using a Windows operating system (OS)-based Network File System (NFS) that needs to communicate with a Kubernetes platform requires special care in how we approach the solution.

First, let’s look at Amazon EFS. This is a good option for the Linux-based ROSA platform, since we can use Container Storage Interface (CSI) operators and consume it directly from the containers. On the Windows side, however, an NFS client must be installed, and the Windows NFS client supports only NFS version 3, while Amazon EFS requires NFS version 4.x.

Amazon FSx is a managed, redundant service that runs the Windows Server Message Block (SMB) protocol and is compatible with server applications designed for Windows Server environments. This is good for the Windows stack; however, it won’t allow us to integrate with the Linux-based Kubernetes platform (i.e., ROSA) due to the SMB driver requirement.

To meet this requirement, the Amazon S3 File Gateway can provide the capability needed to share Amazon S3-backed storage across both platforms in a secure way.

Walkthrough

Figure 1 shows the architecture of Amazon S3 File Gateway private integration with ROSA Cluster.

The solution we provide in this post is based on provisioning two private Amazon S3 File Gateways behind a Network Load Balancer (NLB) for high availability (HA) across two Availability Zones (AZs). We then create a Persistent Volume (PV) with the NLB endpoint as the target on the ROSA cluster. Finally, we mount this volume on the application container for read-write access, as well as on the Windows Amazon EC2 instances that need to share the storage.

The solution is implemented using the following key steps:

  • Create an Amazon Virtual Private Cloud (VPC) for the setup of the Amazon S3 File Gateway on Amazon EC2 instances.
  • Create an AWS Storage Gateway VPC endpoint and an Amazon S3 Gateway VPC endpoint for private access to the Amazon S3 bucket.
  • (Optional) Set up AWS Systems Manager (SSM) connectivity for the AWS Storage Gateway VPC.
  • (Optional) Set up VPC connectivity to another VPC to access the storage gateway using VPC peering.
  • (Optional) Provision an additional private Amazon EC2 instance to connect to the AWS Storage Gateway EC2 instances using SSM.
  • Create activation keys for the AWS Storage Gateway EC2 instances.
  • Set up AWS Storage Gateway using the activation keys.
  • Provision a Network Load Balancer (NLB) for the Amazon S3 File Gateway instances.
  • Mount your NFS storage to containers running on the ROSA cluster.
  • Mount your NFS storage to Windows EC2.

Prerequisites

For this walkthrough, you should have the following prerequisites, which are the building blocks for this solution.

Red Hat OpenShift Service on AWS (ROSA): ROSA is a fully managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications.

Amazon S3 File Gateway: Amazon S3 File Gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB).

Network Load Balancer: Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in one or more AZs. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to most workloads.

Solution deployment

To deploy the solution, you perform the following steps.

Setting up environment variables

Export the ROSA cluster and Windows workload VPC information into environment variables. We refer to these in later steps.

export ROSA_VPC_CIDR=10.0.0.0/16
export ROSA_CLUSTER_VPC_ID=vpc-081ec835f3ROSAVPC
export WINDOWS_VPC_CIDR=16.0.0.0/16
export WINDOWS_VPC_ID=vpc-081ec835f3WINDOWSVPC

(Optional) Set up the AWS Storage Gateway VPC and subnets

You can skip this section if you already have a VPC with private subnets in place. Here we create a VPC with two private subnets to support multi-AZ and to run everything on a private network.

  1. Create a VPC using the following create-vpc command. We are using the us-east-1 Region and the 10.0.0.0/16 Classless Inter-Domain Routing (CIDR) range for the VPC; you may use any other CIDR range and AWS Region. The command returns the ID of the new VPC. Export the VPC Id for later steps.
export AWS_REGION=us-east-1
export VPC_CIDR=10.0.0.0/16
aws ec2 create-vpc --cidr-block $VPC_CIDR \
--query Vpc.VpcId \
--output text
export VPC_ID=vpc-081ec835f3EXAMPLE
  2. We need to enable DNS hostnames to attach VPC endpoints to the VPC in a later section. Modify the VPC created in the previous step to enable DNS hostnames using the following modify-vpc-attribute command.
aws ec2 modify-vpc-attribute --vpc-id $VPC_ID \
--enable-dns-hostnames "{\"Value\":true}"
  3. Create a private subnet in your VPC with a 10.0.1.0/24 CIDR block using the following create-subnet command. We are using the us-east-1a and us-east-1b AZs; you may use any other CIDR range and AZs in your AWS Region. Export the value of SubnetId for later steps.
export AZ1=use1-az1
export PRIVATE_SUBNET1_CIDR=10.0.1.0/24
aws ec2 create-subnet --vpc-id $VPC_ID \
--cidr-block $PRIVATE_SUBNET1_CIDR \
--availability-zone-id=$AZ1
export PRIVATE_SUBNET1_ID=subnet-0bEXAMPLESUBNETID1
  4. Create a second private subnet in your VPC with a 10.0.2.0/24 CIDR block. You may use any other CIDR range and AZ in your AWS Region. Export the SubnetId and RouteTableId for later steps.
export AZ2=use1-az2
export PRIVATE_SUBNET2_CIDR=10.0.2.0/24
aws ec2 create-subnet --vpc-id $VPC_ID \
--cidr-block $PRIVATE_SUBNET2_CIDR \
--availability-zone-id=$AZ2
export PRIVATE_SUBNET2_ID=subnet-0bEXAMPLESUBNETID2
export PRIVATE_SUBNET_ROUTE_TABLE_ID=rtb-0bEXAMPLEROUTETABLE

Set up the Storage Gateway VPC endpoint and Amazon S3 Gateway endpoint for private access

  1. We need a security group for the AWS Storage Gateway VPC endpoint that allows connectivity to the Amazon S3 File Gateway. Create a security group for the Storage Gateway VPC endpoint in your VPC using the following create-security-group command. The command returns the security group Id of the new security group. Export the security group Id for later steps.
aws ec2 create-security-group \
--group-name StorageGatewayEndpointSecurityGroup \
--description "Storage Gateway Endpoint Security Group" \
--vpc-id $VPC_ID
export SGId=sg-1234567890EXAMPLEf0
  2. Amazon S3 File Gateway requires multiple ports for its operation; refer to Port Requirements for more details. Configure inbound rules on the security group from the previous step for your VPC CIDR range using the following authorize-security-group-ingress command. The command returns true with the security group rule added.
aws ec2 authorize-security-group-ingress --group-id $SGId \
--protocol tcp \
--port 443 \
--cidr $VPC_CIDR

Repeat the above command for ports 1026, 1027, 1028, 1031, 2222, and 80.
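The repeated calls can be collapsed into a simple loop; this is a sketch that assumes the $SGId and $VPC_CIDR variables exported earlier:

```shell
# Open the remaining Storage Gateway ports in one pass.
# Port 443 was already added above; 1026-1028, 1031, 2222, and 80
# cover the gateway's remaining control and activation traffic.
for port in 1026 1027 1028 1031 2222 80; do
  aws ec2 authorize-security-group-ingress --group-id "$SGId" \
    --protocol tcp \
    --port "$port" \
    --cidr "$VPC_CIDR"
done
```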

  3. Create the Storage Gateway VPC endpoint for your VPC using create-vpc-endpoint. Use your VPC Id, private subnet Ids, and the Storage Gateway endpoint security group. The command returns VpcEndpoint information in the pending state, which changes to available in a few minutes. Export the VPC endpoint Id for later steps.
aws ec2 create-vpc-endpoint \
 --vpc-id $VPC_ID \
 --vpc-endpoint-type Interface \
 --service-name com.amazonaws.$AWS_REGION.storagegateway \
 --subnet-ids $PRIVATE_SUBNET1_ID $PRIVATE_SUBNET2_ID \
 --security-group-id $SGId
export STORAGE_GATEWAY_VPC_ENDPOINT_ID=vpce-0a4ee0EXAMPLE
  4. We need an Amazon S3 Gateway endpoint to establish private connectivity to the Amazon S3 bucket from the VPC. Create an Amazon S3 Gateway endpoint for your VPC using create-vpc-endpoint, with your VPC Id and the private subnet route table Id. The command returns VpcEndpoint information in the pending state, which changes to available in a few minutes. Export the VPC endpoint Id for later steps.
aws ec2 create-vpc-endpoint \
 --vpc-id $VPC_ID \
 --vpc-endpoint-type Gateway \
 --service-name com.amazonaws.$AWS_REGION.s3 \
--route-table-ids $PRIVATE_SUBNET_ROUTE_TABLE_ID
export S3_GATEWAY_VPC_ENDPOINT_ID=vpce-0a4ee0EXAMPLE

(Optional) Set up SSM connectivity for the Storage Gateway VPC

Skip this step if you can connect to the AWS Storage Gateway instances from on premises or another VPC via AWS Transit Gateway. We need to send the activation command via curl on port 80 to the Storage Gateway instances, so we create SSM endpoints for managing the AWS Storage Gateway EC2 instances via SSM. You may refer to the Manage Private EC2 without Internet access section for more information.

  1. Create a security group for the SSM VPC endpoints in your VPC using the following create-security-group command. The command returns the security group Id of the new security group. Export the security group Id for later steps.
aws ec2 create-security-group --group-name SSMEndpointSecurityGroup \
--description "SSM Endpoint Security Group" \
--vpc-id $VPC_ID
export SSMSgId=sg-1234567890abcdef0
  2. Configure inbound rules on the security group from the previous step for your VPC CIDR range using the following authorize-security-group-ingress command. The command returns true with the security group rule added.
aws ec2 authorize-security-group-ingress --group-id $SSMSgId \
--protocol tcp \
--port 443 \
--cidr $VPC_CIDR
  3. Create the SSM VPC endpoint for your VPC using create-vpc-endpoint. Use your VPC Id, private subnet Ids, and the SSM endpoint security group. The command returns VpcEndpoint information in the pending state, which changes to available in a few minutes. Export the VPC endpoint Id for later steps.
aws ec2 create-vpc-endpoint \
 --vpc-id $VPC_ID \
 --vpc-endpoint-type Interface \
 --service-name com.amazonaws.$AWS_REGION.ssm \
 --subnet-ids $PRIVATE_SUBNET1_ID $PRIVATE_SUBNET2_ID \
 --security-group-id $SSMSgId
export SSM_VPC_ENDPOINT_ID=vpce-0a4ee0EXAMPLE
  4. Create the SSM Messages VPC endpoint for your VPC using create-vpc-endpoint. Use your VPC Id, private subnet Ids, and the SSM endpoint security group. The command returns VpcEndpoint information in the pending state, which changes to available in a few minutes. Export the VPC endpoint Id for later steps.
aws ec2 create-vpc-endpoint \
 --vpc-id $VPC_ID \
 --vpc-endpoint-type Interface \
 --service-name com.amazonaws.$AWS_REGION.ssmmessages \
 --subnet-ids $PRIVATE_SUBNET1_ID $PRIVATE_SUBNET2_ID \
 --security-group-id $SSMSgId
export SSM_MESSAGES_VPC_ENDPOINT_ID=vpce-0b4ee0EXAMPLE
  5. Create the Amazon EC2 Messages VPC endpoint for your VPC using create-vpc-endpoint. Use your VPC Id, private subnet Ids, and the SSM endpoint security group. The command returns VpcEndpoint information in the pending state, which changes to available in a few minutes. Export the VPC endpoint Id for later steps.
aws ec2 create-vpc-endpoint \
 --vpc-id $VPC_ID \
 --vpc-endpoint-type Interface \
 --service-name com.amazonaws.$AWS_REGION.ec2messages \
 --subnet-ids $PRIVATE_SUBNET1_ID $PRIVATE_SUBNET2_ID \
 --security-group-id $SSMSgId
export EC2_MESSAGES_VPC_ENDPOINT_ID=vpce-0c4ee0EXAMPLE

(Optional) Set up VPC connectivity to another VPC to access the Storage Gateway using VPC peering

You can skip this section if you prefer to use AWS Transit Gateway to connect to the Storage Gateway VPC from other VPCs and on premises. We create a peering connection from our Storage Gateway VPC to the ROSA cluster VPC.

  1. Create a VPC peering connection from your application VPC to the Storage Gateway VPC using create-vpc-peering-connection. The command returns VpcPeeringConnection information. Export the VpcPeeringConnectionId for later steps.
aws ec2 create-vpc-peering-connection --vpc-id $ROSA_CLUSTER_VPC_ID \
--peer-vpc-id $VPC_ID
export VPC_PEERING_CONNECTION_ID=pcx-1a2b3c4d
  2. Accept the VPC peering connection request using accept-vpc-peering-connection, with the VpcPeeringConnectionId generated in the previous step.
 aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID
  3. Add routes in both private subnets of the AWS Storage Gateway VPC to use the peering connection for the ROSA cluster and Windows workload CIDRs using create-route. The command returns true.
aws ec2 create-route --route-table-id $PRIVATE_SUBNET_ROUTE_TABLE_ID \
--destination-cidr-block $ROSA_VPC_CIDR \
--vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID

Repeat the above step to update the route tables of the ROSA cluster private subnets and the Windows workloads VPC for connectivity to the AWS Storage Gateway VPC.
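The return route on the ROSA side can be sketched as follows. ROSA_PRIVATE_SUBNET_ROUTE_TABLE_ID is a hypothetical placeholder for your cluster's private subnet route table Id (look it up in your environment), and the Windows workloads VPC needs an equivalent route via its own peering connection:

```shell
# Hypothetical route table Id for the ROSA cluster's private subnets;
# substitute the real value from your environment.
export ROSA_PRIVATE_SUBNET_ROUTE_TABLE_ID=rtb-0aEXAMPLEROSA

# Return route: send traffic destined for the Storage Gateway VPC CIDR
# through the peering connection created above.
aws ec2 create-route --route-table-id $ROSA_PRIVATE_SUBNET_ROUTE_TABLE_ID \
--destination-cidr-block $VPC_CIDR \
--vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID
```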

Set up the private Amazon S3 File Gateway on Amazon EC2 instances

  1. We need inbound rules that allow connectivity to the Storage Gateway instances within the VPC and from the ROSA cluster VPC and Windows workload VPC. Create a security group for the Storage Gateway EC2 instances using the following create-security-group command. The command returns the security group Id of the new security group. Export the security group Id for later steps.
aws ec2 create-security-group --group-name EC2SecurityGroup \
--description "EC2 Security Group" \
--vpc-id $VPC_ID
export EC2SgId=sg-1234567890EXAMPLE0
  2. Configure inbound rules on the security group from the previous step for your ROSA cluster and Windows workloads CIDR ranges using the following authorize-security-group-ingress command. The command returns true with the security group rule added. Repeat the commands with $WINDOWS_VPC_CIDR for the Windows workloads VPC.
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 2049 \
--cidr $ROSA_VPC_CIDR
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 111 \
--cidr $ROSA_VPC_CIDR
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 20048 \
--cidr $ROSA_VPC_CIDR
  3. Configure inbound rules on the security group from the previous step for your AWS Storage Gateway VPC CIDR range using the following authorize-security-group-ingress command. The command returns true with the security group rule added.
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 80 \
--cidr $VPC_CIDR
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 111 \
--cidr $VPC_CIDR
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 2049 \
--cidr $VPC_CIDR
aws ec2 authorize-security-group-ingress --group-id $EC2SgId \
--protocol tcp \
--port 20048 \
--cidr $VPC_CIDR
  4. To run an Amazon EC2 instance as an Amazon S3 File Gateway, we need to add at least one Amazon EBS volume for cache storage with a size of at least 150 GiB, in addition to the root volume. Create a mapping.json file with the following content to add an Amazon EBS volume for the Storage Gateway instance. For increased performance, we recommend allocating multiple Amazon EBS volumes for cache storage of at least 150 GiB each.
[
    {
        "DeviceName": "/dev/sdh",
        "Ebs": {
            "VolumeSize": 150
        }
    }
]
  5. Retrieve the latest Storage Gateway AMI Id for your Region using the SSM get-parameter command. The command returns a Parameter object. Export its Value field into AMI_ID for later steps.
aws ssm get-parameter --name /aws/service/storagegateway/ami/FILE_S3/latest
export AMI_ID=ami-01234Example1234567
  6. Launch an Amazon EC2 instance from the AMI Id of the previous step using run-instances. We recommend the m5.xlarge instance type for the AWS Storage Gateway EC2 instance. The command returns the instance information. Export the value of INSTANCE1_ID for later steps.
aws ec2 run-instances \
--image-id $AMI_ID \
--instance-type m5.xlarge \
--subnet-id $PRIVATE_SUBNET1_ID \
--security-group-ids $EC2SgId \
--block-device-mappings file://mapping.json
export INSTANCE1_ID=i-0ce89be26EXAMPLE  

Repeat the above step to launch another instance in the other private subnet and export its instanceId into the INSTANCE2_ID variable for later steps.
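The second gateway instance follows the same pattern in the second subnet; a sketch reusing the variables exported above:

```shell
# Launch the second Storage Gateway instance in the second private subnet
# so the two gateways sit in different Availability Zones.
aws ec2 run-instances \
--image-id $AMI_ID \
--instance-type m5.xlarge \
--subnet-id $PRIVATE_SUBNET2_ID \
--security-group-ids $EC2SgId \
--block-device-mappings file://mapping.json
# Export the InstanceId returned by the command.
export INSTANCE2_ID=i-0ce89be27EXAMPLE
```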

(Optional) Provision an additional private Amazon EC2 instance to connect to the Storage Gateway EC2 instances using SSM

Skip this step if you can connect to the Storage Gateway instances from on premises or another VPC via AWS Transit Gateway. We need to send the activation command via curl on port 80 to the Storage Gateway instances. Since we manage the Storage Gateway instances via SSM, we create a temporary Amazon EC2 instance just for posting the activation request, and terminate it in the next section after successfully generating the activation keys.

  1. Create a text file named instance-profile.json and paste the following content, which allows Amazon EC2 to assume the role on your behalf:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
  2. Create an Amazon EC2 instance profile role using create-role.
aws iam create-role --role-name SSMEC2Role \
--assume-role-policy-document file://instance-profile.json
  3. Attach the AWS managed policy AmazonSSMManagedInstanceCore to the role created in the previous step using attach-role-policy.
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore \
--role-name SSMEC2Role
  4. Run the create-instance-profile command followed by the add-role-to-instance-profile command to create an AWS IAM instance profile named SSM-EC2-Instance-Profile. The instance profile allows Amazon EC2 to pass the AWS IAM role named SSMEC2Role to an Amazon EC2 instance when the instance is first launched:
aws iam create-instance-profile --instance-profile-name SSM-EC2-Instance-Profile
aws iam add-role-to-instance-profile --instance-profile-name SSM-EC2-Instance-Profile \
--role-name SSMEC2Role
  5. Create a security group for the additional Amazon EC2 instance using the following create-security-group command. The command returns the security group Id of the new security group. Export the security group Id for later steps.
aws ec2 create-security-group --group-name TempEC2SecurityGroup \
--description "Temp EC2 Security Group" \
--vpc-id $VPC_ID
export TEMPInstanceSgId=sg-1234567890EXAMPLE0
  6. Launch a micro Amazon EC2 instance using the run-instances command with the instance profile created above. Use any Amazon Linux AMI Id for your Region.
aws ec2 run-instances \
--image-id ami-026b57f3c383c2eec \
--instance-type t2.micro \
--subnet-id $PRIVATE_SUBNET1_ID \
--security-group-ids $TEMPInstanceSgId \
--iam-instance-profile Name=SSM-EC2-Instance-Profile
export TEMP_INSTANCE_ID=i-0cb0fdEXAMPLE

Create activation keys for the AWS Storage Gateway EC2 instances

  1. To get an activation key for your gateway, make a web request to the gateway virtual machine (VM); it returns a redirect that contains the activation key. This activation key is passed as a parameter to the ActivateGateway API action to specify the configuration of your gateway. Connect to the Amazon EC2 instance created above if you are connecting via SSM, or to any of your instances reachable through Transit Gateway that can access the private IP of the AWS Storage Gateway EC2 instances. Generate an activation key with the following curl command. The command returns a 302 HTTP code. Export the value of activationKey from the Location header.
export PRIVATE_IP_OF_GATEWAY_INSTANCE=10.0.2.183
export AWS_REGION=us-east-1
export STORAGE_GATEWAY_VPC_ENDPOINT_ID_OR_IP=vpce-0b4ee0ab4de2e8e3c  # Try the Storage Gateway VPC endpoint subnet IPs if the curl command fails with the VPC endpoint Id.
curl -v "${PRIVATE_IP_OF_GATEWAY_INSTANCE}/?activationRegion=${AWS_REGION}&gatewayType=FILE_S3&vpcEndpoint=${STORAGE_GATEWAY_VPC_ENDPOINT_ID_OR_IP}&endpointType=STANDARD"
export ACTIVATION_KEY=12345-ABCDE-AB1CD-EFGHI-2JK34

Repeat the above step for the other AWS Storage Gateway EC2 instance.
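The same request against the second gateway instance's private IP yields its own key; a sketch with hypothetical placeholder values, reusing the region and endpoint variables above:

```shell
# Hypothetical private IP of the second gateway instance; substitute your own.
export PRIVATE_IP_OF_GATEWAY_INSTANCE2=10.0.2.184
curl -v "${PRIVATE_IP_OF_GATEWAY_INSTANCE2}/?activationRegion=${AWS_REGION}&gatewayType=FILE_S3&vpcEndpoint=${STORAGE_GATEWAY_VPC_ENDPOINT_ID_OR_IP}&endpointType=STANDARD"
# Export the activationKey value from the Location header of the 302 response.
export ACTIVATION_KEY2=12345-ABCDE-AB1CD-EFGHI-2JK35
```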

  2. (Optional) Terminate the Amazon EC2 instance and delete the security group created for SSM connectivity, as they are no longer required for the setup.
# Terminate the temporary instance
aws ec2 terminate-instances \
--instance-ids $TEMP_INSTANCE_ID
# Delete its security group once the instance has terminated
aws ec2 delete-security-group --group-id $TEMPInstanceSgId

Set up AWS Storage Gateway using the activation key

  1. Activate the AWS Storage Gateway using activate-gateway. The command returns the GatewayARN. Export the value of GatewayARN for later steps. You can update the time zone to your specific Region.
aws storagegateway activate-gateway --gateway-region $AWS_REGION \
--activation-key $ACTIVATION_KEY \
--gateway-name Instance1Gateway \
 --gateway-timezone GMT-5:00 \
--gateway-type FILE_S3
export GatewayARN=arn:aws:storagegateway:<region>:<account-id>:gateway/sgw-SAMPLE
  2. Run the list-local-disks command to fetch the DiskId for the AWS Storage Gateway. The command returns a Disks object with the DiskId. Export the value of DiskId for later steps.
aws storagegateway list-local-disks --gateway-arn $GatewayARN
export DISK_ID=/dev/nvme1n1
  3. Run the add-cache command to add the cache disk to the Storage Gateway. The command returns the GatewayARN.
aws storagegateway add-cache --gateway-arn $GatewayARN \
--disk-ids $DISK_ID
  4. We need an Amazon S3 bucket to store objects from the AWS Storage Gateway EC2 instances. Create an Amazon S3 bucket using create-bucket. The command returns the bucket name.
export S3BUCKET_NAME=xyz-example
aws s3api create-bucket --bucket $S3BUCKET_NAME
  5. Create a text file named storage-gateway-trust-policy.json with the following content.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "storagegateway.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
  6. Create an AWS IAM role using the create-role command for the File Gateway to assume.
export STORAGE_GATEWAY_S3_BUCKET_ROLE=StorageGatewayS3BucketRole
aws iam create-role --role-name $STORAGE_GATEWAY_S3_BUCKET_ROLE \
--assume-role-policy-document file://storage-gateway-trust-policy.json
  7. Create a text file named storage-gateway-s3-access-policy.json with the following content. Update the bucket name to your bucket in the JSON file.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketLocation",
                "s3:GetBucketVersioning",
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::$S3BUCKET_NAME",
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::$S3BUCKET_NAME/*",
            "Effect": "Allow"
        }
    ]
}
  8. Add the policy to StorageGatewayS3BucketRole for accessing the Amazon S3 bucket from the Storage Gateway using put-role-policy. Export the policy name for later steps.
export S3_ACCESS_POLICY=s3-access-policy
aws iam put-role-policy --role-name StorageGatewayS3BucketRole \
--policy-name $S3_ACCESS_POLICY \
--policy-document file://storage-gateway-s3-access-policy.json
  9. Create the NFS file share using create-nfs-file-share. The command returns the FileShareARN. Export the file-share-name into NFS_STORAGE_NAME, because it is required when mounting the NFS storage. Set S3_ACCESS_ROLE to the ARN of the role created above and S3_BUCKET_ARN to the ARN of your bucket.
export NFS_STORAGE_NAME=nfsStorage
export S3_ACCESS_ROLE=arn:aws:iam::<account-id>:role/$STORAGE_GATEWAY_S3_BUCKET_ROLE
export S3_BUCKET_ARN=arn:aws:s3:::$S3BUCKET_NAME
aws storagegateway create-nfs-file-share --client-token StorageGatewayToken \
--role $S3_ACCESS_ROLE \
--location-arn $S3_BUCKET_ARN \
--gateway-arn $GatewayARN \
--client-list $ROSA_VPC_CIDR \
--file-share-name $NFS_STORAGE_NAME \
--cache-attributes "{\"CacheStaleTimeoutInSeconds\":300}"

Rerun the above steps to set up AWS Storage Gateway on the other instance.

Provision a Network Load Balancer (NLB) for Amazon S3 File Gateway instances

  1. We add a Network Load Balancer (NLB) in front of our AWS Storage Gateway instances. Provision an NLB using create-load-balancer. The command returns a LoadBalancers object. Export the LoadBalancerArn and DNSName for later steps.
aws elbv2 create-load-balancer --name storage-gateway-load-balancer \
--type network \
--subnets $PRIVATE_SUBNET1_ID $PRIVATE_SUBNET2_ID \
--scheme internal
export LOAD_BALANCER_ARN=arn:aws:elasticloadbalancing:<region>:<account-id>:loadbalancer/net/storage-gateway-load-balancer/B123EXAMPLE134567
export LOAD_BALANCER_DNS_NAME=storage-gateway-load-balancer-b12CEXAMPLE123456.elb.us-east-1.amazonaws.com
  2. Create a target group using create-target-group. The command returns TargetGroups. Export the TargetGroupArn for later steps.
export PORT=2049
aws elbv2 create-target-group --name StorageGatewayTargetGroup2049 \
--protocol TCP \
--port $PORT \
--target-type instance \
--vpc-id $VPC_ID
export TARGET_GROUP_ARN=arn:aws:elasticloadbalancing:<region>:<account-id>:targetgroup/StorageGatewayTargetGroup/22c4e2e071fef6e4
  3. Create a listener on the Network Load Balancer (NLB) using create-listener. The command returns a Listeners object.
aws elbv2 create-listener \
--load-balancer-arn $LOAD_BALANCER_ARN \
--protocol TCP \
--port $PORT \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
  4. Register the AWS Storage Gateway instances as targets of the target group using register-targets.
aws elbv2 register-targets \
--target-group-arn $TARGET_GROUP_ARN \
--targets Id=$INSTANCE1_ID Id=$INSTANCE2_ID

Repeat steps 2, 3, and 4 for ports 111 (TCP) and 20048 (NFSv3) on the same NLB.
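Steps 2 through 4 can be repeated per port with a loop; this is a sketch assuming the variables exported above, with each port getting its own target group and listener:

```shell
# Create a target group, listener, and target registrations for the
# remaining NFS ports (111 = portmapper, 20048 = mountd).
for PORT in 111 20048; do
  TARGET_GROUP_ARN=$(aws elbv2 create-target-group \
    --name StorageGatewayTargetGroup$PORT \
    --protocol TCP \
    --port $PORT \
    --target-type instance \
    --vpc-id $VPC_ID \
    --query 'TargetGroups[0].TargetGroupArn' \
    --output text)
  aws elbv2 create-listener \
    --load-balancer-arn $LOAD_BALANCER_ARN \
    --protocol TCP \
    --port $PORT \
    --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
  aws elbv2 register-targets \
    --target-group-arn $TARGET_GROUP_ARN \
    --targets Id=$INSTANCE1_ID Id=$INSTANCE2_ID
done
```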

Mount your NFS Storage to Containers running on ROSA Cluster

We have successfully completed the AWS Storage Gateway setup and exposed it through the NLB. Now it is time to mount it as NFS storage in our application containers.

  1. Log in to the ROSA cluster and create a Persistent Volume (PV) with the Network Load Balancer DNS name as the NFS server. Update NFS_STORAGE_NAME and LOAD_BALANCER_DNS_NAME as per your environment.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 
spec:
  capacity:
    storage: 5Gi 
  accessModes:
  - ReadWriteOnce 
  nfs:
    path: /$NFS_STORAGE_NAME
    server: $LOAD_BALANCER_DNS_NAME 
  persistentVolumeReclaimPolicy: Retain
  2. Create a Persistent Volume Claim (PVC) for the Persistent Volume (PV) created previously.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0001
  storageClassName: ""
  3. Create a sample application pod to mount the Persistent Volume Claim (PVC).
apiVersion: v1
kind: Pod
metadata:
 name: test-app
spec:
 volumes:
   - name: nfs-storage-vol
     persistentVolumeClaim:
       claimName: nfs-claim1
 containers:
   - name: test-app
     image: centos:latest
     command: [ "/bin/bash", "-c", "--" ]
     args: [ "while true; do touch /mnt/helloworld.txt && echo 'Hello NFS from S3 File Gateway' >> /mnt/helloworld.txt && sleep 60; done;" ]
     volumeMounts:
       - mountPath: "/mnt"
         name: nfs-storage-vol
  4. Once created, the Pod status changes to Running and the helloworld.txt file is created in the Storage Gateway's Amazon S3 bucket.
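To confirm the write path end to end, you can list the bucket from any shell with AWS credentials; a sketch assuming the $S3BUCKET_NAME variable exported earlier:

```shell
# The file written by the pod should appear as an object in the bucket
# once the gateway uploads it; the upload may lag by a short interval.
aws s3 ls s3://$S3BUCKET_NAME/
# Expect an entry for helloworld.txt in the listing.
```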

Mount your NFS Storage to Windows Amazon EC2

Let’s mount the same storage to Windows Amazon EC2 instance running in Windows Workloads VPC to share data with the ROSA containers.

  1. Log in to your Windows EC2 instance running in the other VPC.
  2. Open a Windows PowerShell terminal with administrator privileges.
  3. Run the net use command to mount the NFS storage on the Windows Amazon EC2 instance. Update NFS_STORAGE_NAME and LOAD_BALANCER_DNS_NAME as per your environment.
net use y: $LOAD_BALANCER_DNS_NAME:/$NFS_STORAGE_NAME
  4. Once completed, the Y: drive is created, and you can read and write files on this drive; the files are stored in the Amazon S3 bucket.

That’s it! You have successfully mounted the Amazon S3 File Gateway shared folder on the ROSA Linux containers and Windows instance.

Cleanup

Run the following commands to clean up all the resources created for this post.

# Delete the sample application pod, PVC, and PV from the ROSA cluster
oc delete pod test-app
oc delete pvc nfs-claim1
oc delete pv pv0001
# Delete Network Load Balancer
aws elbv2 delete-load-balancer --load-balancer-arn $LOAD_BALANCER_ARN
# Delete Target Group for Network Load Balancer. 
# Make sure you also delete your target group for Port 2049, 20048 & 111
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_ARN
# Delete Storage Gateway 
aws storagegateway delete-gateway --gateway-arn $GatewayARN
# Delete StorageGatewayS3BucketRole Policy
aws iam delete-role-policy --role-name $STORAGE_GATEWAY_S3_BUCKET_ROLE \
--policy-name $S3_ACCESS_POLICY
# Delete StorageGatewayS3BucketRole Role
aws iam delete-role --role-name $STORAGE_GATEWAY_S3_BUCKET_ROLE
# Empty the content of S3 bucket before deletion
aws s3 rm s3://$S3BUCKET_NAME --recursive
# Delete S3 bucket
aws s3api delete-bucket --bucket $S3BUCKET_NAME
# Terminate Storage Gateway Instances
aws ec2 terminate-instances --instance-ids $INSTANCE1_ID $INSTANCE2_ID
# Delete Peering Connection
# Also delete route of Peering Connection from ROSA & Windows Workloads VPC
aws ec2 delete-vpc-peering-connection --vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID
# Delete VPC Endpoints for the Storage Gateway VPC
aws ec2 delete-vpc-endpoints --vpc-endpoint-ids $SSM_VPC_ENDPOINT_ID \
$SSM_MESSAGES_VPC_ENDPOINT_ID \
$EC2_MESSAGES_VPC_ENDPOINT_ID \
$STORAGE_GATEWAY_VPC_ENDPOINT_ID \
$S3_GATEWAY_VPC_ENDPOINT_ID
# Delete Security Group 
aws ec2 delete-security-group --group-id $EC2SgId
aws ec2 delete-security-group --group-id $SSMSgId
aws ec2 delete-security-group --group-id $SGId 
# Delete Private Subnets
aws ec2 delete-subnet --subnet-id $PRIVATE_SUBNET1_ID
aws ec2 delete-subnet --subnet-id $PRIVATE_SUBNET2_ID
# Delete VPC 
aws ec2 delete-vpc --vpc-id $VPC_ID

Conclusion

In this post, we showed you how Amazon S3 File Gateway can provide the capability needed to share an Amazon S3 backed storage across Windows-based Amazon EC2 servers and Kubernetes platforms, such as ROSA. This solution ensures the connectivity remains over the private network and is highly available by ensuring services are deployed across AWS AZs within the AWS Region.

For more details, you can refer to the AWS ROSA documentation as well as the Amazon S3 File Gateway documentation.
