Jump to content

25 Free Questions on the AWS Certified SAP on AWS – Specialty Exam (PAS-C01)


Whizlabs

Recommended Posts

Preparing for the AWS Certified SAP on AWS – Specialty (PAS-C01) exam? Here we provide a list of free AWS Certified SAP on AWS Specialty exam questions and answers to help you prepare well for the exam.

These sample PAS-C01 practice questions follow the real exam format. In this blog we list our newly updated 25+ FREE questions on the AWS Certified SAP on AWS exam.

What do AWS SAP Professionals do?

AWS SAP professionals are engaged in designing, deploying, operating, and migrating SAP workloads on the AWS platform. They help improve workloads, scale processes, and reduce time by building flexible infrastructure.

On the basis of best practices defined by the AWS Well-Architected Framework, AWS SAP professionals design SAP solutions and thus help achieve an optimized SAP on AWS setup.

What to expect in AWS Certified SAP on AWS Specialty Exam (PAS-C01) exam questions?

The exam's main focus is the design, implementation, and management of SAP workloads while moving to the AWS platform.


The AWS Certified SAP on AWS exam questions help assess your ability to:

  • Design SAP solutions that run on the AWS cloud in accordance with the AWS Well-Architected Framework
  • Design SAP solutions that run on the AWS cloud while maintaining SAP certification and support standards
  • Deploy new SAP workloads on AWS
  • Migrate existing SAP workloads to AWS
  • Operate SAP workloads on AWS infrastructure

How difficult is the AWS Certified SAP on AWS Specialty (PAS-C01) Exam?

If you plan to take the AWS Certified SAP on AWS – Specialty Exam (PAS-C01), you will naturally wonder how difficult it is. The AWS Certified SAP on AWS – Specialty Exam (PAS-C01) is considered quite difficult, and passing it requires a lot of preparation.

Here are some of the tips to get well-prepared for the exam:

  1. Understand the exam objectives and domains. Before you start studying, make sure you know what to focus on and what to expect.
  2. Use study resources. Various detailed study resources and sample questions are available; make use of them.
  3. Take plenty of AWS Certified SAP on AWS Specialty Exam (PAS-C01) practice tests and do many hands-on exercises. The more you practice, the more familiar you will become with the exam format.
  4. Take care of your health and rest. Ensure you get sufficient rest before appearing for the exam; it helps you think clearly and deliver your best results.
  5. Stay positive and believe in yourself. Take time to encourage yourself when you feel down. With proper preparation, you can definitely pass the AWS Certified SAP exam.

FREE Questions on AWS Certified SAP on AWS Specialty Exam (PAS-C01)

These free questions on the AWS Certified SAP on AWS Specialty exam can help you assess whether you are ready to take the real exam. Spend some time on these AWS Certified SAP exam free questions and try them out before appearing for the exam.

Domain : Design of SAP workloads on AWS

Question 1 : An EMEA region customer is planning to run their SAP workloads on AWS. The customer’s SAProuter is currently running on premises in their network’s demilitarized zone (DMZ). They are looking for a similar solution to set up SAProuter in the AWS cloud.
Which of the following combinations of steps can help meet the customer’s requirement? (Select TWO)

A. Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC and assign it an Elastic IP address
B. Launch the instance that the SAProuter software will be installed on into a private subnet of the VPC and assign it an Elastic IP address
C. Create and configure a security group for the SAProuter instance that allows inbound and outbound access from SAP-provided IP addresses on TCP port 3299
D. Create and configure a security group for the SAProuter instance that allows inbound and outbound access from SAP-provided IP addresses on TCP port 3600
E. Create and configure a security group for the SAProuter instance that allows inbound and outbound access from the internet on TCP port 3600

Correct Answers: A and C

Explanation: 

SAProuter is support software that provides a remote connection between the customer’s network and SAP. This means that SAProuter always needs to be able to access SAP’s support network and at the same time provide a secure connection to the SAP systems.

Therefore, SAProuter needs to be installed in a public subnet. Also, only the inbound and outbound access from SAP-provided IP addresses should be allowed along with TCP port 3299. 

Option A is CORRECT because SAProuter software needs to be installed in a public subnet. 

Option B is incorrect because SAProuter software should not be installed in a private subnet. 

Option C is CORRECT because TCP port 3299 along with only SAP-provided IP is the correct choice. 

Option D is incorrect because TCP port 3600 is an incorrect choice. TCP port 3600 is used to connect to the SAP message server. 

Option E is incorrect because TCP port 3600 is an incorrect choice and access to the internet should not be allowed (only to SAP-provided IP address).
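As a rough sketch, the security group rule described in Options A and C could be expressed as boto3 parameters. The security group ID and CIDR below are placeholders, not real SAP support addresses; substitute the ranges SAP provides.

```python
# Hypothetical sketch of the rule from Options A and C: allow TCP 3299
# (SAProuter's default port) only from an SAP-provided support address.
# The group ID and CIDR are placeholders.
saprouter_ingress = {
    "GroupId": "sg-0123456789abcdef0",  # placeholder security group ID
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3299,  # SAProuter listens on TCP 3299
            "ToPort": 3299,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.10/32",  # placeholder for SAP's support IP
                    "Description": "SAP support network",
                }
            ],
        }
    ],
}

# With credentials configured, these parameters could be passed to
# boto3.client("ec2").authorize_security_group_ingress(**saprouter_ingress);
# a matching egress rule would mirror the same port and CIDR.
```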

References: https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-support.html, https://docs.aws.amazon.com/sap/latest/general/overview-router-solman.html

 

Domain : Design of SAP workloads on AWS

Question 2 : A US-based retail company is running its SAP workloads on AWS. They are using multiple VPCs that are communicating with each other using the VPC peering method. After a recent merger and acquisition, the company expects its accounts and VPCs to grow as more SAP systems will be on-boarded on the AWS cloud. The company is looking for an AWS-managed solution that works on a hub and spoke model to ensure communication between all the VPCs across the company’s accounts.
Which of the following solutions can help meet the company’s requirements? 

A. Use an AWS Virtual Private Gateway to connect the company’s VPCs. The Virtual Private Gateway can be shared through AWS Resource Access Manager (RAM) across the company’s AWS accounts
B. Use an AWS Transit Gateway to connect the company’s VPCs. The Transit Gateway can be shared through AWS Resource Access Manager (RAM) across the company’s AWS accounts
C. Use an AWS Virtual Private Gateway to connect the company’s VPCs. The Virtual Private Gateway can be shared through AWS Control Tower across the company’s AWS accounts
D. Use an AWS Transit Gateway to connect the company’s VPCs. The Transit Gateway can be shared through AWS Control Tower across the company’s AWS accounts

Correct Answer: B

Explanation: 

The correct choice here is to use a Transit Gateway. AWS Transit Gateway connects the Amazon Virtual Private Clouds (VPCs) or on-premises networks through a central hub. AWS Resource Access Manager (RAM) can be used to share the resources across accounts. In this case, an AWS Resource Access Manager (RAM) can be used to share the transit gateway with other AWS accounts, thus facilitating VPC communication between accounts as well. 
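The hub-and-spoke setup from the correct answer could be sketched as two boto3 calls, one to create the transit gateway and one to share it via RAM. The ARN, share name, and account IDs below are placeholders.

```python
# Hypothetical sketch: create a transit gateway, then share it across
# accounts with AWS Resource Access Manager (RAM). All ARNs, names, and
# account IDs are placeholders.
create_tgw_params = {
    "Description": "Hub for SAP VPCs",
    "Options": {"AutoAcceptSharedAttachments": "enable"},
}
# tgw = boto3.client("ec2").create_transit_gateway(**create_tgw_params)

share_params = {
    "name": "sap-transit-gateway-share",
    "resourceArns": [
        "arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0abc1234def567890"
    ],
    "principals": ["222222222222"],  # the other AWS accounts (or an org ARN)
}
# boto3.client("ram").create_resource_share(**share_params)
```

Each spoke account then attaches its VPCs to the shared transit gateway, so traffic between VPCs flows through the central hub.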

AWS Virtual Private Gateway (VPG) is the VPN endpoint on the Amazon side of a Site-to-Site VPN connection. It cannot be used for connecting multiple VPCs. 

AWS Control Tower is used to automate the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use.

Option A is incorrect because Virtual Private Gateway cannot connect multiple VPCs.

Option B is CORRECT because AWS Transit Gateway can connect multiple VPCs. AWS RAM can share the transit gateway across accounts.

Option C is incorrect because Virtual Private Gateway cannot connect multiple VPCs.

Option D is incorrect because AWS Control Tower cannot share an account’s resources with another.

References: https://aws.amazon.com/transit-gateway/faqs/, https://aws.amazon.com/ram/faqs/, https://docs.aws.amazon.com/sap/latest/sap-hana/sap-oip-configuration-steps-for-aws-transit-gateway.html

 

Domain : Design of SAP workloads on AWS

Question 3 : A company is running its SAP S/4 HANA production system on the AWS cloud. The SAP database, ABAP SAP Central Services (ASCS), and Primary Application Server (PAS) EC2 instances are all in the same private subnet. After an OS hardening activity on a weekend, when the SAP engineers try to start the SAP application, they get the error message – Database is not available via R3trans – Database must be started first.
When they log in to the database EC2 instance, they notice that the database is already running. Considering that the HANA database instance number is 00, what should the SAP engineers do to troubleshoot this issue?

A. Restart the database and the database EC2 instance again. Try starting the SAP application once the database restart has finished
B. Ensure that the security group of the database EC2 instances allows communication from the application server. Check if the port range 30015 – 39915 is allowed from the private IP address of the application server
C. Ensure that the security group of the database EC2 instances allows communication from the application server. Check if the port range 3600 – 3699 is allowed from the private IP address of the application server
D. Ensure that the Network Access Control lists (NACLs) of the private subnet allow communication from the application server

Correct Answer: B

Explanation: 

SAP application server uses the HANA database client to connect to the HANA database server. It is important to understand the ports required by the HANA database server to allow connection from the HANA client. The port range 30015-39915 is used by the SAP application server to connect to the HANA database. As the question mentions OS hardening activity was carried out, we should ensure that security groups allow proper communication. 

Port range 3600-3699 is used by SAPGUI to connect to SAP applications. Checking the Network Access Control List (NACL) is not required since all the EC2 instances are on the same subnet. Restarting the database and the database EC2 instance will not solve the issue.
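The check from Option B could be sketched as the following security group rule, expressed as boto3 parameters. The group ID and the application server's private IP are placeholders.

```python
# Hypothetical sketch of Option B: the database security group must allow
# the HANA SQL port range from the application server's private IP.
# IDs and IPs are placeholders.
hana_db_ingress = {
    "GroupId": "sg-0fedcba9876543210",  # placeholder: database security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 30015,  # 3<instance-number>15; 30015 for instance 00
            "ToPort": 39915,
            "IpRanges": [
                {
                    "CidrIp": "10.0.1.25/32",  # placeholder: app server private IP
                    "Description": "SAP application server to HANA SQL ports",
                }
            ],
        }
    ],
}
# boto3.client("ec2").authorize_security_group_ingress(**hana_db_ingress)
```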

Option A is incorrect because restarting the database and the database EC2 instance will not solve the issue.

Option B is CORRECT because port range 30015-39915 needs to be maintained in the security group to allow communication from the SAP application server. 

Option C is incorrect because port range 3600-3699 is an incorrect choice.

Option D is incorrect because checking Network Access Control lists (NACLs) is not required since all the EC2 instances are in the same subnet.

References: https://docs.aws.amazon.com/quickstart/latest/sap-hana/app-c.html, Security groups in AWS Launch Wizard for SAP

 

Domain : Design of SAP workloads on AWS

Question 4 : A Singapore-based financial customer that has been running their SAP workloads on-premise in their datacenter for the last 30 years is considering migrating to the AWS cloud. Their SAP team is worried about the security of SAP applications on the AWS cloud.
Which of the following is true about the security in the AWS cloud? (Select TWO)

A. AWS is responsible for “Security of the cloud”. It involves protecting the infrastructure that runs the services offered in the AWS Cloud
B. Customer is responsible for “Security in the cloud”. It involves protecting the infrastructure that runs the services offered in the AWS Cloud
C. AWS is responsible for “Security of the cloud”. It involves protecting the data owned by customers along with Identity and access management in the AWS cloud
D. Customer is responsible for “Security in the cloud”. It involves protecting the data owned by customers along with Identity and access management in the AWS cloud

Correct Answers: A and D

Explanation: 

Here the understanding of the AWS Shared Responsibility Model is important. AWS is responsible for “Security of the cloud” and the customer is responsible for “Security in the cloud”. “Security of the cloud” means protecting the infrastructure that runs the services offered in the AWS cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. “Security in the cloud” covers the customer’s data, platform, identity and access management, and so on; it depends on the services the customer selects to run in the AWS cloud.

Option A is CORRECT because AWS is responsible for “Security of the cloud”.

Option B is incorrect because the definition of “Security in the cloud” here is wrong.

Option C is incorrect because the definition of “Security of the cloud” here is wrong.

Option D is CORRECT because the Customer is responsible for “Security in the cloud”.

References: Shared Responsibility Model – Amazon Web Services (AWS), Best Practice 5.1 – Define security roles and responsibilities – SAP Lens

 

Domain : Design of SAP workloads on AWS

Question 5 : A customer is running their SAP workloads on their on-premise environment. The customer wants to connect their on-premise datacenter to the AWS cloud. They are looking for a networking solution that provides encryption during data transfer and are not worried about the cost of setup. 
Which of the following AWS services meets the customer’s requirement?

A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. AWS Client VPN
D. AWS Customer Gateway

Correct Answer: A

Explanation: 

AWS Site-to-Site VPN and AWS Direct Connect are two options that provide connectivity between a customer datacenter and the AWS cloud. Of these, AWS Site-to-Site VPN is both cost-effective and provides encryption by default. AWS Direct Connect is fast, but it is costly and does not provide encryption for data in transit.

AWS Client VPN is a managed client-based VPN service that is used by users to connect to either the AWS network or the customer’s network. AWS customer gateway is a physical or software appliance that a customer manages in their on-premises network for Site-to-Site VPN.
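As a sketch, the Site-to-Site VPN setup involves describing the on-premises device as a customer gateway and then creating the VPN connection. The public IP and resource IDs below are placeholders.

```python
# Hypothetical sketch of the Site-to-Site VPN pieces: a customer gateway
# describing the on-premises device, and a VPN connection tied to a
# virtual private gateway. IDs and the public IP are placeholders.
customer_gateway_params = {
    "Type": "ipsec.1",
    "PublicIp": "198.51.100.1",  # placeholder: on-premises device public IP
    "BgpAsn": 65000,             # placeholder: on-premises BGP ASN
}
# cgw = boto3.client("ec2").create_customer_gateway(**customer_gateway_params)

vpn_connection_params = {
    "Type": "ipsec.1",
    "CustomerGatewayId": "cgw-0123456789abcdef0",  # placeholder
    "VpnGatewayId": "vgw-0123456789abcdef0",       # placeholder
}
# boto3.client("ec2").create_vpn_connection(**vpn_connection_params)
```

The connection's IPsec tunnels encrypt traffic between the datacenter and the VPC by default.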
Option A is CORRECT because Site-to-Site VPN provides encryption by default.

Option B is incorrect because AWS Direct Connect does not provide encryption by default.

Option C is incorrect because AWS Client VPN does not connect to the customer’s on-premise datacenter.

Option D is incorrect because AWS customer gateway is a component used in Site-to-Site VPN. 

References: https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html, https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/security.html

 

Domain : Design of SAP workloads on AWS

Question 6 : A US-based banking and insurance company is running their SAP workloads on-premise. They have been maintaining their SAP backup data for the last 15 years in their on-premise datacenter for regulatory compliance requirements. The company sometimes needs to access this backup data once or twice a year during the March or December month-end. The company is looking for a low-cost, durable solution to store these backups on the AWS cloud.
Which of the following solutions can help meet the company’s requirements with minimum cost? 

A. Use Amazon S3 Standard-Infrequent Access to store the SAP backups
B. Use Amazon S3 Glacier Instant Retrieval to store the SAP backups
C. Use Amazon S3 Glacier Deep Archive to store the SAP backups
D. Use Amazon S3 Standard to store the SAP backups

Correct Answer: C

Explanation: 

Here the understanding of various Amazon S3 Storage Classes is important. 

Out of all the available options, Amazon S3 Glacier Deep Archive is the lowest-cost option. 

Also, it meets the requirement of data retrieval only once or twice a year. S3 Glacier Deep Archive is more suitable for customers who know when they will need the data and can place a retrieval request beforehand.

Amazon S3 Standard-Infrequent Access is for data that is accessed less frequently but requires rapid access when needed. Amazon S3 Glacier Instant Retrieval is an archive storage class for long-lived data that is rarely accessed and requires retrieval in milliseconds. Amazon S3 Standard is the costliest option of the Amazon S3 storage family.
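As a sketch, backups can be uploaded directly to the Deep Archive class, or a lifecycle rule can transition them. The bucket name and key below are placeholders.

```python
# Hypothetical sketch: upload a backup directly to S3 Glacier Deep Archive,
# or let a lifecycle rule transition objects under a prefix. Bucket and
# object names are placeholders.
put_params = {
    "Bucket": "sap-regulatory-backups",   # placeholder bucket name
    "Key": "backups/2009/backup.tar.gz",  # placeholder object key
    "StorageClass": "DEEP_ARCHIVE",       # lowest-cost archival class
}
# boto3.client("s3").put_object(Body=b"...", **put_params)

lifecycle_params = {
    "Bucket": "sap-regulatory-backups",
    "LifecycleConfiguration": {
        "Rules": [
            {
                "ID": "archive-sap-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
}
# boto3.client("s3").put_bucket_lifecycle_configuration(**lifecycle_params)
```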

Option A is incorrect because S3 Standard-Infrequent Access is an incorrect choice.

Option B is incorrect because S3 Glacier Instant Retrieval is an incorrect choice.

Option C is CORRECT because S3 Glacier Deep Archive is a correct choice.

Option D is incorrect because S3 Standard is an incorrect choice.

References: https://aws.amazon.com/s3/storage-classes/, https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/best-practice-19-6.html 

 

Domain : Design of SAP workloads on AWS

Question 7 : A company is planning to deploy their SAP BW/4HANA workloads on AWS. They are planning for a scale-out implementation of the HANA database with 4 nodes. The company is also planning to set up a HANA Auto host failover where three nodes will be acting as worker nodes and the fourth node will be a standby node.
The company is looking for a low network latency solution that is required for internode communication in a scale-out deployment. 
Which of the following solutions will meet the company’s requirements?

A. Deploy the HANA nodes in a cluster placement group across multiple availability zones
B. Deploy the HANA nodes in a spread placement group across a single availability zone
C. Deploy the HANA nodes in a cluster placement group across a single availability zone
D. Deploy the HANA nodes in a spread placement group across multiple availability zones

Correct Answer: C

Explanation: 

To meet the SAP certification for internode communication in an SAP HANA scale-out deployment, it is necessary to use a cluster placement group. A cluster placement group can only be deployed in a single availability zone.

A cluster placement group is recommended for applications that benefit from low network latency, high network throughput, or both, whereas a spread placement group places each instance on distinct hardware.
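As a sketch, the setup from Option C creates a cluster placement group and launches each HANA node into it. The group name, AMI, and instance type below are placeholders.

```python
# Hypothetical sketch of Option C: a cluster placement group for the HANA
# scale-out nodes, then launching each instance into it. Names, AMI, and
# instance type are placeholders.
placement_group_params = {
    "GroupName": "hana-scaleout-pg",
    "Strategy": "cluster",  # low-latency, high-throughput placement in one AZ
}
# boto3.client("ec2").create_placement_group(**placement_group_params)

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "r5.24xlarge",       # placeholder HANA-certified type
    "MinCount": 1,
    "MaxCount": 1,
    "Placement": {"GroupName": "hana-scaleout-pg"},
}
# Repeat for all four nodes (three workers + one standby):
# boto3.client("ec2").run_instances(**run_instances_params)
```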
Option A is incorrect because a cluster placement group cannot span availability zones.

Option B is incorrect because a spread placement group is not the correct solution. 

Option C is CORRECT because a cluster placement group is required to meet the HANA internode low network latency requirements.

Option D is incorrect because a spread placement group is not the correct solution.

Reference: Best Practice 13.4 – Choose Regions and Availability Zones to minimize latency – SAP Lens

 

Domain : Design of SAP workloads on AWS

Question 8 : A US-based financial company is running their SAP S/4 HANA system in the us-east-1 region on AWS. They are running the SAP system in high availability mode across two different availability zones. They plan to have a multi-region disaster recovery (DR) solution for their S/4 HANA system. The company is also looking for a database-native solution for data replication.
Which of the following is the most optimal solution for the company’s DR requirements?

A. Set up the disaster recovery system in us-west-1. Use HANA System Replication in an asynchronous (async) mode for data replication
B. Set up the disaster recovery system in us-west-1. Use HANA System Replication in synchronous (sync) mode for data replication
C. Set up the disaster recovery system in us-west-1. Use S3 Cross Region Replication (CRR) for backup replication across regions
D. Set up the disaster recovery system in us-west-1. Use HANA System Replication in synchronous in-memory (syncmem) mode for data replication

Correct Answer: A 

Explanation: 

Here the understanding of Multi-Region architecture patterns for SAP HANA is important. The architect also needs to know different HANA system replication modes. For disaster recovery (DR) solutions, the asynchronous mode of SAP HANA System Replication is recommended because of the increased latency between the AWS regions. 

A synchronous (sync) or synchronous in-memory (syncmem) mode waits for the operation to be completed on the secondary side and then commits the transaction in the database on the primary side. 

S3 Cross Region Replication (CRR) is not a database native solution. 

Option A is CORRECT because the asynchronous (async) mode of HANA system replication is the correct choice as the latency between the AWS regions is comparatively higher than availability zones. 

Option B is incorrect because the synchronous (sync) mode of HANA system replication is not the correct choice due to latency issues.

Option C is incorrect because S3 Cross Region Replication (CRR) is not a database native solution.

Option D is incorrect because the synchronous in-memory (syncmem) mode of HANA system replication is not the correct choice due to latency issues.

References: https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns-multi.html, SAP on AWS: Build for availability and reliability

 

Domain : Implementation of SAP workloads on AWS

Question 9 : A US-based banking customer is running their SAP workloads on AWS. They have set up a high availability and disaster recovery solution for their production SAP system in AWS. The highly available systems are running in the us-east-1 region in two availability zones AZ-1 & AZ-2. The disaster recovery systems are placed in the us-west-1 region.
The customer is looking for a solution against logical data loss that could happen due to malicious activity or human error. 
Which of the following solutions is recommended by AWS to meet the customer’s requirement?

A. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket should be replicated to another Amazon S3 bucket owned by a separate AWS account in either us-east-1 or us-west-1 using Same-Region Replication (SRR) or Cross-Region Replication (CRR) respectively
B. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket is replicated using Cross-Region replication (CRR) to another Amazon S3 bucket in us-west-1
C. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket is replicated using Same-Region replication (SRR) to another Amazon S3 bucket in us-east-1
D. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket should have Lifecycle rules enabled so that data can be periodically moved to AWS Glacier

Correct Answer: A

Explanation:  

Here it is important to understand what ‘logical data loss’ means. Data becoming corrupted or lost due to human error should also be considered in a good architecture. To protect against logical data loss, AWS recommends replicating the S3 bucket to another account; it does not matter whether it is in the same region or another. This ensures that data lost due to malicious activity within the AWS account or due to human error can be recovered.

Using S3 Same-Region Replication (SRR) or Cross-Region Replication (CRR) alone is not a valid solution if the S3 buckets remain under the same AWS account.

AWS Glacier is used for archiving old backups or old data. It cannot protect against logical data loss. 
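The cross-account replication from Option A could be sketched as the following boto3 parameters. The role ARN, bucket names, and account ID are placeholders; the destination account also needs a bucket policy permitting the replication role.

```python
# Hypothetical sketch of Option A: replicate the backup bucket to a bucket
# owned by a separate AWS account, handing object ownership to the
# destination. Role ARN, bucket names, and account ID are placeholders.
replication_params = {
    "Bucket": "sap-backups-primary",  # placeholder source bucket
    "ReplicationConfiguration": {
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-to-second-account",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::sap-backups-replica",
                    "Account": "222222222222",  # the separate AWS account
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
}
# boto3.client("s3").put_bucket_replication(**replication_params)
```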

Option A is CORRECT because replicating to S3 buckets in another account is the correct choice.  

Option B is incorrect because unless the data is replicated to an S3 bucket in another account, it is not protected against logical data loss.

Option C is incorrect because unless the data is replicated to an S3 bucket in another account, it is not protected against logical data loss.

Option D is incorrect because AWS Glacier is not the correct choice here. 

Reference: Failure scenarios – General SAP Guides 

 

Domain : Implementation of SAP workloads on AWS

Question 10 : A US-based OTT platform company is running their SAP workloads on-premise.
To reduce the administration overhead of infrastructure, the company has decided to move the non-production SAP workloads to AWS. This includes the sandbox, development, and quality environments. The company's on-premises network is connected to AWS via a Site-to-Site VPN connection.
The company has decided to keep running SAProuter and SAP Solution Manager systems in their on-premise environment.
Which of the following statements is true regarding this architecture? (Select TWO)

A. Setting up support connectivity for the SAP systems on AWS requires a change in the saprouttab files
B. Setting up support connectivity for the SAP systems on AWS does not require a change in the saprouttab files
C. A new connection to SAP Support Backbone is required from the Solution Manager system
D. A new connection to SAP Support Backbone is not required from the Solution Manager system

Correct Answers: A and D

Explanation:

The only change required is to add the IPs of the new SAP systems to the saprouttab file, so a change in the saprouttab files is required. Secondly, unless we are adding a managed system to the Solution Manager (which is not mentioned in this question), no change in the Solution Manager is required.

A new connection to the SAP Support Backbone is also not required, as the existing connection of the SAP Solution Manager will work just fine.
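A saprouttab entry is a simple access-control line of the form `P <source> <destination> <port>`. As a sketch, permitting SAP support to reach a new AWS-hosted system might look like the line below; both IP addresses are placeholders, not real SAP or customer addresses.

```
# Permit the SAP support network (placeholder source IP) to reach the
# new AWS-hosted system (placeholder destination IP) on its dispatcher port
P  203.0.113.10  10.0.1.10  3200
```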

Option A is CORRECT because a change in saprouttab files is required to add the changed IPs.

Option B is incorrect because a change in the saprouttab files is required.

Option C is incorrect because a new SAP Support Backbone connection is not required, as the existing connection of the SAP Solution Manager will work just fine.

Option D is CORRECT because it is true that a new SAP Support Backbone connection is not required.

Reference: SAProuter and SAP Solution Manager – General SAP Guides 

 

Domain : Implementation of SAP workloads on AWS

Question 11 : A customer who is running their SAP workloads on-premise wants to deploy the SAP HANA database on the AWS cloud. The customer wants to first try a Proof of Concept (POC) architecture and use HANA Quick Start Reference Deployment in a Single-AZ, single-node deployment scenario. The customer wants to monitor the Quick Start deployment progress.
Where can the customer check relevant deployment logs? 

A. The customer can find the deployment logs in the /root/install/ folder of the SAP HANA instance. The name of the log file is install.log 
B. The customer can find the deployment logs in the /root/install/ folder of the SAP HANA instance. The name of the log file is deploy.log
C. The customer can find the deployment logs in the /root/deployment/ folder of the SAP HANA instance. The name of the log file is install.log
D. The customer can find the deployment logs in the /root/deployment/ folder of the SAP HANA instance. The name of the log file is deploy.log

Correct Answer: A

Explanation: 

The deployment logs for Quick Start reference deployment for HANA are located in /root/install/ folder. The name of the log file is install.log.

There is no folder called /root/deployment/ and no file called deploy.log.

Option A is CORRECT because the deployment log file, install.log is in the /root/install/ folder.

Option B is incorrect because there is no such file called deploy.log.

Option C is incorrect because there is no such folder called /root/deployment/.

Option D is incorrect because there is no such file called deploy.log or folder called /root/deployment/.

References: Troubleshooting – SAP HANA on AWS, Architecture – SAP HANA on AWS

 

Domain : Migration of SAP workloads to AWS

Question 12 : A European manufacturing company wants to migrate one of their production SAP HANA databases from on-premise environment to AWS cloud. The company’s on-premise datacenter is connected via an AWS Direct Connect connection to AWS Cloud. Due to the tight project deadlines, the company’s SAP solution architect would like to set up the target HANA database in a single EC2 instance in the shortest amount of time possible.
Which of the following solutions can meet the requirement? (Select TWO)

A. Use AWS CloudFormation script to provision the HANA database on the AWS cloud
B. Use AWS Quick Start to provision the HANA database on the AWS cloud
C. Use SAP Cloud Appliance Library (CAL) to provision the HANA database on AWS cloud
D. Use AWS Launch Wizard to provision the HANA database on the AWS cloud
E. Use Amazon S3 Transfer Acceleration to provision the HANA database on the AWS cloud

Correct Answers: B and D

Explanation: 

AWS Quick Start and AWS Launch Wizard are two ways in which we can automate the deployment of the SAP HANA database on the AWS cloud. Thus these two options will take the shortest amount of time possible. 

SAP Cloud Appliance Library (CAL) is used for test and demo systems. The instance types available with CAL are not sufficient to run a production workload. An AWS CloudFormation script will take time to build, and it will only provision the infrastructure, not install the HANA database. Amazon S3 Transfer Acceleration is not for provisioning the required infrastructure.

Option A is incorrect because AWS CloudFormation scripts take time to develop and therefore are not the fastest option.

Option B is CORRECT because AWS Quick Start is the correct choice as SAP HANA is a supported deployment option.

Option C is incorrect because SAP Cloud Appliance Library (CAL) cannot provision production size instances.

Option D is CORRECT because AWS Launch Wizard is the correct choice as SAP HANA is a supported deployment option.

Option E is incorrect because Amazon S3 Transfer Acceleration cannot be used for provisioning EC2 instances. 

References: Supported deployments and features of AWS Launch Wizard, Migration Tools and Methodologies – SAP HANA on AWS

 

Domain : Migration of SAP workloads to AWS

Question 13 : A customer is running their SAP workloads in a Hybrid cloud model. The non-production systems are hosted on the AWS cloud and production systems are running on customers’ on-premise datacenter. There is a Direct Connect connectivity between the on-premises datacenter and AWS cloud.
The customer is planning to create a new HANA sandbox system (SBX) on the AWS cloud from the data of the production system (PRD) based on HANA 2.0 SPS 4, using the backup-restore method. 
Which of the following combinations of steps should the customer perform to achieve the requirement? (Select TWO)

A. Launch the EC2 instance hosting the SBX system in the public subnet of VPC. Install a HANA database with a version same or higher than the version of the PRD database on this EC2 instance
B. Launch the EC2 instance hosting the SBX system in the private subnet of VPC. Install a HANA database with a version same or higher than the version of the PRD database on this EC2 instance
C. Launch the EC2 instance hosting the SBX system in the private subnet of VPC. Install a HANA database of any version on this EC2 instance
D. Create a backup of the tenant database of the PRD system and transfer this backup to an S3 bucket that is accessible by the SBX EC2 instance. Restore the backup to the newly created SBX database
E. Create a backup of the SYSTEMDB and tenant database of the PRD system and transfer this backup to an S3 bucket that is accessible by the SBX EC2 instance. Restore the backup to the newly created SBX database

Correct Answers: B and D

Explanation: 

Here the understanding of SAP homogeneous system copy along with SAP Note 1844468 is important. The version of the target database should always be the same as or higher than that of the source database, and a database EC2 instance should always be launched in a private subnet. The database EC2 instance should have access to the S3 bucket to restore the database.

A backup of only the tenant database is required for restoring the HANA database on the target system; a SYSTEMDB backup is not required.

Option A is incorrect because a database EC2 instance is not recommended to be launched in a public subnet. 

Option B is CORRECT because a database EC2 instance is recommended to be launched in a private subnet. The version of the target database should be the same or higher than the version of the source database. 

Option C is incorrect because the version of the target database cannot be lower than the source version.

Option D is CORRECT because only tenant database backup is required and the EC2 instance should have access to the S3 bucket where backups will be stored. 

Option E is incorrect because only tenant database backup is required

Reference: https://docs.aws.amazon.com/sap/latest/sap-hana/migrating-hana-hana-to-aws.html 
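The tenant-backup-and-restore flow described above can be sketched with hdbsql and the AWS CLI. This is a minimal sketch, not the official procedure: the instance number, backup paths, and the bucket name are hypothetical, and the SBX tenant must be stopped before recovery.

```shell
# On the PRD host: back up only the tenant database (SYSTEMDB is not needed)
hdbsql -u SYSTEM -d SYSTEMDB -i 00 \
  "BACKUP DATA FOR PRD USING FILE ('/backup/PRD_FULL')"

# Transfer the backup files to an S3 bucket reachable from the SBX EC2 instance
aws s3 cp /backup/ s3://my-hana-backups/PRD/ --recursive

# On the SBX host (private subnet, HANA version same or higher than PRD):
# fetch the backup and restore it into the newly created SBX tenant
aws s3 cp s3://my-hana-backups/PRD/ /backup/ --recursive
hdbsql -u SYSTEM -d SYSTEMDB -i 00 \
  "RECOVER DATA FOR SBX USING FILE ('/backup/PRD_FULL') CLEAR LOG"
```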

 

Domain : Migration of SAP workloads to AWS

Question 14 : A company is running their SAP workloads on premises. They are planning to migrate their SAP BW NetWeaver 7.5 landscape, which runs on an Oracle database on AIX, to the AWS cloud. The customer does not plan to change the underlying database for the SAP BW environment.
What are the steps and best practices the company needs to perform in order to migrate the SAP BW landscape to AWS? (Select THREE)

A. Migrate the SAP BW non-production systems first to ensure that the migration is successful and there are no issues with running BW workloads on AWS
B. Migrate all SAP BW systems together to ensure that less downtime is required and the migration project timeline can be shortened 
C. Perform a DB migration using Software Provisioning Manager (SWPM) to change the database to HANA as Oracle is not supported on AWS for SAP
D. Perform an OS migration using Software Provisioning Manager (SWPM) to change the operating system to Oracle Linux as AIX is not supported on AWS for SAP
E. Perform an OS migration using Software Provisioning Manager (SWPM) to change the operating system to SUSE Linux Enterprise Server (SLES) as AIX is not supported on AWS for SAP
F. Ensure to generate a migration key from the SAP Support Portal for the migration using Software Provisioning Manager (SWPM)
G. Ensure to generate a Hardware key from the SAP Support Portal for the migration using Software Provisioning Manager (SWPM)

Correct Answers: A, D and F

Explanation

It is important to understand the AWS Well-Architected Framework for SAP as well as the supported OS/DB combinations for SAP on AWS. AWS recommends always migrating a non-production environment first to ensure that there are no issues in running workloads on the AWS cloud. This also helps to streamline project timelines and migration tasks, and to identify any additional issues to expect during the production migration.
For SAP on Oracle workloads on AWS, only Oracle Linux is the supported operating system. 

The customer also needs a migration key when performing a heterogeneous migration to AWS. 

A Hardware key is not generated from the SAP Support Portal but an SAP license is generated using the hardware key. The hardware key can be found on the host where the message server is running.

Option A is CORRECT because AWS Well-Architected Framework recommends moving non-production workloads first.

Option B is incorrect because there is no requirement to reduce the project timeline; best practice is to migrate non-production systems first.

Option C is incorrect because the statement is incorrect. Oracle is a supported database on AWS.

Option D is CORRECT because Oracle Linux is the only supported Operating system for running Oracle on AWS for SAP.

Option E is incorrect because SUSE Linux Enterprise Server (SLES) is not a supported operating system for running Oracle on AWS for SAP.

Option F is CORRECT because a migration key is required for performing OS/DB migrations.

Option G is incorrect because a hardware key is not generated from the SAP Support Portal.

References: https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/best-practice-2-4.html, https://launchpad.support.sap.com/#/notes/1656099

 

Domain : Migration of SAP workloads to AWS

Question 15 : A company that is migrating their SAP workloads to AWS is looking for an option that can be used to resolve IP addresses and hostnames in the VPC. In their on-premises environment, they use a highly available Domain Name System (DNS) server. The company is looking for a similar reliable, AWS-managed option in the AWS cloud. 
Which of the following is the most optimal option that can help the customer meet their requirement?

A. Use Amazon Route53 as a DNS service. It provides inherent high availability as part of its design
B. Set up a DNS server in the EC2 instance. Ensure high availability of this EC2 instance
C. Maintain /etc/hosts files in each EC2 instance and ensure high availability for these instances
D. Use Amazon CloudFront as a DNS service. It provides inherent high availability as part of its design

Correct Answer: A

Explanation:  

Amazon Route 53 is the correct choice, as it is an AWS-managed DNS service. The reliability pillar of the AWS Well-Architected Framework also suggests using AWS services that have inherent availability where applicable. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service that can connect user requests to internet applications running on AWS or on premises. 

Setting up a DNS server in an EC2 instance will put the administration overhead on the customer. Maintaining /etc/hosts is not an optimal approach and the administration still lies with the customer.
Amazon CloudFront is not a DNS service but a content delivery network (CDN) service. 

Option A is CORRECT because Amazon Route 53 is an AWS-managed, highly available DNS service.

Option B is incorrect because a DNS server on an EC2 instance is not an AWS-managed option.

Option C is incorrect because maintaining /etc/hosts files on each EC2 instance is not an AWS-managed option and does not scale.

Option D is incorrect because Amazon CloudFront is not a DNS service.

References: https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/best-practice-11-2.html, https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html
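For illustration, a Route 53 private hosted zone associated with the VPC can resolve SAP hostnames internally. A hedged AWS CLI sketch follows; the zone name, VPC ID, hosted zone ID, and record values are all hypothetical placeholders.

```shell
# Create a private hosted zone attached to the VPC
aws route53 create-hosted-zone \
  --name sap.example.internal \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234 \
  --caller-reference sap-zone-001

# Add an A record for an SAP application server host
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "sapapp1.sap.example.internal",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "10.0.1.10"}]
      }
    }]
  }'
```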

 

Domain : Design of SAP workloads on AWS

Question 16 : A US-based customer is planning to deploy 10 TB of SAP HANA database with high availability option in AWS cloud. The primary and secondary SAP HANA databases are running in separate private subnets in different Availability Zones within an AWS Region. The VPC CIDR range of this setup is 10.0.0.0/16.
The customer is using SUSE Linux Enterprise High Availability Extension as the clustering solution. The Overlay IP assigned for the SAP HANA database cluster is 192.168.0.54.
Which of the following solutions can the customer use for routing the Overlay IP address? (Select TWO)

A. The customer can use an AWS Transit Gateway that serves as a central hub to facilitate network connection to the Overlay IP address
B. The customer can use an AWS Virtual Private Gateway that serves as a central hub to facilitate network connection to the Overlay IP address
C. The customer can use an AWS Network Load Balancer that enables network access to the Overlay IP address
D. The customer can use an AWS Application Load Balancer that enables network access to the Overlay IP address
E. The customer can use the clustering capabilities of SUSE Linux Enterprise High Availability Extension to facilitate network connection to the Overlay IP address

Correct Answers: A and C

Explanation: 

An Overlay IP address is a private IP address that is outside of the VPC CIDR range. To route the traffic to both primary and secondary databases either an AWS Transit Gateway can be used or an AWS Network Load Balancer can be used. 

An AWS Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes.
Similarly, when an AWS Network Load Balancer receives a connection request, it selects a target from the Network Load Balancer target group to route network connection requests to a destination address which can be an overlay IP address.
An AWS Virtual Private Gateway is the VPN endpoint on the Amazon side of your Site-to-Site VPN connection; it cannot route traffic to an IP address outside the VPC CIDR range. Similarly, an AWS Application Load Balancer works at the application layer (HTTP/HTTPS) and cannot provide load balancing at the TCP/IP layer. SUSE Linux Enterprise High Availability Extension is just a clustering solution and does not itself enable network access to an Overlay IP address.

Option A is CORRECT because an AWS Transit Gateway acts as a hub and the connected networks act as spokes. The sources and destinations are maintained in the Transit Gateway route tables.

Option B is incorrect because a Virtual Private Gateway (VPG) is used to connect a VPN connection or AWS Direct Connect to the VPC. 

Option C is CORRECT because Network Load Balancer can be used for routing the Overlay IP address.

Option D is incorrect because an AWS Application Load Balancer operates at layer 7 (HTTP/HTTPS), not at the fourth (transport) layer of the Open Systems Interconnection (OSI) model.

Option E is incorrect because a cluster solution such as SUSE Linux Enterprise Server High Availability Extension does not enable network access to an Overlay IP address.

Reference: https://docs.aws.amazon.com/sap/latest/sap-hana/sap-ha-overlay-ip.html
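Overlay IP routing works by adding a /32 route for the overlay address that targets the active node's instance (or its network interface), which the cluster agent repoints on failover. A hedged AWS CLI sketch, assuming hypothetical route table and instance IDs:

```shell
# Route the Overlay IP (outside the VPC CIDR 10.0.0.0/16) to the primary HANA node
aws ec2 create-route \
  --route-table-id rtb-0abc1234 \
  --destination-cidr-block 192.168.0.54/32 \
  --instance-id i-0primary1234

# On failover, the cluster repoints the same route at the secondary node
aws ec2 replace-route \
  --route-table-id rtb-0abc1234 \
  --destination-cidr-block 192.168.0.54/32 \
  --instance-id i-0secondary5678
```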

 

Domain : Design of SAP workloads on AWS

Question 17 : An SAP solution architect in a pharma company is designing a high availability solution for their SAP S/4 HANA production system on the AWS cloud. They are planning to use a clustering solution for automatic switchover and failover scenarios. This highly available system will be deployed across two different Availability Zones, in different subnets, in a single AWS Region.
Which of the following services of SAP S/4 HANA systems are considered Single Points of Failure (SPOFs) where the architect needs to configure high availability?

A. The Application Servers and database are considered Single Points of Failures (SPOFs) in standard SAP architecture where high availability is required
B. The ABAP SAP Central Service (ASCS) and database are considered Single Points of Failures (SPOFs) in standard SAP architecture where high availability is required
C. The ABAP SAP Central Service (ASCS) and Application Servers are considered Single Points of Failures (SPOFs) in standard SAP architecture where high availability is required
D. The database only is considered a Single Point of Failure (SPOF) in standard SAP architecture where high availability is required

Correct Answer: B

Explanation:  

It is both the ABAP SAP Central Service (ASCS) and the database that are considered Single Points of Failure (SPOFs) in standard SAP architecture. The ABAP SAP Central Service (ASCS) consists of the enqueue and message services. These, along with the database, cannot be made redundant by configuring multiple instances of them on different host machines.

Additional SPOFs in an SAP installation are Network File System (NFS) (for UNIX-based application hosts) and file shares (for Microsoft Windows-based application hosts). If a Domain Name Service (DNS) is used, then DNS is also considered a single point of failure.

Application Servers can be made redundant by configuring them as multiple instances on different hosts. Therefore they are not Single Points of Failures (SPOFs).

Option A is incorrect because Application Servers are not Single Points of Failures (SPOFs).

Option B is CORRECT because both ABAP SAP Central Service (ASCS) and the database are Single Points of Failures (SPOFs). 

Option C is incorrect because Application Servers are not Single Points of Failures (SPOFs).

Option D is incorrect because the ABAP SAP Central Service (ASCS), common file shares, and DNS (if used) are also Single Points of Failure (SPOFs).

References: https://aws.amazon.com/blogs/awsforsap/deploying-highly-available-sap-systems-using-sios-protection-suite-on-aws/, System Failure (SAP NetWeaver AS) (SAP Library – SAP High Availability), https://aws.amazon.com/sap/docs/

 

Domain : Design of SAP workloads on AWS

Question 18 : A European OTT platform company is planning to deploy 24TB of SAP HANA database as a highly available system on AWS cloud. The primary and secondary SAP HANA databases are running in separate private subnets in different Availability Zones within an AWS Region. 
The company is looking for a database native solution for high availability. Which of the following options will provide the lowest possible recovery time objective (RTO)? 

A. Use SAP HANA system replication in synchronous mode with the preload option for data replication between primary and secondary. Use a smaller EC2 instance for the secondary database than the primary
B. Use SAP HANA system replication in a synchronous mode without the preload option for data replication between primary and secondary. Use a smaller EC2 instance for the secondary database than the primary
C. Use SAP HANA system replication in synchronous mode with the preload option for data replication between primary and secondary. Use the same sized EC2 instance for the secondary database as the primary
D. Use SAP HANA system replication in a synchronous mode without the preload option for data replication between primary and secondary. Use a same-sized EC2 instance for the secondary database as primary

Correct Answer: C

Explanation: 

Here we have two requirements. First, the solution must be database-native, which means SAP HANA system replication; all four options provide this. Second, we have to choose the option with the lowest RTO. SAP HANA system replication with the preload option enabled provides the lowest RTO, provided that the primary and secondary EC2 instances are sized equally.

SAP HANA system replication without the preload option enabled needs more time during takeover, because tables must first be loaded into the secondary database's memory. This increases the RTO.

Also, a smaller secondary instance must be resized to match the primary instance during failover, which further increases the overall RTO.

Option A is incorrect because a smaller EC2 instance for the secondary database will increase the RTO.

Option B is incorrect because SAP HANA system replication in a synchronous mode without the preload option does not provide the lowest possible RTO. Also, a smaller secondary instance increases the RTO further. 

Option C is CORRECT because SAP HANA system replication in synchronous mode with the preload option and same-sized primary and secondary instances provides the lowest possible RTO.

Option D is incorrect because SAP HANA system replication in a synchronous mode without the preload option does not provide the lowest possible RTO.

Reference: https://d1.awsstatic.com/enterprise-marketing/SAP/sap-hana-on-aws-high-availability-disaster-recovery-guide.pdf
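For reference, enabling HANA system replication in synchronous mode with table preload typically looks like the following sketch. The site names, hostname, and instance number are hypothetical; preload behavior is controlled in global.ini.

```shell
# On the primary node: enable system replication
hdbnsutil -sr_enable --name=SiteA

# On the secondary node (same-sized EC2 instance): register against the primary
hdbnsutil -sr_register \
  --remoteHost=hana-primary \
  --remoteInstance=00 \
  --replicationMode=sync \
  --operationMode=logreplay \
  --name=SiteB

# Keep column tables preloaded on the secondary for the lowest takeover time
# (global.ini -> [system_replication] section):
#   preload_column_tables = true
```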

 

Domain : Design of SAP workloads on AWS

Question 19 : A Singapore-based public sector company has deployed their SAP workloads on AWS in the ap-southeast-1 Region, which is the only available AWS Region in the country. Per company policy, the data must reside within the country. 
They are looking for a solution that ensures both High Availability (HA) and Disaster Recovery (DR). Which of the following options meets the company's requirements? 

A. Set up High Availability for SAP workloads in AZ-1 & AZ-2 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-1 of the ap-south-1 region
B. Set up High Availability for SAP workloads in AZ-1 & AZ-2 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-3 of the ap-southeast-1 region
C. Set up High Availability for SAP workloads in AZ-1 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-2 of the ap-southeast-1 region
D. Set up High Availability for SAP workloads in AZ-1 & AZ-2 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-1 & AZ-2 of the ap-south-1 region

Correct Answer: B

Explanation: 

It is important to understand the single-Region and multi-Region deployment patterns of SAP workloads on AWS. Here the requirement is very clear: the data must not leave the country, and the only AWS Region available there is ap-southeast-1. Therefore, ap-south-1 is not an option for either the HA or the DR setup. 

Most AWS Regions have at least three Availability Zones, so we can deploy HA in AZ-1 and AZ-2 and use AZ-3 for the disaster recovery (DR) setup. 

Option A is incorrect because ap-south-1 is not a correct region for DR setup as the data leaves the country in this case.

Option B is CORRECT because using AZ-3 of the ap-southeast-1 region is a valid choice. 

Option C is incorrect because a high availability (HA) setup in a single AZ is not a valid option as it does not protect against AZ failures.

Option D is incorrect because ap-south-1 is not a correct region for DR setup as the data leaves the country in this case. 

References: https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns.html, https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns-single.html 

 

Domain : Design of SAP workloads on AWS

Question 20 : A customer is running their SAP workloads on AWS. Their SAP landscape includes SAP S/4 HANA, SAP Adobe Document Services (ADS), and an SAP Solution Manager system. The ADS and SAP Solution Manager systems run on the Oracle database.
They are looking for a solution that can fulfill their disaster recovery (DR) needs without the administrative overhead of using multiple solutions for data replication.
Which of the following solutions meets the customer’s requirements?  

A. Use CloudEndure disaster recovery for data replication 
B. Use HANA System Replication (HSR) for data replication
C. Use Oracle DataGuard for data replication
D. Use AWS DataSync for data replication

Correct Answer:  A

Explanation: 

CloudEndure Disaster Recovery is the correct choice here because it provides replication at the block level. CloudEndure Disaster Recovery can be used for protecting critical databases, including Oracle, HANA, MySQL, and Microsoft SQL Server, as well as enterprise applications such as SAP.

HANA System Replication (HSR) for data replication will provide data replication for only HANA databases.
Similarly, Oracle DataGuard provides data replication only for Oracle databases.
AWS DataSync is a data transfer service that moves and replicates data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect. It is not used for database replication. 

Option A is CORRECT because CloudEndure Disaster Recovery provides block-level replication. It is independent of database type and is used at the storage level.

Option B is incorrect because HANA System Replication (HSR) for data replication will only support HANA databases.

Option C is incorrect because Oracle DataGuard for data replication will only support Oracle databases.

Option D is incorrect because AWS DataSync is a file transfer service and is not a disaster recovery or data replication solution.

References: https://aws.amazon.com/blogs/awsforsap/sap-disaster-recovery-solution-using-cloudendure-part-1-failover/, https://docs.cloudendure.com/#Home.htm%3FTocPath%3DNavigation%7C_____1 

 

Domain : Implementation of SAP workloads on AWS 

Question 21 : A customer is running their SAP workloads on premises. The landscape consists of multiple SAP systems running on the SAP Adaptive Server Enterprise (ASE) database on Linux operating systems. The customer is looking for a method to back up the database directly to an S3 bucket on the AWS cloud using the NFS protocol.
Which of the following is a valid solution that meets the customer’s requirement? 

A. Create an Amazon S3 File Gateway using AWS Storage Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share
B. Create an Amazon S3 Volume Gateway using AWS Storage Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share
C. Create an Amazon S3 Tape Gateway using AWS Storage Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share
D. Create an Amazon Transit Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share

Correct Answer: A

Explanation: 

Amazon S3 File Gateway is the correct choice here, as it supports the NFS protocol and is most often used to transfer backups directly to S3 using the NFS or SMB protocols. Amazon S3 Volume Gateway and Amazon S3 Tape Gateway support the iSCSI and iSCSI VTL protocols respectively; they do not support NFS.
AWS Transit Gateway is not a storage gateway service. It is a fully managed service that provides a hub-and-spoke model for connecting VPCs and on-premises networks.

Option A is CORRECT because Amazon S3 File Gateway supports NFS protocol.

Option B is incorrect because Amazon S3 Volume Gateway supports only the iSCSI protocol and does not support the NFS protocol.

Option C is incorrect because Amazon S3 Tape Gateway supports only the iSCSI VTL protocol and does not support the NFS protocol.

Option D is incorrect because Amazon Transit Gateway is not a storage gateway service.

References: Integrate an SAP ASE database to Amazon S3 using AWS Storage Gateway, AWS Storage Gateway | Amazon Web Services  
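On the Linux database host, mounting the File Gateway's NFS share follows the standard pattern from the Storage Gateway documentation. In this sketch the gateway address, export path, mount point, and dump file name are hypothetical.

```shell
# Mount the NFS file share exposed by the Amazon S3 File Gateway
sudo mkdir -p /backup/s3
sudo mount -t nfs -o nolock,hard 10.0.2.25:/sap-ase-backups /backup/s3

# SAP ASE can now write dump files straight to the mounted path,
# and the gateway uploads them to the backing S3 bucket, e.g.:
#   dump database MYDB to '/backup/s3/MYDB_full.dmp'
```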

 

Domain : Implementation of SAP workloads on AWS 

Question 22 : A US-based pharma company is planning to deploy the SAP HANA database on the AWS cloud. The database size is around 10 TB. They will deploy the database in a private subnet of a VPC. They are looking for an operating system to host the HANA database that provides inherent high availability capabilities. 
Which of the following operating systems available in the AWS Marketplace meets the company's requirements? 

A. SUSE Linux Enterprise Server (SLES) 15.1 for SAP
B. SUSE Linux Enterprise Server (SLES) 15.1
C. Red Hat Enterprise Linux (RHEL) 8.1
D. Microsoft Windows Server 2016 

Correct Answer: A

Explanation: 

SUSE and Red Hat provide additional benefits in the SAP editions of their operating systems. SUSE Linux Enterprise Server (SLES) for SAP and Red Hat Enterprise Linux (RHEL) for SAP come with benefits such as extended support, high availability packages, and tuning packages for SAP applications. Hence, SUSE Linux Enterprise Server (SLES) for SAP is the correct choice here.
The regular SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems do not include high availability packages by default; these have to be installed separately. Microsoft Windows Server 2016 is not a supported operating system for the SAP HANA database. 

Option A is CORRECT because SUSE Linux Enterprise Server (SLES) 15.1 for SAP provides additional benefits like extended support, high availability and SAP tuning packages. 

Option B is incorrect because SUSE Linux Enterprise Server (SLES) does not have high availability packages by default.

Option C is incorrect because Red Hat Enterprise Linux (RHEL) does not have high availability packages by default.

Option D is incorrect because Microsoft Windows Server 2016 is not a supported operating system for the SAP HANA database. 

Reference: https://docs.aws.amazon.com/sap/latest/sap-hana/planning-the-deployment.html 

 

Domain : Implementation of SAP workloads on AWS 

Question 23 : A customer is planning for a greenfield implementation of the SAP S/4 HANA system on the AWS cloud. They have performed the sizing of the HANA database using the SAP Quick Sizer report and have the required value for SAPS (SAP Application Performance Standard). The next step is to select an EC2 instance for the HANA database. 
Which of the following sources can the customer refer to in order to choose an appropriate EC2 instance for the HANA database on the AWS cloud? (Select TWO)

A. The customer can refer to the SAP Community Network (SCN) blog for selecting the EC2 instance
B. The customer can refer to the AWS blog for selecting the EC2 instance
C. The customer can refer to the ‘SAP Certified and Supported SAP HANA Hardware Directory’ page for selecting the EC2 instance
D. The customer can refer to ‘SAP Note 1656099 – SAP Applications on AWS: Supported DB/OS and Amazon EC2 products’ for selecting the EC2 instance
E. The customer can refer to the SAP Product Availability Matrix (PAM) for selecting the EC2 instance

Correct Answers: C and D 

Explanation: 

SAP and AWS work together to test and certify Amazon EC2 instance types for SAP on AWS solutions. SAP Note ‘1656099 – SAP Applications on AWS: Supported DB/OS and Amazon EC2 products’ and the SAP ‘Certified and Supported SAP HANA Hardware Directory’ page are the authoritative sources for selecting an EC2 instance for the HANA database. An SAP or AWS blog may mention EC2 instances, but blogs are not to be treated as official sources of information. 

SAP Product Availability Matrix (PAM) provides information about SAP software releases: release types, maintenance durations, planned availability, etc. It does not provide the required details for selecting EC2 instances.

Option A is incorrect because SAP Community Network (SCN) blogs are not an official source of information for certified EC2 instances. 

Option B is incorrect because AWS blogs are not the official source of information for certified EC2 instances.

Option C is CORRECT because the ‘SAP Certified and Supported SAP HANA Hardware Directory’ page is a valid source of information.

Option D is CORRECT because SAP Note ‘1656099 – SAP Applications on AWS: Supported DB/OS’ is a valid source of information.

Option E is incorrect because the SAP Product Availability Matrix (PAM) does not provide the required details for selecting EC2 instances. 

References: https://docs.aws.amazon.com/sap/latest/general/ec2-instance-types-sap.html, Certified and Supported SAP HANA® Hardware Directory 

 

Domain :  Implementation of SAP workloads on AWS

Question 24 : A customer is deploying an SAP S/4 HANA landscape on the AWS cloud. The landscape consists of development (DEV), quality (QAS) and production (PRD) systems. The SAP applications run on the Windows Server 2016 operating system. The DEV and QAS systems are located in a single AWS account, and the PRD system is in a different AWS account. 
The customer is looking for a storage solution for the \usr\sap\trans directory that is scalable and highly available. 
Which of the following AWS storage services meets the customer's requirement? 

A. Amazon Elastic File System (Amazon EFS)
B. Amazon Elastic Block Store (Amazon EBS) 
C. Amazon S3
D. Amazon FSx

Correct Answer: D

Explanation: 

Both Amazon EFS and Amazon FSx can be used as a shared file system for the \usr\sap\trans directory. Both are highly available, scalable, AWS-managed storage services. However, EFS is supported only for Linux-based operating systems; for Windows, SAP recommends using Amazon FSx.

Amazon Elastic Block Store (EBS) is block-level storage and is not recommended for file shares. Amazon S3 is object-based storage and is also not recommended for use as a file share in the SAP context.

Option A is incorrect because Amazon Elastic File System (Amazon EFS) is recommended for Linux-based operating systems only.

Option B is incorrect because Amazon Elastic Block Store (Amazon EBS) is a block level storage and cannot be used as file shares.

Option C is incorrect because Amazon S3 is object based storage and cannot be used as file shares.

Option D is CORRECT because Amazon FSx is the correct choice and recommended storage type for shared file systems on Windows. 

References: Windows on AWS | AWS for SAP, How to setup SAP Netweaver on Windows MSCS for SAP ASCS/ERS on AWS using Amazon FSx  

 

Domain : Implementation of SAP workloads on AWS

Question 25 : A US-based financial company is planning to deploy its SAP S/4 HANA workloads on the AWS cloud. The HANA database will be launched in an EC2 instance with SUSE Linux as an operating system. For /hana/data and /hana/log directory they will be using EBS volumes. The company’s SAP solution architect wants to understand the encryption for EBS volumes. 
Which of the following statements are TRUE for encrypted EBS volume in the AWS cloud? (Select THREE)

A. Data at rest inside the volume is encrypted
B. All data moving between the volume and S3 storage is encrypted
C. All data moving between the volume and the instance is encrypted
D. All snapshots created from the encrypted volume are encrypted
E. All data moving between the volume and EFS storage is encrypted

Correct Answers: A, C and D

Explanation:

In an encrypted EBS volume, the data at rest inside the volume is encrypted using an encryption key. The only data in transit that is encrypted is the data moving between the volume and the instance it is attached to. The snapshots of an encrypted volume are also encrypted. 

Data moving from an EBS volume to S3 or to EFS storage is not encrypted unless data-in-transit encryption such as TLS/SSL is used.

Option A is CORRECT because data at rest inside the volume is encrypted.

Option B is incorrect because data moving between the volume and S3 is not encrypted by default.

Option C is CORRECT because data moving between the volume and the instance is encrypted.

Option D is CORRECT because an encrypted volume’s snapshots are also encrypted.

Option E is incorrect because data moving between the volume and EFS storage is not encrypted by default.

References: Security and Compliance – SAP NetWeaver on AWS, Amazon EBS volumes – Amazon Elastic Compute Cloud 
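These behaviors can be observed with the AWS CLI. The following is a hedged sketch; the Availability Zone, volume size, and all resource IDs are hypothetical.

```shell
# Create an encrypted gp3 volume, e.g. for /hana/data
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 500 \
  --volume-type gp3 \
  --encrypted

# Snapshot it; snapshots of encrypted volumes are themselves encrypted
aws ec2 create-snapshot --volume-id vol-0abc1234

# Verify that the snapshot inherited encryption
aws ec2 describe-snapshots \
  --snapshot-ids snap-0abc1234 \
  --query 'Snapshots[0].Encrypted'
```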

Summary

We hope this blog has given you a clear idea of which topics to focus on for the AWS Certified SAP on AWS Specialty exam. This set of SAP on AWS Specialty Exam (PAS-C01) practice questions should prove helpful in preparing to pass the real exam.

The AWS Certified SAP on AWS exam is aimed mainly at professionals who work with SAP services, and reliable, authentic learning resources for it can be hard to find. At Whizlabs, we provide PAS-C01 exam training resources such as video courses, practice tests and hands-on labs for real-time practice to help you pass the AWS Certified SAP on AWS – Specialty Exam (PAS-C01). 

If you have any further thoughts on these PAS-C01 exam questions, feel free to leave a comment!
