
SAA-C03 Questions and Answers

Question # 6

A transaction-processing company has weekly batch jobs that run on Amazon EC2 instances in an Auto Scaling group. Transaction volume varies, but CPU utilization is always at least 60% during the batch runs. Capacity must be provisioned 30 minutes before the jobs begin.

Engineers currently scale the Auto Scaling group manually. The company needs an automated solution but cannot allocate time to analyze scaling trends.

Which solution will meet these requirements with the least operational overhead?

A.

Create a dynamic scaling policy based on CPU utilization at 60%.

B.

Create a scheduled scaling policy. Set desired, minimum, and maximum capacity. Set recurrence weekly. Set the start time to 30 minutes before the jobs run.

C.

Create a predictive scaling policy that forecasts CPU usage and pre-launches instances 30 minutes before the jobs run.

D.

Create an EventBridge rule that invokes a Lambda function when CPU reaches 60%. The Lambda function increases the Auto Scaling group size by 20%.
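For reference, the scheduled scaling in option B is a single API call per recurrence. A minimal boto3 sketch, where the group name, capacities, and the Sunday 01:30 UTC window are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Weekly scheduled action that provisions capacity 30 minutes before the
# batch jobs begin. Group name, sizes, and schedule are placeholders.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-processing-asg",
    ScheduledActionName="pre-batch-scale-out",
    Recurrence="30 1 * * 0",  # cron, in UTC: every Sunday at 01:30
    MinSize=10,
    MaxSize=20,
    DesiredCapacity=10,
)
```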

Question # 7

An ecommerce company hosts an analytics application on AWS. The company deployed the application to one AWS Region. The application generates 300 MB of data each month. The application stores the data in JSON format. The data must be accessible in milliseconds when needed. The company must retain the data for 30 days. The company requires a disaster recovery solution to back up the data.

Which solution will meet these requirements?

A.

Deploy an Amazon OpenSearch Service cluster in the primary Region and in a second Region. Enable OpenSearch Service cluster replication. Configure the clusters to expire data after 30 days. Modify the application to use OpenSearch Service to store the data.

B.

Deploy an Amazon S3 bucket in the primary Region and in a second Region. Enable versioning on both buckets. Use the Standard storage class. Configure S3 Lifecycle policies to expire objects after 30 days. Configure S3 Cross-Region Replication from the bucket in the primary Region to the bucket in the second Region.

C.

Deploy an Amazon Aurora PostgreSQL global database. Configure cluster replication between the primary Region and a second Region. Use a replicated cluster endpoint during outages in the primary Region.

D.

Deploy an Amazon RDS for PostgreSQL cluster in the same Region where the application is deployed. Configure a read replica in a second Region as a backup.
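To illustrate option B, a sketch of the two bucket-level configurations with boto3; the bucket names, account ID, and replication role ARN are placeholders, and versioning is assumed to be enabled on both buckets already:

```python
import boto3

s3 = boto3.client("s3")

# Expire objects 30 days after creation to satisfy the retention requirement.
s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-data-primary",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-30-days",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {"Days": 30},
        }]
    },
)

# Replicate new objects to the backup bucket in the second Region.
s3.put_bucket_replication(
    Bucket="analytics-data-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "dr-replication",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::analytics-data-backup"},
        }],
    },
)
```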

Question # 8

A company has a single AWS account. The company runs workloads on Amazon EC2 instances in multiple VPCs in one AWS Region. The company also runs workloads in an on-premises data center that connects to the company's AWS account by using AWS Direct Connect.

The company needs all EC2 instances in the VPCs to resolve DNS queries for the internal.example.com domain to the authoritative DNS server that is located in the on-premises data center. The solution must use private communication between the VPCs and the on-premises network. All route tables, network ACLs, and security groups are configured correctly between AWS and the on-premises data center.

Which combination of actions will meet these requirements? (Select THREE.)

A.

Create an Amazon Route 53 inbound endpoint in all the workload VPCs.

B.

Create an Amazon Route 53 outbound endpoint in one of the workload VPCs.

C.

Create an Amazon Route 53 Resolver rule with the Forward type configured to forward queries for internal.example.com to the on-premises DNS server.

D.

Create an Amazon Route 53 Resolver rule with the System type configured to forward queries for internal.example.com to the on-premises DNS server.

E.

Associate the Amazon Route 53 Resolver rule with all the workload VPCs.

F.

Associate the Amazon Route 53 Resolver rule with the workload VPC with the new Route 53 endpoint.
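The correct combination (an outbound endpoint, a FORWARD rule, and associations with all the workload VPCs) maps to three Route 53 Resolver API calls. A boto3 sketch; the subnet, security group, and VPC IDs and the on-premises DNS address are placeholders:

```python
import boto3

resolver = boto3.client("route53resolver")

# Outbound endpoint in one workload VPC (two subnets for availability).
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-dns-1",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaaa1111"},
                 {"SubnetId": "subnet-bbbb2222"}],
)["ResolverEndpoint"]

# FORWARD rule that sends internal.example.com queries on premises.
rule = resolver.create_resolver_rule(
    CreatorRequestId="forward-internal-1",
    RuleType="FORWARD",
    DomainName="internal.example.com",
    ResolverEndpointId=endpoint["Id"],
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],  # on-premises DNS server
)["ResolverRule"]

# Associate the rule with every workload VPC so all instances use it.
for vpc_id in ["vpc-aaaa1111", "vpc-bbbb2222", "vpc-cccc3333"]:
    resolver.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId=vpc_id)
```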

Question # 9

A solutions architect needs to connect a company's corporate network to its VPC to allow on-premises access to its AWS resources. The solution must provide encryption of all traffic between the corporate network and the VPC at the network layer and the session layer. The solution also must provide security controls to prevent unrestricted access between AWS and the on-premises systems.

Which solution meets these requirements?

A.

Configure AWS Direct Connect to connect to the VPC. Configure the VPC route tables to allow and deny traffic between AWS and on premises as required.

B.

Create an IAM policy to allow access to the AWS Management Console only from a defined set of corporate IP addresses. Restrict user access based on job responsibility by using IAM policies and roles.

C.

Configure AWS Site-to-Site VPN to connect to the VPC. Configure route table entries to direct traffic from on premises to the VPC. Configure instance security groups and network ACLs to allow only required traffic from on premises.

D.

Configure AWS Transit Gateway to connect to the VPC. Configure route table entries to direct traffic from on premises to the VPC. Configure instance security groups and network ACLs to allow only required traffic from on premises.

Question # 10


A healthcare company uses an Amazon EMR cluster to process patient data. The data must be encrypted in transit and at rest. Local volumes in the cluster also need to be encrypted. Which solution will meet these requirements?


A.

Create Amazon EBS volumes. Enable encryption. Attach the volumes to the existing EMR cluster.

B.

Create an EMR security configuration that encrypts the data and the volumes as required.

C.

Create an EC2 instance profile for the EMR instances. Configure the instance profile to enforce encryption.

D.

Create a runtime role that has a trust policy for the EMR cluster.
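Option B boils down to one create_security_configuration call whose JSON document covers in-transit, at-rest, and local-disk encryption. A sketch; the KMS key ARN and certificate bundle location are placeholders:

```python
import json

import boto3

emr = boto3.client("emr")

KMS_KEY = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

security_config = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": True,
        "EnableAtRestEncryption": True,
        "InTransitEncryptionConfiguration": {
            "TLSCertificateConfiguration": {
                "CertificateProviderType": "PEM",
                "S3Object": "s3://example-bucket/emr-certs.zip",
            }
        },
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                "EncryptionMode": "SSE-KMS",
                "AwsKmsKey": KMS_KEY,
            },
            # Covers the cluster's local volumes, as the question requires.
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": KMS_KEY,
                "EnableEbsEncryption": True,
            },
        },
    }
}

emr.create_security_configuration(
    Name="patient-data-encryption",
    SecurityConfiguration=json.dumps(security_config),
)
```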

Question # 11

A company is developing software that uses a PostgreSQL database schema. The company needs to configure development environments and test environments for its developers.

Each developer at the company uses their own development environment, which includes a PostgreSQL database. On average, each development environment is used for an 8-hour workday. The test environments will be used for load testing that can take up to 2 hours each day.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure development environments and test environments with their own Amazon Aurora Serverless v2 PostgreSQL database.

B.

For each development environment, configure an Amazon RDS for PostgreSQL Single-AZ DB instance. For the test environment, configure a single Amazon RDS for PostgreSQL Multi-AZ DB instance.

C.

Configure development environments and test environments with their own Amazon Aurora PostgreSQL DB cluster.

D.

Configure an Amazon Aurora global database. Allow developers to connect to the database with their own credentials.

Question # 12

A company uses an Amazon RDS MySQL database to store data for several applications. The company wants to understand use patterns for the database so the company can identify opportunities to optimize costs.

A solutions architect needs to analyze the RDS DB instance to identify right-sizing opportunities.

Which solution will meet these requirements with the LEAST effort?

A.

Enable AWS CloudTrail data events. Use Amazon Athena to query CloudTrail events. Right-size the RDS DB instance based on the number of transactions.

B.

Enable Performance Insights for the RDS DB instance. Right-size the RDS DB instance based on the maximum CPU utilization.

C.

Enable AWS X-Ray to understand the transactions that run on the RDS DB instance. Right-size the RDS DB instance based on the number of transactions.

D.

Enable Amazon CloudWatch Logs for the applications. Aggregate the data from CloudWatch Logs for all the applications. Right-size the RDS DB instance based on the aggregated logs.

Question # 13

A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail locations communicate with the web application over the public internet. The company allows each retail location to register the IP address that the retail location has been allocated by its local ISP.

The company's security team recommends increasing the security of the application endpoint by restricting access to only the IP addresses registered by the retail locations.

What should a solutions architect do to meet these requirements?

A.

Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.

B.

Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.

C.

Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on the ALB to validate that incoming requests are from the registered IP addresses.

D.

Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress rules on the network ACL with entries for each of the registered IP addresses.
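Option A in practice: create a REGIONAL IP set, reference it from a web ACL that blocks by default, and associate the web ACL with the ALB. A boto3 sketch; the names, example CIDRs, and ALB ARN are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2")

ip_set = wafv2.create_ip_set(
    Name="registered-storefront-ips",
    Scope="REGIONAL",  # REGIONAL scope is what ALB associations use
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "198.51.100.24/32"],
)["Summary"]

web_acl = wafv2.create_web_acl(
    Name="storefront-allow-list",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},  # block everything not explicitly allowed
    Rules=[{
        "Name": "allow-registered-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowRegisteredIps",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "StorefrontAllowList",
    },
)["Summary"]

wafv2.associate_web_acl(
    WebACLArn=web_acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/0123456789abcdef",
)
```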

Question # 14

A company has a VPC with multiple private subnets that host multiple applications. The applications must not be accessible to the internet. However, the applications need to access multiple AWS services. The applications must not use public IP addresses to access the AWS services.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure interface VPC endpoints for the required AWS services. Route traffic from the private subnets through the interface VPC endpoints.

B.

Deploy a NAT gateway in each private subnet. Route traffic from the private subnets through the NAT gateways.

C.

Deploy internet gateways in each private subnet. Route traffic from the private subnets through the internet gateways.

D.

Set up an AWS Direct Connect connection between the private subnets. Route traffic from the private subnets through the Direct Connect connection.
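Option A is one create_vpc_endpoint call per service. A boto3 sketch; the VPC, subnet, and security group IDs and the service list are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# One interface endpoint per required service, placed in the private subnets.
for service_name in ["com.amazonaws.us-east-1.sqs",
                     "com.amazonaws.us-east-1.sns",
                     "com.amazonaws.us-east-1.logs"]:
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=service_name,
        SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,  # SDKs keep using the default service hostnames
    )
```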

Question # 15

A company has multiple Amazon RDS DB instances that run in a development AWS account. All the instances have tags to identify them as development resources. The company needs the development DB instances to run on a schedule only during business hours.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create an Amazon CloudWatch alarm to identify RDS instances that need to be stopped. Create an AWS Lambda function to start and stop the RDS instances.

B.

Create an AWS Trusted Advisor report to identify RDS instances to be started and stopped. Create an AWS Lambda function to start and stop the RDS instances.

C.

Create AWS Systems Manager State Manager associations to start and stop the RDS instances.

D.

Create an Amazon EventBridge rule that invokes AWS Lambda functions to start and stop the RDS instances.
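For option D, the EventBridge schedule does the triggering; the function body can be as small as this sketch of the stop-side handler. The tag key and value are assumptions, and a mirror-image handler would start the instances each morning:

```python
import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Stop development DB instances; invoked by an EventBridge schedule."""
    paginator = rds.get_paginator("describe_db_instances")
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            tags = rds.list_tags_for_resource(
                ResourceName=db["DBInstanceArn"])["TagList"]
            is_dev = any(tag["Key"] == "environment"
                         and tag["Value"] == "development" for tag in tags)
            if is_dev and db["DBInstanceStatus"] == "available":
                rds.stop_db_instance(
                    DBInstanceIdentifier=db["DBInstanceIdentifier"])
```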

Question # 16

A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.

Which combination of storage and caching should the solutions architect use?

A.

Amazon S3 with Amazon CloudFront

B.

Amazon S3 Glacier Deep Archive with Amazon ElastiCache

C.

Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront

D.

AWS Storage Gateway with Amazon ElastiCache

Question # 17

A company is developing an application using Amazon Aurora MySQL. The team will frequently make schema changes to test new features without affecting production. After testing, changes must be promoted to production with minimal downtime.

Which solution meets these requirements?

A.

Create a staging Aurora cluster based on the existing cluster. Test schema changes on the staging cluster.

B.

Create a read replica, modify its schema, and then promote it to primary.

C.

Create an Aurora MySQL blue/green deployment. Make schema changes in the staging environment and switch traffic after testing.

D.

Replicate the Aurora database to DynamoDB, apply schema changes, and switch the application to DynamoDB.

Question # 18

A company's solutions architect is building a static website to be deployed in Amazon S3 for a production environment. The website integrates with an Amazon Aurora PostgreSQL database by using an AWS Lambda function. The website that is deployed to production will use a Lambda alias that points to a specific version of the Lambda function.

The company must rotate the database credentials every 2 weeks. Lambda functions that the company deployed previously must be able to use the most recent credentials.

Which solution will meet these requirements?

A.

Store the database credentials in AWS Secrets Manager. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Secrets Manager.

B.

Include the database credentials as part of the Lambda function code. Update the credentials periodically and deploy the new Lambda function.

C.

Use Lambda environment variables. Update the environment variables when new credentials are available.

D.

Store the database credentials in AWS Systems Manager Parameter Store. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Systems Manager Parameter Store.
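The key detail in option A is that the function fetches the secret at invocation time, so every previously published Lambda version behind the alias picks up rotated values. A sketch of the retrieval code; the secret name is a placeholder:

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Read the current secret on each invocation so rotated credentials
    # are always used, even by older function versions behind the alias.
    secret = secrets.get_secret_value(SecretId="prod/aurora/credentials")
    creds = json.loads(secret["SecretString"])
    username, password = creds["username"], creds["password"]
    # ... open the Aurora PostgreSQL connection with username/password ...
    return {"statusCode": 200}
```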

Question # 19

A company uses a Microsoft SQL Server database. The applications currently connect using SQL Server protocols. The company wants to migrate to Amazon Aurora PostgreSQL with minimal changes to application code.

Which combination of steps will meet these requirements? (Select TWO.)

A.

Use AWS SCT to rewrite SQL queries in the applications.

B.

Enable Babelfish on Aurora PostgreSQL to run SQL Server queries.

C.

Migrate the database schema and data using AWS SCT and AWS DMS.

D.

Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.

E.

Use AWS DMS to rewrite SQL queries in the applications.

Question # 20

A media company hosts a web application on AWS. The application gives users the ability to upload and view videos. The application stores the videos in an Amazon S3 bucket. The company wants to ensure that only authenticated users can upload videos. Authenticated users must have the ability to upload videos only within a specified time frame after authentication. Which solution will meet these requirements with the LEAST operational overhead?

A.

Configure the application to generate IAM temporary security credentials for authenticated users.

B.

Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.

C.

Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.

D.

Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
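A pre-signed URL (option B) carries its own expiry, which gives the required time-boxed upload window. A sketch of the generating Lambda function; the bucket name, key layout, and event fields are assumptions:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Return a time-limited upload URL after the user authenticates."""
    key = f"uploads/{event['userId']}/{event['fileName']}"
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "example-video-uploads", "Key": key},
        ExpiresIn=900,  # uploads allowed for 15 minutes after authentication
    )
    return {"uploadUrl": url}
```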

Question # 21

A solutions architect is designing the network architecture for an application that runs on Amazon EC2 instances in an Auto Scaling group. The application needs to access data that is in Amazon S3 buckets.

Traffic to the S3 buckets must not use public IP addresses. The solutions architect will deploy the application in a VPC that has public and private subnets.

Which solutions will meet these requirements? (Select TWO.)

A.

Deploy the EC2 instances in a private subnet. Configure a default route to an egress-only internet gateway.

B.

Deploy the EC2 instances in a public subnet. Create a gateway endpoint for Amazon S3. Associate the endpoint with the subnet's route table.

C.

Deploy the EC2 instances in a public subnet. Create an interface endpoint for Amazon S3. Configure DNS hostnames and DNS resolution for the VPC.

D.

Deploy the EC2 instances in a private subnet. Configure a default route to a NAT gateway in a public subnet.

E.

Deploy the EC2 instances in a private subnet. Configure a default route to a customer gateway.
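For option B, a gateway endpoint is free and works through the subnet's route table. A boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; the route table entry is added automatically
# for the tables listed here, so instance traffic to S3 stays private.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```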

Question # 22

A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the storage performance of the cluster as the load on the application increases.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure the cluster to use the Aurora Standard storage configuration.

B.

Configure the cluster storage type as Provisioned IOPS.

C.

Configure the cluster storage type as General Purpose.

D.

Configure the cluster to use the Aurora I/O-Optimized storage configuration.

Question # 23

A company has deployed a multi-tier web application to support a website. The architecture includes an Application Load Balancer (ALB) in public subnets, two Amazon Elastic Container Service (Amazon ECS) tasks in the public subnets, and a PostgreSQL cluster that runs on Amazon EC2 instances in private subnets.

The EC2 instances that host the PostgreSQL database run shell scripts that need to access an external API to retrieve product information. A solutions architect must design a solution to allow the EC2 instances to securely communicate with the external API without increasing operational overhead.

Which solution will meet these requirements?

A.

Assign public IP addresses to the EC2 instances in the private subnets. Configure security groups to allow outbound internet access.

B.

Configure a NAT gateway in the public subnets. Update the route table for the private subnets to route traffic to the NAT gateway.

C.

Configure a VPC peering connection between the private subnets and a public subnet that has access to the external API.

D.

Deploy an interface VPC endpoint to securely connect to the external API.

Question # 24

A company needs to accommodate traffic for a web application that the company hosts on AWS, especially during peak usage hours.

The application uses Amazon EC2 instances as web servers, an Amazon RDS DB instance for database operations, and an Amazon S3 bucket to store transaction documents. The application struggles to scale effectively and experiences performance issues.

The company wants to improve the scalability of the application and prevent future performance issues. The company also wants to improve global access speeds to the transaction documents for the company's global users.

Which solution will meet these requirements?

A.

Place the EC2 instances in Auto Scaling groups to scale appropriately during peak usage hours. Use Amazon RDS read replicas to improve database read performance. Deploy an Amazon CloudFront distribution that uses Amazon S3 as the origin.

B.

Increase the size of the EC2 instances to provide more compute capacity. Use Amazon ElastiCache to reduce database read loads. Use AWS Global Accelerator to optimize the delivery of the transaction documents that are in the S3 bucket.

C.

Transition workloads from the EC2 instances to AWS Lambda functions to scale in response to the usage peaks. Migrate the database to an Amazon Aurora global database to provide cross-Region reads. Use AWS Global Accelerator to deliver the transaction documents that are in the S3 bucket.

D.

Convert the application architecture to use Amazon Elastic Container Service (Amazon ECS) containers. Configure a Multi-AZ deployment of Amazon RDS to support database operations. Replicate the transaction documents that are in the S3 bucket across multiple AWS Regions.

Question # 25

A company is building a web application. The company needs a load balancing solution that supports HTTPS header-based routing. The company's security team also requires a rules-based method of blocking specific incoming requests to decrease the effects of malicious activity.

Which solution will meet these requirements?

A.

Create an Application Load Balancer (ALB). Configure an HTTPS listener with mutual TLS enabled.

B.

Create an Application Load Balancer (ALB). Integrate the ALB with AWS WAF. Configure the security team's required rules.

C.

Create an Application Load Balancer (ALB). Integrate the ALB with AWS Config. Apply custom rules to all ALB resources.

D.

Create a Network Load Balancer (NLB). Configure AWS Network Firewall with the security team's required rules.

Question # 26

A gaming company has a web application that displays game scores. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The application stores data in an Amazon RDS for MySQL database.

Users are experiencing long delays and interruptions caused by degraded database read performance. The company wants to improve the user experience.

Which solution will meet this requirement?

A.

Use an Amazon ElastiCache (Redis OSS) cache in front of the database.

B.

Use Amazon RDS Proxy between the application and the database.

C.

Migrate the application from EC2 instances to AWS Lambda functions.

D.

Use an Amazon Aurora Global Database to create multiple read replicas across multiple AWS Regions.

Question # 27

A company is storing data that will not be frequently accessed in the AWS Cloud. If the company needs to access the data, the data must be retrieved within 12 hours. The company wants a solution that is cost-effective for storage costs per gigabyte.

Which Amazon S3 storage class will meet these requirements?

A.

S3 Standard

B.

S3 Glacier Flexible Retrieval

C.

S3 One Zone-Infrequent Access (S3 One Zone-IA)

D.

S3 Standard-Infrequent Access (S3 Standard-IA)

Question # 28

A company runs game applications on AWS. The company needs to collect, visualize, and analyze telemetry data from the company's game servers. The company wants to gain insights into the behavior, performance, and health of game servers in near real time. Which solution will meet these requirements?

A.

Use Amazon Kinesis Data Streams to collect telemetry data. Use Amazon Managed Service for Apache Flink to process the data in near real time and publish custom metrics to Amazon CloudWatch. Use Amazon CloudWatch to create dashboards and alarms from the custom metrics.

B.

Use Amazon Data Firehose to collect, process, and store telemetry data in near real time. Use AWS Glue to extract, transform, and load (ETL) data from Firehose into required formats for analysis. Use Amazon QuickSight to visualize and analyze the data.

C.

Use Amazon Kinesis Data Streams to collect, process, and store telemetry data. Use Amazon EMR to process the data in near real time into required formats for analysis. Use Amazon Athena to analyze and visualize the data.

D.

Use Amazon DynamoDB Streams to collect and store telemetry data. Configure DynamoDB Streams to invoke AWS Lambda functions to process the data in near real time. Use Amazon Managed Grafana to visualize and analyze the data.

Question # 29

A solutions architect is designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The company strictly requires that the application be resilient against malicious internet activity and attacks, and protect against new common vulnerabilities and exposures.

What should the solutions architect recommend?

A.

Leverage Amazon CloudFront with the ALB endpoint as the origin.

B.

Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.

C.

Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.

D.

Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.

Question # 30

A company has deployed resources in the us-east-1 Region. The company also uses thousands of AWS Outposts servers deployed at remote locations around the world. These Outposts servers regularly download new software versions from us-east-1 that consist of hundreds of files. The company wants to improve the latency of the software download process.

Which solution will meet these requirements?

A.

Create an Amazon S3 bucket in us-east-1. Configure the bucket for static website hosting. Use bucket policies and ACLs to provide read access to the Outposts servers.

B.

Create an Amazon S3 bucket in us-east-1 and a second bucket in us-west-2. Configure replication. Set up a CloudFront distribution with origin failover between the buckets. Download by using signed URLs.

C.

Create an Amazon S3 bucket in us-east-1. Configure S3 Transfer Acceleration. Configure the Outposts servers to download by using the acceleration endpoint.

D.

Create an Amazon S3 bucket in us-east-1. Set up a CloudFront distribution using all edge locations with caching enabled. Configure the bucket as the origin. Download the software by using signed URLs.

Question # 31

A news company that has reporters all over the world is hosting its broadcast system on AWS. The reporters send live broadcasts to the broadcast system. The reporters use software on their phones to send live streams through the Real Time Messaging Protocol (RTMP).

A solutions architect must design a solution that gives the reporters the ability to send the highest quality streams. The solution must provide accelerated TCP connections back to the broadcast system.

What should the solutions architect use to meet these requirements?

A.

Amazon CloudFront

B.

AWS Global Accelerator

C.

AWS Client VPN

D.

Amazon EC2 instances and Elastic IP addresses

Question # 32

A company needs to ingest and analyze telemetry data from vehicles at scale for machine learning and reporting.

Which solution will meet these requirements?

A.

Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon QuickSight to visualize the data.

B.

Use Amazon DynamoDB to store data points. Use DynamoDB Connector to ingest data into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.

C.

Use Amazon Neptune to store data points. Use Amazon Kinesis Data Streams to ingest data into a Lambda function for processing. Use Amazon QuickSight to visualize the data.

D.

Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon Athena to visualize the data.

Question # 33

A data science team requires storage for nightly log processing. The size and number of logs are unknown, and the logs will persist for 24 hours only.

What is the MOST cost-effective solution?

A.

Amazon S3 Glacier Deep Archive

B.

Amazon S3 Standard

C.

Amazon S3 Intelligent-Tiering

D.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Question # 34

A disaster response team is using drones to collect images of recent storm damage. The response team's laptops lack the storage and compute capacity to transfer the images and process the data.

While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage, network connectivity is intermittent and unreliable. The images need to be processed to evaluate the damage.

What should a solutions architect recommend?

A.

Use AWS Snowball Edge devices to process and store the images.

B.

Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent connectivity to EC2 instances.

C.

Configure Amazon Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing images.

D.

Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for upload to Amazon S3. Process the images when connectivity becomes available.

Question # 35

A company is planning to deploy a managed MySQL database solution for its non-production applications. The company plans to run the system for several years on AWS. Which solution will meet these requirements MOST cost-effectively?

A.

Create an Amazon RDS for MySQL instance. Purchase a Reserved Instance.

B.

Create an Amazon RDS for MySQL instance. Use the instance on an on-demand basis.

C.

Create an Amazon Aurora MySQL cluster with writer and reader nodes. Use the cluster on an on-demand basis.

D.

Create an Amazon EC2 instance. Manually install and configure MySQL Server on the instance.

Question # 36

A company wants to migrate from an on-premises data center to AWS. The data center hosts a storage server that stores data in an NFS-based file system. The storage server stores 200 GB of data. The company needs to migrate the data without interruption to existing services. Multiple resources in AWS must be able to access the data by using the NFS protocol.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

A.

Create an Amazon FSx for Lustre file system.

B.

Create an Amazon Elastic File System (Amazon EFS) file system.

C.

Create an Amazon S3 bucket to receive the data.

D.

Create an Amazon FSx for Windows file system.

E.

Install an AWS DataSync agent in the on-premises data center. Use a DataSync task between the on-premises file system and the AWS file system.

Question # 37

As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.

Which solution meets these requirements?

A.

Run a query with Amazon Athena to generate the report.

B.

Create a report in Cost Explorer and download the report.

C.

Access the bill details from the billing dashboard and download the bill.

D.

Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).

Question # 38

A healthcare company is designing a system to store and manage logs in the AWS Cloud. The system ingests and stores logs in JSON format that contain sensitive patient information. The company must identify any sensitive data and must be able to search the log data by using SQL queries.

Which solution will meet these requirements?

A.

Store the logs in an Amazon S3 bucket. Configure Amazon Macie to discover sensitive data. Use Amazon Athena to query the logs.

B.

Store the logs in an Amazon EBS volume. Create an application that uses Amazon SageMaker AI to detect sensitive data. Use Amazon RDS to query the logs.

C.

Store the logs in Amazon DynamoDB. Use AWS KMS to discover sensitive data. Use Amazon Redshift Spectrum to query the logs.

D.

Store the logs in an Amazon S3 bucket. Use Amazon Inspector to discover sensitive data. Use Amazon Athena to query the logs.

Question # 39

A company plans to deploy containerized microservices in the AWS Cloud. The containers must mount a persistent file store that the company can manage by using OS-level permissions. The company requires fully managed services to host the containers and file store.

Which solution will meet these requirements?

A.

Use AWS Lambda functions and an Amazon API Gateway REST API to handle the microservices. Use Amazon S3 buckets for storage.

B.

Use Amazon EC2 instances to host the microservices. Use Amazon Elastic Block Store (Amazon EBS) volumes for storage.

C.

Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to handle the microservices. Use an Amazon Elastic File System (Amazon EFS) file system for storage.

D.

Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to handle the microservices. Use an Amazon EC2 instance that runs a dedicated file store for storage.

Question # 40

A company uses Amazon Redshift to store structured data and Amazon S3 to store unstructured data. The company wants to analyze the stored data and create business intelligence reports. The company needs a data visualization solution that is compatible with Amazon Redshift and Amazon S3.

Which solution will meet these requirements?

A.

Use Amazon Redshift query editor v2 to analyze data stored in Amazon Redshift. Use Amazon Athena to analyze data stored in Amazon S3. Use Amazon QuickSight to access Amazon Redshift and Athena, visualize the data analyses, and create business intelligence reports.

B.

Use Amazon Redshift Serverless to analyze data stored in Amazon Redshift. Use Amazon S3 Object Lambda to analyze data stored in Amazon S3. Use Amazon Managed Grafana to access Amazon Redshift and Object Lambda, visualize the data analyses, and create business intelligence reports.

C.

Use Amazon Redshift Spectrum to analyze data stored in Amazon Redshift. Use Amazon Athena to analyze data stored in Amazon S3. Use Amazon QuickSight to access Amazon Redshift and Athena, visualize the data analyses, and create business intelligence reports.

D.

Use Amazon OpenSearch Service to analyze data stored in Amazon Redshift and Amazon S3. Use Amazon Managed Grafana to access OpenSearch Service, visualize the data analyses, and create business intelligence reports.

Question # 41

A company hosts an application on AWS that uses an Amazon S3 bucket and an Amazon Aurora database. The company wants to implement a multi-Region disaster recovery (DR) strategy that minimizes potential data loss.

Which solution will meet these requirements?

A.

Create an Aurora read replica in a second Availability Zone within the same AWS Region. Enable S3 Versioning for the bucket.

B.

Create an Aurora read replica in a second AWS Region. Configure AWS Backup to create continuous backups of the S3 bucket to a second bucket in a second Availability Zone.

C.

Enable Aurora native database backups across multiple AWS Regions. Use S3 cross-account backups within the company's local Region.

D.

Migrate the database to an Aurora global database. Create a second S3 bucket in a second Region. Configure Cross-Region Replication.

Question # 42

A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities.

A solutions architect must identify a highly available cloud storage solution that can handle large amounts of sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances that will simultaneously access and process the entire dataset.

What should the solutions architect do to meet these requirements?

A.

Use Amazon FSx for Lustre scratch file systems

B.

Use Amazon FSx for Lustre persistent file systems.

C.

Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.

D.

Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

Question # 43

A company is creating a new application that will store a large amount of data. The data will be analyzed hourly and will be modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones. The needed amount of storage space will continue to grow for the next 6 months.

Which storage solution should a solutions architect recommend to meet these requirements?

A.

Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.

B.

Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.

C.

Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.

D.

Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.

Question # 44

A company has an application that runs on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on Amazon EC2 instances. The application has a UI that uses Amazon DynamoDB and data services that use Amazon S3 as part of the application deployment.

The company must ensure that the EKS Pods for the UI can access only Amazon DynamoDB and that the EKS Pods for the data services can access only Amazon S3. The company uses AWS Identity and Access Management (IAM).

Which solution meets these requirements?

A.

Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach both IAM policies to the EC2 instance profile. Use role-based access control (RBAC) to control access to Amazon S3 or DynamoDB for the respective EKS Pods.

B.

Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach the Amazon S3 IAM policy directly to the EKS Pods for the data services and the DynamoDB policy to the EKS Pods for the UI.

C.

Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Attach the AmazonS3FullAccess policy to the data services account and the AmazonDynamoDBFullAccess policy to the UI service account.

D.

Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Use IAM Roles for Service Accounts (IRSA) to give the EKS Pods for the UI access to DynamoDB and the EKS Pods for the data services access to Amazon S3.

Question # 45

A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.

How should a solutions architect design the architecture to meet these requirements?

A.

Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.

B.

Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.

C.

Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.

D.

Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
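Option B's scaling trigger can be a target-tracking policy on a queue metric. A boto3 sketch that tracks raw queue depth for brevity (in practice the usual target is a backlog-per-instance custom metric); the names and target value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # keep roughly 100 visible messages in the queue
    },
)
```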

Question # 46

A global company runs a data lake application in the us-east-1 Region and the eu-west-1 Region in an active-passive configuration. Application data is stored locally in Amazon S3 buckets in each AWS Region. The bucket in us-east-1 is the primary active bucket that handles all writes. The company needs to ensure that the application has Regional fault tolerance. The company also needs the storage layer to provide a highly available active-active capability for reads across Regions. The storage layer must provide low latency access through a single global endpoint.

Which solution will meet these requirements?

A.

Create an Amazon CloudFront distribution in each Region. Set the S3 bucket within each Region as the origin for the CloudFront distribution in the same Region.

B.

Use S3 Transfer Acceleration for cross-Region data transfers to the S3 buckets.

C.

Configure AWS Backup to replicate S3 buckets across Regions. Set up a disaster recovery environment.

D.

Create an S3 Multi-Region Access Point. Configure cross-Region replication.

Question # 47

A company has an application that processes information from documents that users upload. When a user uploads a new document to an Amazon S3 bucket, an AWS Lambda function is invoked. The Lambda function processes information from the documents.

The company discovers that the application did not process many recently uploaded documents. The company wants to ensure that the application processes each document with retries if there is an error during the first attempt to process the document.

Which solution will meet these requirements?

A.

Create an Amazon API Gateway REST API that has a proxy integration to the Lambda function. Update the application to send requests to the REST API.

B.

Configure a replication policy on the S3 bucket to stage the documents in another S3 bucket that an AWS Batch job processes on a daily schedule.

C.

Deploy an Application Load Balancer in front of the Lambda function that processes the documents.

D.

Configure an Amazon Simple Queue Service (Amazon SQS) queue as an event source for the Lambda function. Configure an S3 event notification on the S3 bucket to send new document upload events to the SQS queue.
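Option D wires together two pieces: the bucket notification and the queue-to-function event source mapping, which provides automatic retries (and, optionally, a dead-letter queue). A boto3 sketch with placeholder names and ARNs:

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Send new-object events from the bucket to the SQS queue.
s3.put_bucket_notification_configuration(
    Bucket="document-uploads",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:documents-queue",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Drive the processing function from the queue; failed batches become
# visible again and are retried automatically.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:documents-queue",
    FunctionName="process-documents",
    BatchSize=10,
)
```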

Question # 48

A company runs a MySQL database on a single Amazon EC2 instance.

The company needs to improve availability of the database to prepare for power outages.

Which solution will meet this requirement?

A.

Add an Application Load Balancer (ALB) in front of the EC2 instance.

B.

Configure EC2 automatic instance recovery to move the instance to another Availability Zone.

C.

Migrate the MySQL database to Amazon RDS and enable Multi-AZ deployment.

D.

Enable termination protection for the EC2 instance.

Question # 49

A company is implementing a new application on AWS. The company will run the application on multiple Amazon EC2 instances across multiple Availability Zones within multiple AWS Regions. The application will be available through the internet. Users will access the application from around the world.

The company wants to ensure that each user who accesses the application is sent to the EC2 instances that are closest to the user's location.

Which solution will meet these requirements?

A.

Implement an Amazon Route 53 geolocation routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region.

B.

Implement an Amazon Route 53 geoproximity routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.

C.

Implement an Amazon Route 53 multivalue answer routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region.

D.

Implement an Amazon Route 53 weighted routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.

Question # 50

A company asks a solutions architect to review the architecture for its messaging application. The application uses TCP and UDP traffic. The company is planning to deploy a new VoIP feature, but its 10 test users in other countries are reporting poor call quality.

The VoIP application runs on an Amazon EC2 instance with more than enough resources. The HTTP portion of the company's application behind an Application Load Balancer has no issues.

What should the solutions architect recommend for the company to do to address the VoIP performance issues?

A.

Use AWS Global Accelerator.

B.

Implement Amazon CloudFront into the architecture.

C.

Use an Amazon Route 53 geoproximity routing policy.

D.

Migrate from Application Load Balancers to Network Load Balancers.

Question # 51

A company provides devices to users. When a device is registered, its ID is added to an Amazon DynamoDB table. A daily job activates the devices by using two AWS Lambda functions:

• The Retrieve function lists the IDs of devices that have not yet been activated.

• The Retrieve function then calls the Activate function in a loop to activate each device.

The number of activations is increasing, and the company wants to avoid Lambda timeouts without modifying existing functions.

Which solution will scale appropriately?

A.

Use EventBridge Scheduler to periodically invoke the Retrieve function.

B.

Invoke the Activate function from DynamoDB Streams when a device ID is added.

C.

Use Step Functions to call the Retrieve function and use a Map state to run the Activate function for each ID.

D.

Move the Retrieve function to EC2 for longer processing time.

Question # 52

A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.

Which combination of steps will meet these requirements? (Select TWO.)

A.

Create a public Network Load Balancer. Specify the application target group.

B.

Create a Gateway Load Balancer. Specify the application target group.

C.

Create a public Application Load Balancer. Specify the application target group.

D.

Create a second target group. Add Elastic IP addresses to the EC2 instances.

E.

Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.

Question # 53

An ecommerce company is preparing to deploy a web application on AWS to ensure continuous service for customers. The architecture includes a web application that the company hosts on Amazon EC2 instances, a relational database in Amazon RDS, and static assets that the company stores in Amazon S3.

The company wants to design a robust and resilient architecture for the application.

Which solution will meet these requirements?

A.

Deploy Amazon EC2 instances in a single Availability Zone. Deploy an RDS DB instance in the same Availability Zone. Use Amazon S3 with versioning enabled to store static assets.

B.

Deploy Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy a Multi-AZ RDS DB instance. Use Amazon CloudFront to distribute static assets.

C.

Deploy Amazon EC2 instances in a single Availability Zone. Deploy an RDS DB instance in a second Availability Zone for cross-AZ redundancy. Serve static assets directly from the EC2 instances.

D.

Use AWS Lambda functions to serve the web application. Use Amazon Aurora Serverless v2 for the database. Store static assets in Amazon Elastic File System (Amazon EFS) One Zone-Infrequent Access (One Zone-IA).

Question # 54

A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.

Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.

The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the application uses is below the provisioned capacity.

How should the developer resolve this issue?

A.

Change the capacity mode from provisioned to on-demand.

B.

Double the number of shards until the throttling errors stop occurring.

C.

Change the partition key from service name to creation timestamp.

D.

Use a separate Kinesis stream for each service to generate the logs.
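If option A is the chosen fix, the mode switch is a single API call; the stream ARN is a placeholder:

```python
import boto3

kinesis = boto3.client("kinesis")

# On-demand mode removes per-stream write provisioning and scales shards
# automatically, so capacity no longer has to be planned around the
# uneven distribution produced by the service-name partition key.
kinesis.update_stream_mode(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/service-logs",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)
```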

Question # 55

A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns.

Which solution will meet these requirements?

A.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning (ML) models to analyze the transcript files.

B.

Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena to analyze the transcript files.

C.

Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries to analyze the transcript files.

D.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract to analyze the transcript files.

Question # 56

A media company hosts its video processing workload on AWS. The workload uses Amazon EC2 instances in an Auto Scaling group to handle varying levels of demand. The workload stores the original videos and the processed videos in an Amazon S3 bucket.

The company wants to ensure that the video processing workload is scalable. The company wants to prevent failed processing attempts because of resource constraints. The architecture must be able to handle sudden spikes in video uploads without impacting the processing capability.

Which solution will meet these requirements with the LEAST overhead?

A.

Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Configure an Amazon S3 event notification to invoke the Lambda functions when a new video is uploaded. Configure the Lambda functions to process videos directly and to save processed videos back to the S3 bucket.

B.

Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Use Amazon S3 to invoke an Amazon Simple Notification Service (Amazon SNS) topic when a new video is uploaded. Subscribe the Lambda functions to the SNS topic. Configure the Lambda functions to process the videos asynchronously and to save processed videos back to the S3 bucket.

C.

Configure an Amazon S3 event notification to send a message to an Amazon Simple Queue Service (Amazon SQS) queue when a new video is uploaded. Configure the existing Auto Scaling group to poll the SQS queue, process the videos, and save processed videos back to the S3 bucket.

D.

Configure an Amazon S3 upload trigger to invoke an AWS Step Functions state machine when a new video is uploaded. Configure the state machine to orchestrate the video processing workflow by placing a job message in an Amazon Simple Queue Service (Amazon SQS) queue. Configure the job message to invoke the EC2 instances to process the videos. Save processed videos back to the S3 bucket.

Question # 57

A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers. The company wants to reduce the number of these servers by moving to the AWS Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with a minimal number of infrastructure changes.

Which solution will meet these requirements?

A.

Deploy an Amazon S3 File Gateway

B.

Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3

C.

Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes

D.

Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

Question # 58

A company launches a new web application that uses an Amazon Aurora PostgreSQL database. The company wants to add new features to the application that rely on AI. The company requires vector storage capability to use AI tools.

Which solution will meet this requirement MOST cost-effectively?

A.

Use Amazon OpenSearch Service to create an OpenSearch domain. Configure the application to write vector embeddings to a vector index.

B.

Create an Amazon DocumentDB cluster. Configure the application to write vector embeddings to a vector index.

C.

Create an Amazon Neptune ML cluster. Configure the application to write vector embeddings to a vector graph.

D.

Install the pgvector extension on the Aurora PostgreSQL database. Configure the application to write vector embeddings to a vector table.
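Option D in practice: enable the extension, store embeddings in a vector column, and query by distance. A sketch using psycopg2; the connection details, table layout, and 1536-dimension assumption are illustrative:

```python
import psycopg2  # assumes the psycopg2-binary package is installed

conn = psycopg2.connect(
    host="example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="app",
    user="app_user",
    password="example-password",  # placeholder; use Secrets Manager in practice
)

with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS embeddings (
            id        bigserial PRIMARY KEY,
            content   text,
            embedding vector(1536)  -- must match the embedding model's size
        );
    """)
    # Nearest-neighbor lookup by cosine distance (dummy query vector).
    query_vec = "[" + ",".join(["0.1"] * 1536) + "]"
    cur.execute(
        "SELECT id, content FROM embeddings "
        "ORDER BY embedding <=> %s::vector LIMIT 5;",
        (query_vec,),
    )
    nearest = cur.fetchall()
```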

Question # 59

An application uses an Amazon SQS queue and two AWS Lambda functions. One of the Lambda functions pushes messages to the queue, and the other function polls the queue and receives queued messages.

A solutions architect needs to ensure that only the two Lambda functions can write to or read from the queue.

Which solution will meet these requirements?

A.

Attach an IAM policy to the SQS queue that grants the Lambda function principals read and write access. Attach an IAM policy to the execution role of each Lambda function that denies all access to the SQS queue except for the principal of each function.

B.

Attach a resource-based policy to the SQS queue to deny read and write access to the queue for any entity except the principal of each Lambda function. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.

C.

Attach a resource-based policy to the SQS queue that grants the Lambda function principals read and write access to the queue. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.

D.

Attach a resource-based policy to the SQS queue to deny all access to the queue. Attach an IAM policy to the execution role of each Lambda function that grants read and write access to the queue.
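Option B hinges on a resource-based queue policy that denies everything except the two function roles, paired with ordinary allow policies on the execution roles. A sketch of the queue-policy half; the account ID, queue, and role names are placeholders:

```python
import json

import boto3

sqs = boto3.client("sqs")

QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:app-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptTheTwoFunctions",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:*",
        "Resource": QUEUE_ARN,
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn": [
                    "arn:aws:iam::111122223333:role/producer-fn-role",
                    "arn:aws:iam::111122223333:role/consumer-fn-role",
                ]
            }
        },
    }],
}

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/app-queue",
    Attributes={"Policy": json.dumps(policy)},
)
```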

Question # 60

A company is developing a containerized web application that needs to be highly available and scalable. The application requires access to GPU resources.

Which solution will meet these requirements?

A.

Package the application as an AWS Lambda function in a container image. Use Lambda to run the containerized application on a runtime with GPU access.

B.

Deploy the application container to Amazon Elastic Kubernetes Service (Amazon EKS). Use AWS Fargate to manage compute resources and access to GPU resources.

C.

Deploy the application container to Amazon Elastic Container Registry (Amazon ECR). Use Amazon ECR to run the containerized application with an attached GPU.

D.

Run the application on Amazon EC2 instances from a GPU instance family by using Amazon Elastic Container Service (Amazon ECS) for orchestration.

Question # 61

A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to portions of the data that contain sensitive information.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create an IAM role that includes permissions to access Lake Formation tables.

B.

Create data filters to implement row-level security and cell-level security.

C.

Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.

D.

Create an AWS Lambda function that periodically queries and removes sensitive information from Lake Formation tables.

Question # 62

A company uses an Amazon DynamoDB table to store data that the company receives from devices. The DynamoDB table supports a customer-facing website to display recent activity on customer devices. The company configured the table with provisioned throughput for writes and reads.

The company wants to calculate performance metrics for customer device data on a daily basis. The solution must have minimal effect on the table's provisioned read and write capacity.

Which solution will meet these requirements?

A.

Use an Amazon Athena SQL query with the Amazon Athena DynamoDB connector to calculate performance metrics on a recurring schedule.

B.

Use an AWS Glue job with the AWS Glue DynamoDB export connector to calculate performance metrics on a recurring schedule.

C.

Use an Amazon Redshift COPY command to calculate performance metrics on a recurring schedule.

D.

Use an Amazon EMR job with an Apache Hive external table to calculate performance metrics on a recurring schedule.
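Option B's advantage is that the export connector reads from a point-in-time export instead of scanning the table, so it consumes no read capacity. A sketch of the read side of such a Glue (PySpark) job; the table ARN, staging bucket, and prefix are placeholders:

```python
# Runs inside an AWS Glue job, not as a standalone script.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# The export connector triggers a DynamoDB export to S3 and reads that,
# leaving the table's provisioned read capacity untouched.
frame = glue_context.create_dynamic_frame.from_options(
    connection_type="dynamodb",
    connection_options={
        "dynamodb.export": "ddb",
        "dynamodb.tableArn": "arn:aws:dynamodb:us-east-1:111122223333:table/DeviceActivity",
        "dynamodb.s3.bucket": "ddb-export-staging",
        "dynamodb.s3.prefix": "exports/",
        "dynamodb.unnestDDBJson": True,
    },
)
# ... aggregate the daily performance metrics from `frame` ...
```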

Question # 63

A company has developed an API using an Amazon API Gateway REST API and AWS Lambda functions. The API serves static and dynamic content to users worldwide. The company wants to decrease the latency of transferring content for API requests.


A.

Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.

B.

Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.

C.

Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.

D.

Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.

Question # 64

A company deploys an application on Amazon EC2 Spot Instances. The company observes frequent unavailability issues that affect the application's output. The application instances all use the same instance type in a single Availability Zone. The application architecture does not require the use of any specific instance family.

The company needs a solution to improve the availability of the application.

Which combination of steps will meet this requirement MOST cost-effectively? (Select THREE.)

A.

Create an EC2 Auto Scaling group that includes a mix of Spot Instances and a base number of On-Demand Instances.

B.

Create EC2 Capacity Reservations.

C.

Use the lowest-price allocation strategy for Spot Instances.

D.

Specify similarly sized instance types and Availability Zones for the Spot Instances.

E.

Use a different instance type for the web application.

F.

Use the price-capacity-optimized allocation strategy for Spot Instances.

Question # 65

A solutions architect is investigating compute options for a critical analytics application. The application uses long-running processes to prepare and aggregate data. The processes cannot be interrupted. The application has a known baseline load. The application needs to handle occasional usage surges.

Which solution will meet these requirements MOST cost-effectively?

A.

Create an Amazon EC2 Auto Scaling group. Set the Min capacity and Desired capacity parameters to the number of instances required to handle the baseline load. Purchase Reserved Instances for the Auto Scaling group.

B.

Create an Amazon EC2 Auto Scaling group. Set the Min capacity, Max capacity, and Desired capacity parameters to the number of instances required to handle the baseline load. Use On-Demand Instances to address occasional usage surges.

C.

Create an Amazon EC2 Auto Scaling group. Set the Min capacity and Desired capacity parameters to the number of instances required to handle the baseline load. Purchase Reserved Instances for the Auto Scaling group. Use the OnDemandPercentageAboveBaseCapacity parameter to configure the launch template to launch Spot Instances.

D.

Re-architect the application to use AWS Lambda functions instead of Amazon EC2 instances. Purchase a one-year Compute Savings Plan to reduce the cost of Lambda usage.

Question # 66

A company is redesigning a static website. The company needs a solution to host the new website in the company's AWS account. The solution must be secure and scalable.

Which combination of solutions will meet these requirements? (Select THREE.)

A.

Configure an Amazon CloudFront distribution. Set the Amazon S3 bucket as the origin.

B.

Associate an AWS Certificate Manager (ACM) TLS certificate to the Amazon CloudFront distribution.

C.

Enable static website hosting for the Amazon S3 bucket.

D.

Create an Amazon S3 bucket to store the static website content.

E.

Export the website's SSL/TLS certificate from AWS Certificate Manager (ACM) to the root of the Amazon S3 bucket.

F.

Turn off Block Public Access for the Amazon S3 bucket.

Question # 67

A developer is creating an ecommerce workflow in an AWS Step Functions state machine that includes an HTTP Task state. The task passes shipping information and order details to an endpoint.

The developer needs to test the workflow to confirm that the HTTP headers and body are correct and that the responses meet expectations.

Which solution will meet these requirements?

A.

Use the TestState API to invoke only the HTTP Task. Set the inspection level to TRACE.

B.

Use the TestState API to invoke the state machine. Set the inspection level to DEBUG.

C.

Use the data flow simulator to invoke only the HTTP Task. View the request and response data.

D.

Change the log level of the state machine to ALL. Run the state machine.

Question # 68

A company is redesigning its data intake process. In the existing process, the company receives data transfers and uploads the data to an Amazon S3 bucket every night. The company uses AWS Glue crawlers and jobs to prepare the data for a machine learning (ML) workflow.

The company needs a low-code solution to run multiple AWS Glue jobs in sequence and provide a visual workflow.

Which solution will meet these requirements?

A.

Use an Amazon EC2 instance to run a cron job and a script to check for the S3 files and call the AWS Glue jobs. Create an Amazon CloudWatch dashboard to visualize the workflow.

B.

Use Amazon EventBridge to call an AWS Step Functions workflow for the AWS Glue jobs. Use Step Functions to create a visual workflow.

C.

Use S3 Event Notifications to invoke a series of AWS Lambda functions and AWS Glue jobs in sequence. Use Amazon QuickSight to create a visual workflow.

D.

Create an Amazon Elastic Container Service (Amazon ECS) task that contains a Python script that manages the AWS Glue jobs and creates a visual workflow. Use Amazon EventBridge Scheduler to start the ECS task.

Question # 69

A solutions architect is creating a data reporting application that will send traffic through third-party network firewalls in an AWS security account. The firewalls and application servers must be load balanced.

The application uses TCP connections to generate reports. The reports can run for several hours and can be idle for up to 1 hour. The reports must not time out during an idle period.

Which solution will meet these requirements?

A.

Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout period to 1 hour.

B.

Use a single firewall in the security account. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout and firewall idle timeout periods to 1 hour.

C.

Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the idle timeout periods for the ALB, the GWLB, and the firewalls to 1 hour.

D.

Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Configure the ALB idle timeout period to 1 hour. Increase the application server capacity to finish the report generation faster.

Question # 70

A solutions architect must design a database solution for a high-traffic ecommerce web application. The database stores customer profiles and shopping cart information. The database must support a peak load of several million requests each second and deliver responses in milliseconds. The operational overhead for managing and scaling the database must be minimized.

Which database solution should the solutions architect recommend?

A.

Amazon Aurora

B.

Amazon DynamoDB

C.

Amazon RDS

D.

Amazon Redshift

Question # 71

A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.

The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application's availability without writing custom scripts or code.

What should a solutions architect do to meet these requirements?

A.

Enable HTTP health checks on the NLB, supplying the URL of the company's application.

B.

Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will restart.

C.

Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application. Configure an Auto Scaling action to replace unhealthy instances.

D.

Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.

Question # 72

A company recently migrated a large amount of research data to an Amazon S3 bucket. The company needs an automated solution to identify sensitive data in the bucket. A security team also needs to monitor access patterns for the data 24 hours a day, 7 days a week to identify suspicious activities or evidence of tampering with security controls.

A.

Set up AWS CloudTrail reporting, and grant the security team read-only access to the CloudTrail reports. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.

B.

Enable Amazon Macie and Amazon GuardDuty on the account. Grant the security team access to Macie and GuardDuty. Review the findings with the security team.

C.

Set up an Amazon S3 Inventory report. Use Amazon Athena and Amazon QuickSight to identify sensitive data. Create a dashboard for the security team to review findings.

D.

Use AWS Identity and Access Management (IAM) Access Advisor to monitor for suspicious activity and tampering. Create a dashboard for the security team. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.

Question # 73

A company receives data transfers from a small number of external clients that use SFTP software on an Amazon EC2 instance. The clients use an SFTP client to upload data. The clients use SSH keys for authentication. Every hour, an automated script transfers new uploads to an Amazon S3 bucket for processing.

The company wants to move the transfer process to an AWS managed service and to reduce the time required to start data processing. The company wants to retain the existing user management and SSH key generation process. The solution must not require clients to make significant changes to their existing processes.

Which solution will meet these requirements?

A.

Reconfigure the script that runs on the EC2 instance to run every 15 minutes. Create an S3 Event Notifications rule for all new object creation events. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination.

B.

Create an AWS Transfer Family SFTP server that uses the existing S3 bucket as a target. Use service-managed users to enable authentication.

C.

Require clients to add the AWS DataSync agent into their local environments. Create an IAM user for each client that has permission to upload data to the target S3 bucket.

D.

Create an AWS Transfer Family SFTP connector that has permission to access the target S3 bucket for each client. Store credentials in AWS Systems Manager. Create an IAM role to allow the SFTP connector to securely use the credentials.

Question # 74

A company hosts an application that processes highly sensitive customer transactions on AWS. The application uses Amazon RDS as its database. The company manages its own encryption keys to secure the data in Amazon RDS.

The company needs to update the customer-managed encryption keys at least once each year.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Set up automatic key rotation in AWS Key Management Service (AWS KMS) for the encryption keys.

B.

Configure AWS Key Management Service (AWS KMS) to alert the company to rotate the encryption keys annually.

C.

Schedule an AWS Lambda function to rotate the encryption keys annually.

D.

Create an AWS CloudFormation stack to run an AWS Lambda function that deploys new encryption keys once each year.

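A minimal sketch of option A with boto3, assuming an existing customer managed key (the key ID is a placeholder):

    import boto3

    kms = boto3.client("kms")
    key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

    # Turn on automatic annual rotation for the customer managed key.
    kms.enable_key_rotation(KeyId=key_id)

    # Confirm that rotation is enabled.
    status = kms.get_key_rotation_status(KeyId=key_id)
    print(status["KeyRotationEnabled"])  # True

Because rotation changes only the backing key material, the key ID and ARN stay the same, so the Amazon RDS configuration needs no update.
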
Question # 75

A company is launching a new application that requires a structured database to store user profiles, application settings, and transactional data. The database must be scalable with application traffic and must offer backups.

Which solution will meet these requirements MOST cost-effectively?

A.

Deploy a self-managed database on Amazon EC2 instances by using open-source software. Use Spot Instances for cost optimization. Configure automated backups to Amazon S3.

B.

Use Amazon RDS. Use on-demand capacity mode for the database with General Purpose SSD storage. Configure automatic backups with a retention period of 7 days.

C.

Use Amazon Aurora Serverless for the database. Use serverless capacity scaling. Configure automated backups to Amazon S3.

D.

Deploy a self-managed NoSQL database on Amazon EC2 instances. Use Reserved Instances for cost optimization. Configure automated backups directly to Amazon S3 Glacier Flexible Retrieval.

Question # 76

A company is designing a serverless application to process a large number of events within an AWS account. The application saves the events to a data warehouse for further analysis. The application sends incoming events to an Amazon SQS queue. Traffic between the application and the SQS queue must not use public IP addresses.

Which solution will meet this requirement?

A.

Create a VPC endpoint for Amazon SQS. Set the queue policy to deny all access except from the VPC endpoint.

B.

Configure server-side encryption with SQS-managed keys (SSE-SQS).

C.

Configure AWS Security Token Service (AWS STS) to generate temporary credentials for resources that access the queue.

D.

Configure VPC Flow Logs to detect SQS traffic that leaves the VPC.

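A minimal sketch of option A with boto3; the VPC ID, subnet ID, Region, account ID, and queue name are all placeholders:

    import boto3
    import json

    ec2 = boto3.client("ec2")
    sqs = boto3.client("sqs")

    # Interface VPC endpoint so SQS traffic stays on private IP addresses.
    endpoint_id = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.sqs",
        SubnetIds=["subnet-0abc1234"],
        PrivateDnsEnabled=True,
    )["VpcEndpoint"]["VpcEndpointId"]

    # Queue policy that denies any request that does not arrive through the endpoint.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "sqs:*",
            "Resource": "arn:aws:sqs:us-east-1:111122223333:events-queue",
            "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint_id}},
        }],
    }
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/events-queue",
        Attributes={"Policy": json.dumps(policy)},
    )
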
Question # 77

A company is building a solution to provide customers with an API that accesses financial data. The API backend needs to compute tax data for each request. The company anticipates greater demand to access the data during the last 3 months of each year.

A solutions architect needs to design a scalable solution that can meet the regular demand and the peak demand at the end of each year.

Which solution will meet these requirements?

A.

Host the API on an Amazon EC2 instance that runs third-party software. Configure the EC2 instance to perform tax computations.

B.

Deploy an Amazon API Gateway REST API. Create an AWS Lambda function to perform tax computations. Integrate the Lambda function with the REST API.

C.

Create an Application Load Balancer (ALB) in front of two Amazon EC2 instances. Configure the EC2 instances to perform tax computations.

D.

Deploy an Amazon API Gateway REST API. Configure an Amazon EC2 instance to perform tax computations. Integrate the EC2 instance with the REST API.

Question # 78

A company hosts its application on several Amazon EC2 instances inside a VPC. The company creates a dedicated Amazon S3 bucket for each customer to store their relevant information in Amazon S3.

The company wants to ensure that the application running on EC2 instances can securely access only the S3 buckets that belong to the company's AWS account.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy to provide access to only the specific buckets that the application needs.

B.

Create a NAT gateway in a public subnet with a security group that allows access to only Amazon S3. Update the route tables to use the NAT gateway.

C.

Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy with a Deny action and the following condition key:

D.

Create a NAT gateway in a public subnet. Update the route tables to use the NAT gateway. Assign bucket policies for all buckets with a Deny action and the following condition key:

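For reference, a minimal sketch of option A's gateway endpoint and a scoped instance profile policy with boto3 (the VPC ID, route table ID, Region, and bucket naming pattern are placeholders; the truncated condition keys in options C and D are left as they appear above):

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoint that routes S3 traffic through the VPC route table
    # instead of over the public internet.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0abc1234"],
    )

    # Policy to attach to the instance profile role: access to the
    # company's customer buckets only.
    instance_profile_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::customer-bucket-*",
                "arn:aws:s3:::customer-bucket-*/*",
            ],
        }],
    }
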
Question # 79

A media company stores customer-uploaded videos in an Amazon S3 bucket with the Standard storage class. The company wants to create an S3 Lifecycle configuration. The company will set the maximum retention time to 7 days. However, the configuration must delete any video that is more than 1 TB in size after 48 hours.

Which solution will meet these requirements?

A.

Create a single S3 Lifecycle configuration that has two rules. Configure the first rule to expire objects after 48 hours with a filter of ObjectSizeGreaterThan and a value of 1 TB. Configure the second rule to expire objects after 7 days.

B.

Create two S3 Lifecycle configurations. Include a rule in the first configuration to expire objects after 48 hours by using a Prefix filter of LargeFiles. Include a rule in the second configuration to expire objects after 7 days.

C.

Create a single S3 Lifecycle configuration that has two rules. Configure the first rule to expire objects after 48 hours. Configure the second rule to expire objects after 7 days.

D.

Create two S3 Lifecycle configurations. Include a rule in the first configuration to expire objects after 48 hours. Include a rule in the second configuration to expire objects after 7 days by using a filter of ObjectSizeLessThan and a value of 1 TB.

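A minimal sketch of option A with boto3 (the bucket name is a placeholder). S3 Lifecycle expiration is expressed in whole days, so 48 hours becomes 2 days, and 1 TB is given in bytes:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="customer-videos",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-large-objects-48h",
                    "Status": "Enabled",
                    # Applies only to objects larger than 1 TB (2^40 bytes).
                    "Filter": {"ObjectSizeGreaterThan": 1099511627776},
                    "Expiration": {"Days": 2},
                },
                {
                    "ID": "expire-all-7d",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # matches every object
                    "Expiration": {"Days": 7},
                },
            ]
        },
    )
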
Question # 80

A company runs a Node.js function on a server in its on-premises data center. The data center stores data in a PostgreSQL database. The company stores the credentials in a connection string in an environment variable on the server. The company wants to migrate its application to AWS and to replace the Node.js application server with AWS Lambda. The company also wants to migrate to Amazon RDS for PostgreSQL and to ensure that the database credentials are securely managed.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Store the database credentials as a parameter in AWS Systems Manager Parameter Store. Configure Parameter Store to automatically rotate the secrets every 30 days. Update the Lambda function to retrieve the credentials from the parameter.

B.

Store the database credentials as a secret in AWS Secrets Manager. Configure Secrets Manager to automatically rotate the credentials every 30 days. Update the Lambda function to retrieve the credentials from the secret.

C.

Store the database credentials as an encrypted Lambda environment variable. Write a custom Lambda function to rotate the credentials. Schedule the Lambda function to run every 30 days.

D.

Store the database credentials as a key in AWS Key Management Service (AWS KMS). Configure automatic rotation for the key. Update the Lambda function to retrieve the credentials from the KMS key.

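A minimal sketch of option B with boto3; the secret name and rotation function ARN are placeholders, and the rotation Lambda function is assumed to already exist:

    import boto3
    import json

    secrets = boto3.client("secretsmanager")

    # One-time setup: store the credentials and enable 30-day rotation.
    secrets.create_secret(
        Name="prod/app/postgres",
        SecretString=json.dumps({"username": "app", "password": "change-me"}),
    )
    secrets.rotate_secret(
        SecretId="prod/app/postgres",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-postgres",
        RotationRules={"AutomaticallyAfterDays": 30},
    )

    # Inside the application's Lambda function, fetch credentials at run time
    # instead of reading them from an environment variable.
    def get_db_credentials():
        value = secrets.get_secret_value(SecretId="prod/app/postgres")
        return json.loads(value["SecretString"])
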
Question # 81

A company hosts its order processing system on AWS. The architecture consists of a frontend and a backend. The frontend includes an Application Load Balancer (ALB) and Amazon EC2 instances in an Auto Scaling group. The backend includes an EC2 instance and an Amazon RDS for MySQL database.

To prevent incomplete or lost orders, the company wants to ensure that order states are always preserved. The company wants to ensure that every order will eventually be processed, even after an outage or pause. Every order must be processed exactly once.

Which solution will meet these requirements?

A.

Create an Auto Scaling group and an ALB for the backend. Create a read replica for the RDS database in a second Availability Zone. Update the backend RDS endpoint.

B.

Create an Auto Scaling group and an ALB for the backend. Create an Amazon RDS proxy in front of the RDS database. Update the backend EC2 instance to use the Amazon RDS proxy endpoint.

C.

Create an Auto Scaling group for the backend. Configure the backend EC2 instances to consume messages from an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure a dead-letter queue (DLQ) for the SQS queue.

D.

Create an AWS Lambda function to replace the backend EC2 instance. Subscribe the function to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the frontend to send orders to the SNS topic.

Question # 82

A company runs a website on Amazon EC2 instances in an Auto Scaling group and stores data in Amazon EFS. How can the company optimize costs?

A.

Reconfigure the Auto Scaling group to set a desired number of instances. Turn off scheduled scaling.

B.

Create a new launch template version that uses larger EC2 instances.

C.

Reconfigure the Auto Scaling group to use a target tracking scaling policy.

D.

Replace the EFS volume with instance store volumes.

Question # 83

A company runs a monolithic application in its on-premises data center. The company used Java/Tomcat to build the application. The application uses Microsoft SQL Server as a database.

The company wants to migrate the application to AWS.

Which solution will meet this requirement with the LEAST operational overhead?

A.

Use AWS App2Container to containerize the application. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Deploy the database to Amazon RDS for SQL Server. Configure a Multi-AZ deployment.

B.

Containerize the application and deploy the application on a self-managed Kubernetes cluster on an Amazon EC2 instance. Deploy the database on a separate EC2 instance. Set up Microsoft SQL Server Always On availability groups.

C.

Deploy the frontend of the web application as a website on Amazon S3. Use Amazon DynamoDB for the database tier.

D.

Use AWS App2Container to containerize the application. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon DynamoDB for the database tier.

Question # 84

A company is designing a new Amazon Elastic Kubernetes Service (Amazon EKS) deployment to host multi-tenant applications that use a single cluster. The company wants to ensure that each pod has its own hosted environment. The environments must not share CPU, memory, storage, or elastic network interfaces.

Which solution will meet these requirements?

A.

Use Amazon EC2 instances to host self-managed Kubernetes clusters. Use taints and tolerations to enforce isolation boundaries.

B.

Use Amazon EKS with AWS Fargate. Use Fargate to manage resources and to enforce isolation boundaries.

C.

Use Amazon EKS and self-managed node groups. Use taints and tolerations to enforce isolation boundaries.

D.

Use Amazon EKS and managed node groups. Use taints and tolerations to enforce isolation boundaries.

Question # 85

A digital image processing company wants to migrate its on-premises monolithic application to the AWS Cloud. The company processes thousands of images and generates large files as part of the processing workflow.

The company needs a solution to manage the growing number of image processing jobs. The solution must also reduce the manual tasks in the image processing workflow. The company does not want to manage the underlying infrastructure of the solution.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 Spot Instances to process the images. Configure Amazon Simple Queue Service (Amazon SQS) to orchestrate the workflow. Store the processed files in Amazon Elastic File System (Amazon EFS).

B.

Use AWS Batch jobs to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon S3 bucket.

C.

Use AWS Lambda functions and Amazon EC2 Spot Instances to process the images. Store the processed files in Amazon FSx.

D.

Deploy a group of Amazon EC2 instances to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon Elastic Block Store (Amazon EBS) volume.

Question # 86

A company has separate AWS accounts for its finance, data analytics, and development departments. Because of costs and security concerns, the company wants to control which services each AWS account can use.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use AWS Systems Manager templates to control which AWS services each department can use.

B.

Create organizational units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs) to the OUs.

C.

Use AWS CloudFormation to automatically provision only the AWS services that each department can use.

D.

Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the usage of specific AWS services.

Question # 87

A company runs an application on Amazon EC2 instances. The application needs to access an Amazon RDS database. The company wants to grant the EC2 instances access permissions to the RDS database while following the principle of least privilege.

Which solution will meet these requirements?

A.

Create an IAM user that has a policy that grants administrative permissions. Use the IAM user's access keys on the EC2 instances to access the RDS database.

B.

Create an IAM user that has a policy that grants the minimum required permissions to access the RDS database. Embed the IAM user's access keys on the EC2 instances to access the RDS database.

C.

Create an IAM role that has a policy that grants the minimum required permissions to access the RDS database. Attach the IAM role access key and the IAM role secret key to the EC2 instance profile.

D.

Create an IAM role that has a policy that grants the minimum required permissions to access the RDS database. Attach the IAM role to an EC2 instance profile. Associate the instance profile with the instances.

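A minimal sketch of option D with boto3, assuming the application connects with IAM database authentication (the account ID, DB resource ID, and all names are placeholders):

    import boto3
    import json

    iam = boto3.client("iam")

    # Role that EC2 instances can assume.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="app-rds-access",
                    AssumeRolePolicyDocument=json.dumps(trust))

    # Least-privilege permission: connect as one database user only.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:us-east-1:111122223333:dbuser:db-ABCDEFGHIJKL/app_user",
        }],
    }
    iam.put_role_policy(RoleName="app-rds-access", PolicyName="rds-connect",
                        PolicyDocument=json.dumps(policy))

    # Instance profile that associates the role with EC2 instances.
    iam.create_instance_profile(InstanceProfileName="app-rds-access")
    iam.add_role_to_instance_profile(InstanceProfileName="app-rds-access",
                                     RoleName="app-rds-access")
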
Question # 88

A company is building a serverless application to process orders from an ecommerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.

Which solution will meet these requirements?

A.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.

B.

Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.

C.

Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.

D.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.

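A minimal sketch of option B with boto3 (queue and function names are placeholders). The FIFO queue preserves arrival order, and content-based deduplication guards against duplicate submissions; the function's execution role is assumed to already have SQS read permissions:

    import boto3

    sqs = boto3.client("sqs")
    lambda_client = boto3.client("lambda")

    queue_url = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )["QueueUrl"]

    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Lambda polls the queue and processes orders in order.
    lambda_client.create_event_source_mapping(
        EventSourceArn=queue_arn,
        FunctionName="process-orders",
        BatchSize=10,
    )
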
Question # 89

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures. The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

A.

Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.

B.

Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.

C.

Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.

D.

Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question # 90

A company needs to archive an on-premises relational database. The company wants to retain the data. The company needs to be able to run SQL queries on the archived data to create annual reports.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises database to an Amazon RDS instance. Retire the on-premises database. Maintain the RDS instance in a stopped state until the data is needed for reports.

B.

Set up database replication from the on-premises database to an Amazon EC2 instance. Retire the on-premises database. Make a snapshot of the EC2 instance. Maintain the EC2 instance in a stopped state until the data is needed for reports.

C.

Create a database backup on premises. Use AWS DataSync to transfer the data to Amazon S3. Create an S3 Lifecycle configuration to move the data to S3 Glacier Deep Archive. Restore the backup to Amazon EC2 instances to run reports.

D.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises databases to Amazon S3 in Apache Parquet format. Store the data in S3 Glacier Flexible Retrieval. Use Amazon Athena to run reports.

Question # 91

A security team needs to enforce rotation of all IAM users' access keys every 90 days. Keys older than 90 days must be automatically deactivated and removed. A solutions architect must create a remediation solution with minimal operational effort.

Which solution meets these requirements?

A.

Create an AWS Config rule to check key age. Configure the rule to run an AWS Batch job to remove the key.

B.

Create an Amazon EventBridge rule to check key age. Configure it to run an AWS Batch job to remove the key.

C.

Create an AWS Config rule to check key age. Define an EventBridge rule that schedules an AWS Lambda function to remove the key.

D.

Create an EventBridge rule to check key age. Define a second EventBridge rule to run an AWS Batch job to remove the key.

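A minimal sketch of the remediation Lambda function in option C. It paginates through all IAM users, then deactivates and deletes any access key older than 90 days:

    import boto3
    from datetime import datetime, timezone

    MAX_AGE_DAYS = 90
    iam = boto3.client("iam")

    def handler(event, context):
        for page in iam.get_paginator("list_users").paginate():
            for user in page["Users"]:
                name = user["UserName"]
                for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
                    age = datetime.now(timezone.utc) - key["CreateDate"]
                    if age.days > MAX_AGE_DAYS:
                        # Deactivate first, then remove the key.
                        iam.update_access_key(UserName=name,
                                              AccessKeyId=key["AccessKeyId"],
                                              Status="Inactive")
                        iam.delete_access_key(UserName=name,
                                              AccessKeyId=key["AccessKeyId"])
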
Question # 92

A company has a production Amazon RDS for MySQL database. The company needs to create a new application that will read frequently changing data from the database with minimal impact on the database's overall performance. The application will rarely perform the same query more than once.

What should a solutions architect do to meet these requirements?

A.

Set up an Amazon ElastiCache cluster. Query the results in the cluster.

B.

Set up an Application Load Balancer (ALB). Query the results in the ALB.

C.

Set up a read replica for the database. Query the read replica.

D.

Set up querying of database snapshots. Query the database snapshots.

Question # 93

A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application's ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.

Which solution will meet these requirements?

A.

Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.

B.

Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.

C.

Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.

D.

Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.

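A minimal sketch of option A with boto3: Application Auto Scaling tracks utilization of provisioned concurrency on a function alias, scaling it up for spikes and back down in quiet periods (the function and alias names are placeholders):

    import boto3

    autoscaling = boto3.client("application-autoscaling")
    resource_id = "function:checkout-handler:live"  # function:NAME:ALIAS

    autoscaling.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId=resource_id,
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        MinCapacity=1,     # keep a small warm pool when traffic is low
        MaxCapacity=100,   # ceiling for traffic spikes
    )
    autoscaling.put_scaling_policy(
        PolicyName="provisioned-concurrency-tracking",
        ServiceNamespace="lambda",
        ResourceId=resource_id,
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 0.7,  # scale when 70% of provisioned capacity is in use
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
            },
        },
    )
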
Question # 94

A home security company is expanding its business globally. The company needs to encrypt customer data. The company does not want to manage its own keys. The company needs the keys to be usable in multiple AWS Regions and needs to control access to the keys.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use AWS Key Management Service (AWS KMS) to create multi-Region keys. Apply tags to identify each key. Use attribute-based access control (ABAC) condition keys to control access to the keys.

B.

Use AWS Key Management Service (AWS KMS) to create multiple keys by importing key material. Apply tags to identify each key. Use attribute-based access control (ABAC) condition keys to control access to the keys.

C.

Use AWS CloudHSM to create a CloudHSM cluster in the company's primary Region. Synchronize the CloudHSM cluster to additional Regions by using the CloudHSM Management Utility (CMU).

D.

Use AWS CloudHSM to create users. Use the CloudHSM Management Utility (CMU) to share keys with the users. Use the shareKey command to share or unshare the key with additional users in each Region.

Question # 95

A company has an on-premises SFTP file transfer solution. The company is migrating to the AWS Cloud to scale the file transfer solution and to optimize costs by using Amazon S3. The company's employees will use their credentials for the on-premises Microsoft Active Directory (AD) to access the new solution. The company wants to keep the current authentication and file access mechanisms.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Configure an S3 File Gateway. Create SMB file shares on the file gateway that use the existing Active Directory to authenticate.

B.

Configure an Auto Scaling group with Amazon EC2 instances to run an SFTP solution. Configure the group to scale up at 60% CPU utilization.

C.

Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service option as the identity provider. Use AD Connector to connect the on-premises Active Directory.

D.

Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory Service option as the identity provider to connect to the existing Active Directory.

Question # 96

A company uses an organization in AWS Organizations to manage a multi-account landing zone. The company requires all users who access AWS accounts in the organization to use a centralized identity system that follows the principle of least privilege for operational tasks. The company currently uses an external identity provider (IdP).

Which combination of solutions will meet these requirements? (Select TWO.)

A.

Use AWS Identity and Access Management (IAM) to create IAM users and IAM user groups in each AWS account.

B.

Create permission sets in AWS IAM Identity Center. Assign the appropriate permission sets to the IAM users and IAM user groups in the accounts.

C.

Assign each IAM user to an IAM role by using an inline IAM policy based on operational duties. Assign each role to the appropriate AWS account in the organization.

D.

Configure a SAML identity provider in AWS Identity and Access Management (IAM) in each AWS account to establish a trust relationship with the company's external IdP.

E.

Enable AWS IAM Identity Center in the organization management account. Create user accounts and user groups.

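A minimal sketch of the permission-set half of options B and E with boto3 (the IAM Identity Center instance ARN and names are placeholders):

    import boto3

    sso_admin = boto3.client("sso-admin")
    instance_arn = "arn:aws:sso:::instance/ssoins-1234567890abcdef"

    # Permission set scoped to an operational duty.
    permission_set_arn = sso_admin.create_permission_set(
        InstanceArn=instance_arn,
        Name="ops-read-only",
        SessionDuration="PT8H",
    )["PermissionSet"]["PermissionSetArn"]

    # Attach a least-privilege managed policy to the permission set.
    sso_admin.attach_managed_policy_to_permission_set(
        InstanceArn=instance_arn,
        PermissionSetArn=permission_set_arn,
        ManagedPolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )
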
Question # 97

A company is developing a microservices-based application to manage the company's delivery operations. The application consists of microservices that process orders, manage a fleet of delivery vehicles, and optimize delivery routes.

The microservices must be able to scale independently and must be able to handle bursts of traffic without any data loss.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use Amazon API Gateway REST APIs to establish communication between microservices. Deploy the application on Amazon EC2 instances in Auto Scaling groups.

B.

Use Amazon Simple Queue Service (Amazon SQS) to establish communication between microservices. Deploy the application on Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate.

C.

Use WebSocket-based communication between microservices. Deploy the application on Amazon EC2 instances in Auto Scaling groups.

D.

Use Amazon Simple Notification Service (Amazon SNS) to establish communication between microservices. Deploy the application on Amazon Elastic Container Service (Amazon ECS) containers on Amazon EC2 instances.

Question # 98

An internal product team is deploying a new application to a private VPC in a company's AWS account. The application runs on Amazon EC2 instances that are in a security group named App1. The EC2 instances store application data in an Amazon S3 bucket and use AWS Secrets Manager to store application service credentials. The company's security policy prohibits applications in a private VPC from using public IP addresses to communicate.

Which combination of solutions will meet these requirements? (Select TWO.)

A.

Configure gateway endpoints for Amazon S3 and AWS Secrets Manager.

B.

Configure interface VPC endpoints for Amazon S3 and AWS Secrets Manager.

C.

Add routes to the endpoints in the VPC route table.

D.

Associate the App1 security group with the interface VPC endpoints. Configure a self-referencing security group rule to allow inbound traffic.

E.

Associate the App1 security group with the gateway endpoints. Configure a self-referencing security group rule to allow inbound traffic.

Question # 99

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately without traveling across the internet. The company has no existing dedicated connectivity to AWS.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

A.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.

B.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.

C.

Create an Amazon S3 interface endpoint in the networking account.

D.

Create an Amazon S3 gateway endpoint in the networking account.

E.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the network account.

Question # 100

A company hosts an application on Amazon EC2 instances that are part of a target group behind an Application Load Balancer (ALB). The company has attached a security group to the ALB.

During a recent review of application logs, the company found many unauthorized login attempts from IP addresses that belong to countries outside the company's normal user base. The company wants to allow traffic only from the United States and Australia.

Which solution will meet these requirements?

A.

Edit the default network ACL to block IP addresses from outside of the allowed countries.

B.

Create a geographic match rule in AWS WAF. Attach the rule to the ALB.

C.

Configure the ALB security group to allow the IP addresses of company employees. Edit the default network ACL to block IP addresses from outside of the allowed countries.

D.

Use a host-based firewall on the EC2 instances to block IP addresses from outside of the allowed countries. Configure the ALB security group to allow the IP addresses of company employees.

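A minimal sketch of option B with boto3. The web ACL blocks by default and allows only requests that originate in the US or Australia; the names, metric names, and ALB ARN are placeholders, and the Scope must be REGIONAL for an ALB:

    import boto3

    wafv2 = boto3.client("wafv2")

    web_acl = wafv2.create_web_acl(
        Name="allow-us-au-only",
        Scope="REGIONAL",
        DefaultAction={"Block": {}},
        Rules=[{
            "Name": "allow-us-au",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US", "AU"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": "AllowUsAu"},
        }],
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "AllowUsAuOnly"},
    )["Summary"]

    # Attach the web ACL to the load balancer.
    wafv2.associate_web_acl(
        WebACLArn=web_acl["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "loadbalancer/app/my-alb/1234567890abcdef",
    )
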
Question # 101

An international company needs to share data from an Amazon S3 bucket to employees who are located around the world. The company needs a secure solution to provide employees with access to the S3 bucket. The employees are already enrolled in AWS IAM Identity Center.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create a help desk application to generate an Amazon S3 presigned URL for each employee. Configure the presigned URLs to have short expirations. Instruct employees to contact the company help desk to receive a presigned URL to access the S3 bucket.

B.

Create a group for Amazon S3 access in IAM Identity Center. Add the employees who require access to the S3 bucket to the group. Create an IAM policy to allow Amazon S3 access from the group. Instruct employees to use the AWS access portal to access the AWS Management Console and navigate to the S3 bucket.

C.

Create an Amazon S3 File Gateway. Create one share for data uploads and a second share for data downloads. Set up an SFTP service on an Amazon EC2 instance. Mount the shares to the EC2 instance. Instruct employees to use the SFTP server.

D.

Configure AWS Transfer Family SFTP endpoints. Select the custom identity provider option. Use AWS Secrets Manager to manage the user credentials. Instruct employees to use Transfer Family SFTP.

Question # 102

A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website serves only a small amount of traffic. The company is concerned about the reliability of the instance and needs to migrate to a highly available architecture. The company cannot modify the application code.

Which combination of actions should a solutions architect take to achieve high availability for the website? (Select TWO.)

A.

Provision an internet gateway in each Availability Zone in use.

B.

Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.

C.

Migrate the database to Amazon DynamoDB and enable DynamoDB auto scaling.

D.

Use AWS DataSync to synchronize the database data across multiple EC2 instances.

E.

Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are distributed across two Availability Zones.

Question # 103

A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.

Which solution will meet these requirements?

A.

Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.

B.

Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.

C.

Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.

D.

Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.

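A minimal sketch of option B with boto3 (the cluster name, role ARN, and subnet IDs are placeholders). The encryptionConfig block tells Amazon EKS to envelope-encrypt Kubernetes secrets in etcd with the new KMS key:

    import boto3

    kms = boto3.client("kms")
    eks = boto3.client("eks")

    key_arn = kms.create_key(
        Description="EKS etcd secrets encryption"
    )["KeyMetadata"]["Arn"]

    eks.create_cluster(
        name="workloads",
        roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
        resourcesVpcConfig={"subnetIds": ["subnet-0abc1234", "subnet-0def5678"]},
        encryptionConfig=[{
            "resources": ["secrets"],
            "provider": {"keyArn": key_arn},
        }],
    )
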
Question # 104

A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon Elastic File System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes the company's files, and the index is stored in an Amazon RDS database.

The company needs to optimize storage costs with some application and services changes.

Which solution will meet these requirements MOST cost-effectively?

A.

Create an Amazon S3 bucket that uses an Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use Amazon S3 API to store and retrieve files.

B.

Deploy Amazon FSx for Windows File Server file shares. Update the application to use CIFS protocol to store and retrieve files.

C.

Deploy Amazon FSx for OpenZFS file system shares. Update the application to use the new mount point to store and retrieve files.

D.

Create an Amazon S3 bucket that uses S3 Glacier Flexible Retrieval. Copy all files to the S3 bucket. Update the application to use Amazon S3 API to store and retrieve files as standard retrievals.

Question # 105

A solutions architect is designing the architecture for a web application that has a frontend and a backend. The backend services must receive data from the frontend services for processing. The frontend must manage access to the application by using API keys. The backend must scale without affecting the frontend.

Which solution will meet these requirements?

A.

Deploy an Amazon API Gateway HTTP API as the frontend to direct traffic to an Amazon Simple Queue Service (Amazon SQS) queue. Use AWS Lambda functions as the backend to read from the queue.

B.

Deploy an Amazon API Gateway REST API as the frontend to direct traffic to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate as the backend to read from the queue.

C.

Deploy an Amazon API Gateway REST API as the frontend to direct traffic to an Amazon Simple Notification Service (Amazon SNS) topic. Use AWS Lambda functions as the backend. Subscribe the Lambda functions to the topic.

D.

Deploy an Amazon API Gateway HTTP API as the frontend to direct traffic to an Amazon Simple Notification Service (Amazon SNS) topic. Use Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Fargate as the backend. Subscribe Amazon EKS to the topic.

Question # 106

A financial services company must retain log data for 1 year. The company stores log files in an Amazon S3 bucket and wants to prevent any user from deleting or overwriting the log files during this period. The data must remain available for read-only requests.

Which solution will meet these requirements?

A.

Enable S3 Versioning on the bucket. Use Object Lock in compliance mode with a 1-year retention period.

B.

Enable S3 Transfer Acceleration on the bucket. Create an S3 Lifecycle Configuration rule to move objects to Amazon S3 Glacier Flexible Retrieval after 1 year.

C.

Enable S3 Versioning on the bucket. Create an S3 Lifecycle Configuration rule to move objects to Amazon S3 Glacier Flexible Retrieval after 1 year.

D.

Create an AWS Lambda function to programmatically check the timestamp of S3 data and to move the data to Amazon S3 Glacier Deep Archive if the data is older than 1 year.

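A minimal sketch of option A with boto3 (the bucket name is a placeholder). Object Lock must be enabled when the bucket is created, which also turns on versioning; compliance mode prevents every user, including the root user, from deleting or overwriting locked object versions during the retention period:

    import boto3

    s3 = boto3.client("s3")

    s3.create_bucket(Bucket="log-archive-example",
                     ObjectLockEnabledForBucket=True)

    # Default 1-year compliance retention for every new object version.
    s3.put_object_lock_configuration(
        Bucket="log-archive-example",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 1}},
        },
    )
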
Question # 107

A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect must design a solution to provide access to the accounts for several thousand employees. The company has an existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.

Which solution will meet these requirements?

A.

Create IAM users for the employees in the required AWS accounts. Connect IAM users to the existing IdP. Configure federated authentication for the IAM users.

B.

Set up AWS account root users with user email addresses and passwords that are synchronized from the existing IdP.

C.

Configure AWS IAM Identity Center. Connect IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.

D.

Use AWS Resource Access Manager (AWS RAM) to share access to the AWS accounts with the users in the existing IdP.

Question # 108

A multinational company operates in multiple AWS Regions. The company must ensure that its developers and administrators have secure, role-based access to AWS resources.

The roles must be specific to each user's geographic location and job responsibilities.

The company wants to implement a solution to ensure that each team can access only resources within the team's Region. The company wants to use its existing directory service to manage user access. The existing directory service organizes users into roles based on location. The system must be capable of integrating seamlessly with multi-factor authentication (MFA).

Which solution will meet these requirements?

A.

Use AWS Security Token Service (AWS STS) to generate temporary access tokens. Integrate STS with the directory service. Assign Region-specific roles.

B.

Configure AWS IAM Identity Center with federated access. Integrate IAM Identity Center with the directory service to set up Region-specific IAM roles.

C.

Create IAM managed policies that restrict access by location. Apply policies based on group membership in the directory.

D.

Use custom Lambda functions to dynamically assign IAM policies based on login location and job function.

Question # 109

A company plans to deploy an application that uses an Amazon CloudFront distribution. The company will set an Application Load Balancer (ALB) as the origin for the distribution. The company wants to ensure that users access the ALB only through the CloudFront distribution. The company plans to deploy the solution in a new VPC.

Which solution will meet these requirements?

A.

Configure the network ACLs in the subnet where the ALB is deployed to allow inbound traffic only from the public IP addresses of the CloudFront edge locations.

B.

Create a VPC origin for the CloudFront distribution. Set the VPC origin Amazon Resource Name (ARN) to the ARN of the ALB.

C.

Create a security group that allows only inbound traffic from the public IP addresses of the CloudFront edge locations. Associate the security group with the ALB.

D.

Create a VPC origin for the CloudFront distribution. Configure an ALB rule. Set the source IP condition to allow traffic only from the public IP addresses of the CloudFront edge locations.

Question # 110

A solutions architect has an application container, an AWS Lambda function, and an Amazon Simple Queue Service (Amazon SQS) queue. The Lambda function uses the SQS queue as an event source. The Lambda function makes a call to a third-party machine learning (ML) API when the function is invoked. The response from the third-party API can take up to 60 seconds to return.

The Lambda function's timeout value is currently 65 seconds. The solutions architect has noticed that the Lambda function sometimes processes duplicate messages from the SQS queue.

What should the solutions architect do to ensure that the Lambda function does not process duplicate messages?

A.

Configure the Lambda function with a larger amount of memory.

B.

Configure an increase in the Lambda function's timeout value.

C.

Configure the SQS queue's delivery delay value to be greater than the maximum time it takes to call the third-party API.

D.

Configure the SQS queue's visibility timeout value to be greater than the maximum time it takes to call the third-party API.

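A minimal sketch of option D with boto3 (the queue URL is a placeholder). AWS guidance for SQS event sources is to set the visibility timeout to at least six times the function timeout, which also comfortably exceeds the 60-second API call:

    import boto3

    sqs = boto3.client("sqs")

    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/ml-requests",
        # 6 x the 65-second function timeout = 390 seconds.
        Attributes={"VisibilityTimeout": "390"},
    )
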
Question # 111

A company wants to migrate an application that uses a microservice architecture to AWS. The services currently run in Docker containers on premises. The application has an event-driven architecture that uses Apache Kafka. The company configured Kafka to use multiple queues to send and receive messages. Some messages must be processed by multiple services.

Which solution will meet these requirements with the LEAST management overhead?

A.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Deploy a Kafka cluster on EC2 instances to handle service-to-service communication.

B.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create multiple Amazon Simple Queue Service (Amazon SQS) queues to handle service-to-service communication.

C.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Deploy an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster to handle service-to-service communication.

D.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Use Amazon EventBridge to handle service-to-service communication.

Question # 112

A company's application receives requests from customers in JSON format. The company uses Amazon Simple Queue Service (Amazon SQS) to handle the requests.

After the application's most recent update, the company's customers reported that requests were being duplicated. A solutions architect discovers that the application is consuming messages from the SQS queue more than once.

What is the root cause of the issue?

A.

The visibility timeout is longer than the time it takes the application to process messages from the queue.

B.

The duplicated messages in the SQS queue contain unescaped Unicode characters.

C.

The message size exceeds the maximum of 256 KiB for each SQS message.

D.

The visibility timeout is shorter than the time it takes the application to process messages from the queue.

Question # 113

A company wants to create a payment processing application. The application must run when a payment record arrives in an existing Amazon S3 bucket. The application must process each payment record exactly once. The company wants to use an AWS Lambda function to process the payments.

Which solution will meet these requirements?

A.

Configure the existing S3 bucket to send object creation events to Amazon EventBridge. Configure EventBridge to route events to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to run when a new event arrives in the SQS queue.

B.

Configure the existing S3 bucket to send object creation events to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the Lambda function to run when a new event arrives in the SNS topic.

C.

Configure the existing S3 bucket to send object creation events to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda function to run when a new event arrives in the SQS queue.

D.

Configure the existing S3 bucket to send object creation events directly to the Lambda function. Configure the Lambda function to handle object creation events and to process the payments.

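A minimal sketch of option C's notification wiring with boto3 (the bucket name and queue ARN are placeholders; the queue policy is assumed to already allow Amazon S3 to send messages):

    import boto3

    s3 = boto3.client("s3")

    # Send every object-creation event from the existing bucket to the queue
    # that triggers the payments Lambda function.
    s3.put_bucket_notification_configuration(
        Bucket="payment-records",
        NotificationConfiguration={
            "QueueConfigurations": [{
                "QueueArn": "arn:aws:sqs:us-east-1:111122223333:payments",
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )
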
Question # 114

A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.

The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.

Which solution will meet these requirements MOST cost-effectively?

A.

Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation.

B.

Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.

C.

Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.

D.

Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.

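A minimal sketch of option C with boto3 (the bucket name is a placeholder): transition to S3 Standard-IA after the 30-day hot period, then expire after 4 years (1,460 days):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="generated-files",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "ia-then-delete",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # matches every object
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 1460},
            }]
        },
    )
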
Question # 115

A company is using microservices to build an ecommerce application on AWS. The company wants to preserve customer transaction information after customers submit orders. The company wants to store transaction data in an Amazon Aurora database. The company expects sales volumes to vary throughout each year.

Which solution will meet these requirements?

A.

Use an Amazon API Gateway REST API to invoke an AWS Lambda function to send transaction data to the Aurora database. Send transaction data to an Amazon Simple Queue Service (Amazon SQS) queue that has a dead-letter queue. Use a second Lambda function to read from the SQS queue and to update the Aurora database.

B.

Use an Amazon API Gateway HTTP API to send transaction data to an Application Load Balancer (ALB). Use the ALB to send the transaction data to Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use ECS tasks to store the data in the Aurora database.

C.

Use an Application Load Balancer (ALB) to route transaction data to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon EKS to send the data to the Aurora database.

D.

Use Amazon Data Firehose to send transaction data to Amazon S3. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to the Aurora database.

Question # 116

A developer creates a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The developer reviews the deployment and notices some suspicious traffic to the application. The traffic is malicious and is coming from a single public IP address. A solutions architect must block the public IP address.

Which solution will meet this requirement?

A.

Create a security group rule to deny all inbound traffic from the suspicious IP address. Associate the security group with the ALB.

B.

Implement Amazon Detective to monitor traffic and to block malicious activity from the internet. Configure Detective to integrate with the ALB.

C.

Implement AWS Resource Access Manager (AWS RAM) to manage traffic rules and to block malicious activity from the internet. Associate AWS RAM with the ALB.

D.

Add the malicious IP address to an IP set in AWS WAF. Create a web ACL. Include an IP set rule with the action set to BLOCK. Associate the web ACL with the ALB.

Question # 117

A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails).

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.

B.

Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.

C.

Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.

D.

Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
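
The shared-networking pattern in options A and B relies on AWS RAM subnet sharing. A minimal sketch, assuming placeholder subnet ARNs and a placeholder OU identifier:

    import boto3

    ram = boto3.client("ram")

    # Share two subnets from the central networking account with an OU
    # so that workload accounts can launch resources into them.
    ram.create_resource_share(
        name="shared-workload-subnets",
        resourceArns=[
            "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0aaa1111",
            "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0bbb2222",
        ],
        principals=[
            "arn:aws:organizations::111122223333:ou/o-example/ou-exampleid"
        ],
        allowExternalPrincipals=False,  # restrict sharing to the organization
    )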

Full Access
Question # 118

A company runs a web application in a single AWS Region. A solutions architect wants to ensure that the web application can continue to operate if the application becomes unavailable in the Region.

Which solution will meet this requirement?

A.

Deploy the application in multiple Regions. Use Amazon Route 53 DNS health checks to route traffic to a healthy Region.

B.

Deploy the application in multiple Availability Zones within a single Region. Use Amazon Route 53 DNS health checks to route traffic to healthy application resources.

C.

Deploy the application in multiple Regions. Use an Amazon Route 53 simple routing record to route traffic to a healthy Region.

D.

Deploy the application in multiple Availability Zones within a single Region. Use an Amazon Route 53 latency record in each Availability Zone to route traffic to a healthy Availability Zone.

Full Access
Question # 119

A company wants to migrate applications from its on-premises servers to AWS. As a first step, the company is modifying and migrating a non-critical application to a single Amazon EC2 instance. The application will store information in an Amazon S3 bucket. The company needs to follow security best practices when deploying the application on AWS.

Which approach should the company take to allow the application to interact with Amazon S3?

A.

Create an IAM role that has administrative access to AWS. Attach the role to the EC2 instance.

B.

Create an IAM user. Attach the AdministratorAccess policy. Copy the generated access key and secret key. Within the application code, use the access key and secret key along with the AWS SDK to communicate with Amazon S3.

C.

Create an IAM role that has the necessary access to Amazon S3. Attach the role to the EC2 instance.

D.

Create an IAM user. Attach a policy that provides the necessary access to Amazon S3. Copy the generated access key and secret key. Within the application code, use the access key and secret key along with the AWS SDK to communicate with Amazon S3.
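
The instance-role pattern (options A and C differ only in the permissions granted) can be sketched as follows; the role name, bucket ARN, and actions are hypothetical, scoped-down examples.

    import json

    import boto3

    iam = boto3.client("iam")

    # Trust policy that lets EC2 assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "ec2.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

    iam.create_role(
        RoleName="app-s3-access",  # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Grant only the S3 permissions the application needs, on one bucket.
    iam.put_role_policy(
        RoleName="app-s3-access",
        PolicyName="s3-least-privilege",
        PolicyDocument=json.dumps(
            {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Effect": "Allow",
                        "Action": ["s3:GetObject", "s3:PutObject"],
                        "Resource": "arn:aws:s3:::example-app-bucket/*",
                    }
                ],
            }
        ),
    )

    # An instance profile is what actually attaches the role to an instance.
    iam.create_instance_profile(InstanceProfileName="app-s3-access")
    iam.add_role_to_instance_profile(
        InstanceProfileName="app-s3-access", RoleName="app-s3-access"
    )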

Full Access
Question # 120

A company has a social media application that is experiencing rapid user growth. The current architecture uses t-family Amazon EC2 instances. The current architecture struggles to handle the increasing number of user posts and images. The application experiences performance slowdowns during peak usage times.

A solutions architect needs to design an updated architecture that will resolve the performance issues and scale as usage increases.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use the largest Amazon EC2 instance in the same family to host the application. Install a relational database on the instance to store all account information and to store posts and images.

B.

Use Amazon Simple Queue Service (Amazon SQS) to buffer incoming posts. Use a larger EC2 instance in the same family to host the application. Store account information in Amazon DynamoDB. Store posts and images in the local EC2 instance file system.

C.

Use an Amazon API Gateway REST API and AWS Lambda functions to process requests. Store account information in Amazon DynamoDB. Use Amazon S3 to store posts and images.

D.

Deploy multiple EC2 instances in the same family. Use an Application Load Balancer to distribute traffic. Use a shared file system to store account information and to store posts and images.

Full Access
Question # 121

A company provides a trading platform to customers. The platform uses an Amazon API Gateway REST API, AWS Lambda functions, and an Amazon DynamoDB table. Each trade that the platform processes invokes a Lambda function that stores the trade data in Amazon DynamoDB. The company wants to ingest trade data into a data lake in Amazon S3 for near real-time analysis. Which solution will meet these requirements with the LEAST operational overhead?

A.

Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon S3.

B.

Use Amazon DynamoDB Streams to capture the trade data changes. Configure DynamoDB Streams to invoke a Lambda function that writes the data to Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.

C.

Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure Kinesis Data Streams to invoke a Lambda function that writes the data to Amazon S3.

D.

Enable Amazon Kinesis Data Streams on the DynamoDB table to capture the trade data changes. Configure a data stream to be the input for Amazon Data Firehose. Write the data from Data Firehose to Amazon S3.
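
To illustrate the plumbing that options C and D describe, here is a boto3 sketch that enables Kinesis streaming from the table and creates a Firehose stream that uses the data stream as its source. The table name, stream names, and IAM role ARNs are placeholders.

    import boto3

    dynamodb = boto3.client("dynamodb")
    firehose = boto3.client("firehose")

    # Stream item-level changes from the table into a Kinesis data stream
    # (the data stream must already exist).
    dynamodb.enable_kinesis_streaming_destination(
        TableName="trades",
        StreamArn="arn:aws:kinesis:us-east-1:111122223333:stream/trade-changes",
    )

    # Use the data stream as the source for a Firehose stream that
    # buffers and delivers the records to the S3 data lake.
    firehose.create_delivery_stream(
        DeliveryStreamName="trades-to-s3",
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/trade-changes",
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-kinesis",
        },
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-s3",
            "BucketARN": "arn:aws:s3:::example-trade-data-lake",
        },
    )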

Full Access
Question # 122

A company needs to migrate its customer transactions database from on premises to AWS. The database is an Oracle DB instance on Linux. A new requirement mandates rotating the database password yearly.

Which solution provides this capability with the least operational overhead?

A.

Convert the database to DynamoDB using AWS SCT. Store the password in Parameter Store. Use CloudWatch and Lambda for rotation.

B.

Migrate the database to Amazon RDS for Oracle. Store the password in AWS Secrets Manager. Turn on automatic rotation with a yearly rotation schedule.

C.

Migrate the database to an EC2 instance. Use Parameter Store to keep and rotate the connection string using a Lambda function with a yearly schedule.

D.

Migrate the database to Amazon Neptune using AWS SCT. Use CloudWatch and Lambda for yearly rotation.
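
For context, yearly rotation in AWS Secrets Manager is a single API call once a rotation function exists. A sketch with placeholder secret and Lambda ARNs; for Amazon RDS, AWS provides rotation function templates.

    import boto3

    secrets = boto3.client("secretsmanager")

    # Turn on automatic rotation for the database secret once per year.
    secrets.rotate_secret(
        SecretId="prod/oracle/app-user",  # placeholder secret name
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-oracle-secret",
        RotationRules={"AutomaticallyAfterDays": 365},
    )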

Full Access
Question # 123

A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate the volume encryption check.

Which solution will meet these requirements?

A.

Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to run the API calls.

B.

Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls on an AWS Fargate task.

C.

Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources manually.

D.

Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is not encrypted.
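
The AWS Config approach in option D maps to the managed rule ENCRYPTED_VOLUMES. A minimal boto3 sketch:

    import boto3

    config = boto3.client("config")

    # Deploy the AWS managed rule that flags unencrypted EBS volumes.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "ebs-volumes-encrypted",
            "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        }
    )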

Full Access
Question # 124

A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead.

Which solution will meet these requirements?

A.

Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.

B.

Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

C.

Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

D.

Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:*) event. Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
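
For reference, the Same-Region Replication setup in option B is a replication configuration on each source bucket. A sketch, assuming versioning is already enabled on both buckets and using placeholder names and a placeholder replication role:

    import boto3

    s3 = boto3.client("s3")

    # Replicate new objects from an application bucket to the centralized
    # bucket in the same Region.
    s3.put_bucket_replication(
        Bucket="app-logs-account-a",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication",
            "Rules": [
                {
                    "ID": "logs-to-central",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # replicate every object
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::central-log-analysis"},
                }
            ],
        },
    )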

Full Access
Question # 125

A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output as an object to Amazon S3.

Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the NAT instance's network. The company wants to access Amazon S3 without traversing the internet.

Which solution will meet these requirements?

A.

Replace the EC2 NAT instance with an AWS managed NAT gateway.

B.

Increase the size of the EC2 NAT instance in the VPC to a network-optimized instance type.

C.

Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.

D.

Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running.
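
A gateway endpoint, as described in option C, is created once and wired into the private subnets' route tables. A minimal sketch with placeholder IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a gateway endpoint for S3 and attach it to the route tables
    # of the private subnets where the Lambda function runs.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],
    )

The endpoint adds a route for the S3 prefix list automatically, so S3 traffic stays on the AWS network and bypasses the NAT instance entirely.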

Full Access
Question # 126

An insurance company runs an application on premises to process contracts. The application processes jobs that are composed of many tasks. The individual tasks run for up to 5 minutes. Some jobs can take up to 24 hours in total to finish. If a task fails, the task must be reprocessed.

The company wants to migrate the application to AWS. The company will use Amazon S3 as part of the solution. The company wants to configure jobs to start automatically when a contract is uploaded to an S3 bucket.

Which solution will meet these requirements?

A.

Use AWS Lambda functions to process individual tasks. Create a primary Lambda function to handle the overall job processing by calling individual Lambda functions in sequence. Configure the S3 bucket to send an event notification to invoke the primary Lambda function to begin processing.

B.

Use a state machine in AWS Step Functions to handle the overall contract processing job. Configure the S3 bucket to send an event notification to Amazon EventBridge. Create a rule in Amazon EventBridge to target the state machine.

C.

Use an AWS Batch job to handle the overall contract processing job. Configure the S3 bucket to send an event notification to initiate the Batch job.

D.

Use an S3 event notification to notify an Amazon Simple Queue Service (Amazon SQS) queue when a contract is uploaded. Configure an AWS Lambda function to read messages from the queue and to run the contract processing job.
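
To show the event wiring that option B describes, here is a boto3 sketch. The bucket name, rule name, and ARNs are placeholders; note that the bucket must be configured to emit events to EventBridge.

    import json

    import boto3

    s3 = boto3.client("s3")
    events = boto3.client("events")

    # Configure the bucket to send events to EventBridge.
    s3.put_bucket_notification_configuration(
        Bucket="contract-uploads",  # hypothetical bucket
        NotificationConfiguration={"EventBridgeConfiguration": {}},
    )

    # Match "Object Created" events for the bucket.
    events.put_rule(
        Name="start-contract-processing",
        EventPattern=json.dumps(
            {
                "source": ["aws.s3"],
                "detail-type": ["Object Created"],
                "detail": {"bucket": {"name": ["contract-uploads"]}},
            }
        ),
    )

    # Target the Step Functions state machine (ARNs are placeholders).
    events.put_targets(
        Rule="start-contract-processing",
        Targets=[
            {
                "Id": "contract-state-machine",
                "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:ProcessContract",
                "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-start-sfn",
            }
        ],
    )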

Full Access
Question # 127

A company wants to release a new device that will collect data to track overnight sleep on an intelligent mattress. Sensors will send data that will be uploaded to an Amazon S3 bucket. Each mattress generates about 2 MB of data each night.

An application must process the data and summarize the data for each user. The application must make the results available as soon as possible. Every invocation of the application will require about 1 GB of memory and will finish running within 30 seconds.

Which solution will run the application MOST cost-effectively?

A.

AWS Lambda with a Python script

B.

AWS Glue with a Scala job

C.

Amazon EMR with an Apache Spark script

D.

AWS Glue with a PySpark job

Full Access
Question # 128

An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the other two applications process the data.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application requests.

B.

Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the requesting application.

C.

Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom dataset. Point each application to its respective S3 bucket.

D.

Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom dataset. Point each application to its respective DynamoDB table.
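
For a sense of how option B's S3 Object Lambda transformation works, here is a sketch of the Lambda handler. The redact_pii function is a hypothetical placeholder for the actual PII-removal logic.

    import urllib.request

    import boto3

    s3 = boto3.client("s3")

    def redact_pii(data: bytes) -> bytes:
        # Hypothetical placeholder: a real implementation would parse the
        # records and strip the PII fields before returning the bytes.
        return data

    def handler(event, context):
        # S3 Object Lambda passes a presigned URL for the original object
        # plus routing tokens for returning the transformed result.
        ctx = event["getObjectContext"]
        original = urllib.request.urlopen(ctx["inputS3Url"]).read()

        s3.write_get_object_response(
            Body=redact_pii(original),
            RequestRoute=ctx["outputRoute"],
            RequestToken=ctx["outputToken"],
        )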

Full Access
Question # 129

A company runs a mobile game app that stores session data (up to 256 KB) for up to 48 hours. The data updates frequently and must be deleted automatically after expiration. The company must also be able to restore the data.

A.

Use an Amazon DynamoDB table to store the session data. Enable point-in-time recovery (PITR) and TTL.

B.

Use Amazon MemoryDB and enable PITR and TTL.

C.

Store session data in S3 Standard. Enable Versioning and a Lifecycle rule to expire objects after 48 hours.

D.

Store data in S3 Intelligent-Tiering with Versioning and a Lifecycle rule to expire after 48 hours.
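
To illustrate the DynamoDB mechanics in option A, here is a boto3 sketch that enables TTL and point-in-time recovery and writes a session item with a 48-hour expiry. The table name and attribute names are hypothetical.

    import time

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Auto-delete items when the epoch timestamp in "expires_at" passes.
    dynamodb.update_time_to_live(
        TableName="game-sessions",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # Point-in-time recovery covers the restorability requirement.
    dynamodb.update_continuous_backups(
        TableName="game-sessions",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )

    # When writing a session, set the expiry 48 hours ahead.
    dynamodb.put_item(
        TableName="game-sessions",
        Item={
            "session_id": {"S": "abc-123"},
            "expires_at": {"N": str(int(time.time()) + 48 * 3600)},
        },
    )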

Full Access
Question # 130

A company needs to set up a centralized solution to audit API calls to AWS for workloads that run on AWS services and non-AWS services. The company must store logs of the audits for 7 years.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Set up a data lake in Amazon S3. Incorporate AWS CloudTrail logs and logs from non-AWS services into the data lake. Use CloudTrail to store the logs for 7 years.

B.

Configure custom integrations for AWS CloudTrail Lake to collect and store CloudTrail events from AWS services and non-AWS services. Use CloudTrail to store the logs for 7 years.

C.

Enable AWS CloudTrail for AWS services. Ingest non-AWS services into CloudTrail to store the logs for 7 years.

D.

Create new Amazon CloudWatch Logs groups. Send the audit data from non-AWS services to the CloudWatch Logs groups. Enable AWS CloudTrail for workloads that run on AWS. Use CloudTrail to store the logs for 7 years.

Full Access
Question # 131

A company uses Amazon S3 to host its static website. The company wants to add a contact form to the webpage. The contact form will have dynamic server-side components for users to input their name, email address, phone number, and user message.

The company expects fewer than 100 site visits each month. The contact form must notify the company by email when a customer fills out the form.

Which solution will meet these requirements MOST cost-effectively?

A.

Host the dynamic contact form in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES) to connect to a third-party email provider.

B.

Create an Amazon API Gateway endpoint that returns the contact form from an AWS Lambda function. Configure another Lambda function on the API Gateway to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

C.

Host the website by using AWS Amplify Hosting for static content and dynamic content. Use server-side scripting to build the contact form. Configure Amazon Simple Queue Service (Amazon SQS) to deliver the message to the company.

D.

Migrate the website from Amazon S3 to Amazon EC2 instances that run Windows Server. Use Internet Information Services (IIS) for Windows Server to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.

Full Access
Question # 132

A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand.

Which migration solution will meet these requirements?

A.

Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.

B.

Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.

C.

Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.

D.

Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.

Full Access
Question # 133

A company is using an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The company must ensure that Kubernetes service accounts in the EKS cluster have secure and granular access to specific AWS resources by using IAM roles for service accounts (IRSA).

Which combination of solutions will meet these requirements? (Select TWO.)

A.

Create an IAM policy that defines the required permissions. Attach the policy directly to the IAM role of the EKS nodes.

B.

Implement network policies within the EKS cluster to prevent Kubernetes service accounts from accessing specific AWS services.

C.

Modify the EKS cluster's IAM role to include permissions for each Kubernetes service account. Ensure a one-to-one mapping between IAM roles and Kubernetes roles.

D.

Define an IAM role that includes the necessary permissions. Annotate the Kubernetes service accounts with the Amazon Resource Name (ARN) of the IAM role.

E.

Set up a trust relationship between the IAM roles for the service accounts and an OpenID Connect (OIDC) identity provider.
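
Options D and E together form the standard IRSA pattern: an OIDC trust relationship plus an annotated service account. A sketch of the role side, with placeholder account ID, OIDC provider ID, namespace, and service account name:

    import json

    import boto3

    iam = boto3.client("iam")

    # Values taken from the EKS cluster configuration (placeholders here).
    account = "111122223333"
    oidc = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": f"arn:aws:iam::{account}:oidc-provider/{oidc}"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    # Restrict the role to one service account in one namespace.
                    "StringEquals": {
                        f"{oidc}:sub": "system:serviceaccount:orders:orders-sa",
                        f"{oidc}:aud": "sts.amazonaws.com",
                    }
                },
            }
        ],
    }

    iam.create_role(
        RoleName="orders-irsa-role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

The Kubernetes service account is then annotated with eks.amazonaws.com/role-arn pointing at the role's ARN, and pods that use the service account receive temporary credentials for that role.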

Full Access
Question # 134

A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes. The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS). The company must be able to control rotation of the encryption keys.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create a customer managed key. Use the key to encrypt the EBS volumes.

B.

Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.

C.

Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.

D.

Use an AWS owned key to encrypt the EBS volumes.

Full Access
Question # 135

A company recently migrated its application to a VPC on AWS. An AWS Site-to-Site VPN connection connects the company's on-premises network to the VPC. The application retrieves customer data from another system that resides on premises. The application uses an on-premises DNS server to resolve domain records. After the migration, the application is not able to connect to the customer data because of name resolution errors.

Which solution will give the application the ability to resolve the internal domain names?

A.

Launch EC2 instances in the VPC. On the EC2 instances, deploy a custom DNS forwarder that forwards all DNS requests to the on-premises DNS server. Create an Amazon Route 53 private hosted zone that uses the EC2 instances for name servers.

B.

Create an Amazon Route 53 Resolver outbound endpoint. Configure the outbound endpoint to forward DNS queries against the on-premises domain to the on-premises DNS server.

C.

Set up two AWS Direct Connect connections between the AWS environment and the on-premises network. Set up a link aggregation group (LAG) that includes the two connections. Change the VPC resolver address to point to the on-premises DNS server.

D.

Create an Amazon Route 53 public hosted zone for the on-premises domain. Configure the network ACLs to forward DNS requests against the on-premises domain to the Route 53 public hosted zone.
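
The forwarding setup in option B involves three calls: create the outbound endpoint, create a forwarding rule, and associate the rule with the VPC. A sketch with placeholder IDs, a placeholder internal domain, and a placeholder on-premises DNS server address:

    import boto3

    r53r = boto3.client("route53resolver")

    # Outbound endpoint in two subnets (IDs are placeholders).
    endpoint = r53r.create_resolver_endpoint(
        CreatorRequestId="outbound-2024-01",
        Name="to-on-premises",
        Direction="OUTBOUND",
        SecurityGroupIds=["sg-0abc1234"],
        IpAddresses=[{"SubnetId": "subnet-0aaa1111"}, {"SubnetId": "subnet-0bbb2222"}],
    )["ResolverEndpoint"]

    # Forward queries for the internal domain to the on-premises DNS server.
    rule = r53r.create_resolver_rule(
        CreatorRequestId="forward-corp-2024-01",
        Name="forward-corp-example-com",
        RuleType="FORWARD",
        DomainName="corp.example.com",  # hypothetical internal domain
        ResolverEndpointId=endpoint["Id"],
        TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],  # on-premises DNS server
    )["ResolverRule"]

    # Associate the rule with the application VPC.
    r53r.associate_resolver_rule(
        ResolverRuleId=rule["Id"], VPCId="vpc-0abc1234", Name="app-vpc"
    )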

Full Access
Question # 136

A company has developed an API by using an Amazon API Gateway REST API and AWS Lambda functions. How can the company reduce latency for users worldwide?

A.

Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding to compress data in transit.

B.

Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding to compress data in transit.

C.

Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for Lambda functions.

D.

Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for Lambda functions.

Full Access
Question # 137

A company uses Amazon RDS for PostgreSQL to run its applications in the us-east-1 Region. The company also uses machine learning (ML) models to forecast annual revenue based on near real-time reports. The reports are generated by using the same RDS for PostgreSQL database. The database performance slows during business hours. The company needs to improve database performance.

Which solution will meet these requirements MOST cost-effectively?

A.

Create a cross-Region read replica. Configure the reports to be generated from the read replica.

B.

Activate Multi-AZ DB instance deployment for RDS for PostgreSQL. Configure the reports to be generated from the standby database.

C.

Use AWS Database Migration Service (AWS DMS) to logically replicate data to a new database. Configure the reports to be generated from the new database.

D.

Create a read replica in us-east-1. Configure the reports to be generated from the read replica.

Full Access
Question # 138

A company is building a new application that uses multiple serverless architecture components. The application architecture includes an Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.

The company needs a service to send messages that the REST API receives to multiple target Lambda functions for processing. The service must filter messages so each target Lambda function receives only the messages the function needs.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Send the requests from the REST API to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe multiple Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the SQS queues.

B.

Send the requests from the REST API to a set of Amazon EC2 instances that are configured to process messages. Configure the instances to filter messages and to invoke the target Lambda functions.

C.

Send the requests from the REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.

D.

Send the requests from the REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the SQS queues.
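
The fan-out-with-filtering pattern in option A hinges on SNS subscription filter policies. A sketch, assuming a placeholder message attribute named order_type and placeholder ARNs:

    import json

    import boto3

    sns = boto3.client("sns")

    # Subscribe a queue to the topic and filter so the queue (and the
    # Lambda function that polls it) only receives matching messages.
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:111122223333:incoming-requests",
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:111122223333:priority-orders",
        Attributes={
            "FilterPolicy": json.dumps({"order_type": ["priority"]}),
            "RawMessageDelivery": "true",
        },
    )

    # Publishers set a matching message attribute on each message.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:incoming-requests",
        Message=json.dumps({"order_id": "123"}),
        MessageAttributes={
            "order_type": {"DataType": "String", "StringValue": "priority"}
        },
    )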

Full Access
Question # 139

A company is deploying a critical application by using Amazon RDS for MySQL. The application must be highly available and must recover automatically. The company needs to support interactive users (transactional queries) and batch reporting (analytical queries) with no more than a 4-hour lag. The analytical queries must not affect the performance of the transactional queries.

Which solution will meet these requirements?

A.

Configure Amazon RDS for MySQL in a Multi-AZ DB instance deployment with one standby instance. Point the transactional queries to the primary DB instance. Point the analytical queries to a secondary DB instance that runs in a different Availability Zone.

B.

Configure Amazon RDS for MySQL in a Multi-AZ DB cluster deployment with two standby instances. Point the transactional queries to the primary DB instance. Point the analytical queries to the reader endpoint.

C.

Configure Amazon RDS for MySQL to use multiple read replicas across multiple Availability Zones. Point the transactional queries to the primary DB instance. Point the analytical queries to one of the replicas in a different Availability Zone.

D.

Configure Amazon RDS for MySQL as the primary database for the transactional queries, with automated backups enabled. Each night, create a read-only database from the most recent snapshot to support the analytical queries. Terminate the previously created database.

Full Access
Question # 140

A company has an application that receives and processes purchase orders. The application supports only XML data. The company needs to configure the application to accept orders in JSON format. The company does not want to modify the application.

A solutions architect is using an Amazon API Gateway HTTP API to create a new purchase order API. The solutions architect needs to modify the application DNS record to point to the new HTTP API.

Which solution will meet these requirements?

A.

Use an HTTP proxy integration to pass XML requests to the application. For JSON requests, use API Gateway mappings to convert the purchase orders to XML. Use an AWS Lambda function that is integrated with API Gateway to call the application.

B.

Use an HTTP proxy integration to pass XML requests to the application. For JSON requests, use an AWS Lambda function that is integrated with API Gateway to convert the purchase orders from JSON to XML and to call the application.

C.

Use an HTTP custom integration to pass XML requests to the application. For JSON requests, use API Gateway mappings to convert the purchase orders to XML. Use an AWS Lambda function that is integrated with API Gateway to call the application.

D.

Use an HTTP custom integration to pass XML requests to the application. For JSON requests, use an AWS Lambda function that is integrated with API Gateway to convert the purchase orders to JSON and to call the application.

Full Access
Question # 141

A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service to simplify development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes.

Which solution will meet these requirements MOST cost-effectively?

A.

Copy the records from the application into an Amazon Redshift cluster.

B.

Copy the records from the application into an Amazon Neptune cluster.

C.

Copy the records from the application into an Amazon Timestream database.

D.

Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.

Full Access
Question # 142

A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types.

Which solutions to deploy the SCP will meet these requirements? (Select TWO.)

A.

Attach the SCP to the root OU for the organization.

B.

Attach the SCP to the three nonproduction Organizations member accounts.

C.

Attach the SCP to the Organizations management account.

D.

Create an OU for the production account. Attach the SCP to the OU. Move the production member account into the new OU.

E.

Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts into the new OU.
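
As a sketch of option E's mechanics, the OU creation, account moves, and policy attachment look like this in boto3; account IDs and the policy ID are placeholders.

    import boto3

    org = boto3.client("organizations")

    root_id = org.list_roots()["Roots"][0]["Id"]

    # Create an OU for the nonproduction accounts and move them into it.
    ou = org.create_organizational_unit(
        ParentId=root_id, Name="NonProduction"
    )["OrganizationalUnit"]

    for account_id in ["111111111111", "222222222222", "333333333333"]:
        org.move_account(
            AccountId=account_id,
            SourceParentId=root_id,
            DestinationParentId=ou["Id"],
        )

    # Attach the existing SCP (ID is a placeholder) to the new OU only,
    # so the production account is not affected.
    org.attach_policy(PolicyId="p-examplescp", TargetId=ou["Id"])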

Full Access
Question # 143

A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The solution must automatically identify noncompliant resources and enforce compliance policies on findings.

Which solution will meet these requirements with the LEAST administrative overhead?

A.

Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.

B.

Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation of unencrypted EBS volumes.

C.

Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

D.

Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

Full Access
Question # 144

A company needs to connect its on-premises data center network to a new VPC. The data center network has a 100 Mbps symmetrical internet connection. An application that is running on premises will transfer multiple gigabytes of data each day. The application will use an Amazon Data Firehose delivery stream for processing.

What should a solutions architect recommend for maximum performance?

A.

Create a VPC peering connection between the on-premises network and the VPC. Configure routing for the on-premises network to use the VPC peering connection.

B.

Procure an AWS Snowball Edge Storage Optimized device. After several days' worth of data has accumulated, copy the data to the device and ship the device to AWS for expedited transfer to Firehose. Repeat as needed.

C.

Create an AWS Site-to-Site VPN connection between the on-premises network and the VPC. Configure BGP routing between the customer gateway and the virtual private gateway. Use the VPN connection to send the data from on premises to Firehose.

D.

Use AWS PrivateLink to create an interface VPC endpoint for Firehose in the VPC. Set up a 1 Gbps AWS Direct Connect connection between the on-premises network and AWS. Use the PrivateLink endpoint to send the data from on premises to Firehose.

Full Access
Question # 145

A company currently stores 5 TB of data in on-premises block storage systems. The company's current storage solution provides limited space for additional data. The company runs applications on premises that must be able to retrieve frequently accessed data with low latency. The company requires a cloud-based storage solution.

Which solution will meet these requirements with the MOST operational efficiency?

A.

Use Amazon S3 File Gateway. Integrate S3 File Gateway with the on-premises applications to store and directly retrieve files by using the SMB file system.

B.

Use an AWS Storage Gateway Volume Gateway with cached volumes as iSCSI targets.

C.

Use an AWS Storage Gateway Volume Gateway with stored volumes as iSCSI targets.

D.

Use an AWS Storage Gateway Tape Gateway. Integrate Tape Gateway with the on-premises applications to store virtual tapes in Amazon S3.

Full Access
Question # 146

A company uses AWS to host a public website. The load on the web servers recently increased.

The company wants to learn more about the traffic flow and traffic sources. The company also wants to increase the overall security of the website.

Which solution will meet these requirements?

A.

Deploy AWS WAF and set up logging. Use Amazon Data Firehose to deliver the log files to an Amazon S3 bucket for analysis.

B.

Deploy Amazon API Gateway and set up logging. Use Amazon Kinesis Data Streams to deliver the log files to an Amazon S3 bucket for analysis.

C.

Deploy a Network Load Balancer and set up logging. Use Amazon Data Firehose to deliver the log files to an Amazon S3 bucket for analysis.

D.

Deploy an Application Load Balancer and set up logging. Use Amazon Kinesis Data Streams to deliver the log files to an Amazon S3 bucket for analysis.

Full Access
Question # 147

A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications. The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application.

Which solution will meet these requirements?

A.

Set up a VPC peering connection for each VPC that needs access to the new application VPC. Update route tables in each VPC to enable connectivity.

B.

Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.

C.

Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs. Control access to the application by using an endpoint policy.

D.

Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application.
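
As background on option C, an AWS PrivateLink endpoint service fronts the application with a Network Load Balancer (an assumption here, since the question does not name one), and consumers connect through interface endpoints. A sketch with placeholder ARNs and IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Expose the application, fronted by an NLB, as an endpoint service.
    service = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=[
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/app-nlb/abc123"
        ],
        AcceptanceRequired=False,
    )["ServiceConfiguration"]

    # Allow specific consumer principals to create interface endpoints.
    ec2.modify_vpc_endpoint_service_permissions(
        ServiceId=service["ServiceId"],
        AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],  # consumer account
    )

    # In a consumer VPC, an interface endpoint connects to the service.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0consumer1",
        ServiceName=service["ServiceName"],
        SubnetIds=["subnet-0aaa1111"],
        SecurityGroupIds=["sg-0abc1234"],
    )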

Full Access
Question # 148

A company is designing the architecture for a new mobile app that uses the AWS Cloud. The company uses organizational units (OUs) in AWS Organizations to manage its accounts. The company wants to tag Amazon EC2 instances with a data sensitivity value of sensitive or nonsensitive. IAM identities must not be able to delete a tag or create instances without a tag.

Which combination of steps will meet these requirements? (Select TWO.)

A.

In Organizations, create a new tag policy that specifies the data sensitivity tag key and the required values. Enforce the tag values for the EC2 instances. Attach the tag policy to the appropriate OU.

B.

In Organizations, create a new service control policy (SCP) that specifies the data sensitivity tag key and the required tag values. Enforce the tag values for the EC2 instances. Attach the SCP to the appropriate OU.

C.

Create a tag policy to deny running instances when a tag key is not specified. Create another tag policy that prevents identities from deleting tags. Attach the tag policies to the appropriate OU.

D.

Create a service control policy (SCP) to deny creating instances when a tag key is not specified. Create another SCP that prevents identities from deleting tags. Attach the SCPs to the appropriate OU.

E.

Create an AWS Config rule to check if EC2 instances use the data sensitivity tag and the specified values. Configure an AWS Lambda function to delete the resource if a noncompliant resource is found.
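
To show what the SCP half of this enforcement can look like (as in options B and D), here is a sketch that creates and attaches a deny-based policy; the tag key DataSensitivity, the OU ID, and the policy name are placeholders. Constraining the allowed tag values themselves is what tag policies are designed for.

    import json

    import boto3

    org = boto3.client("organizations")

    # Deny launching untagged instances and deny deleting tags.
    scp = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyRunInstancesWithoutTag",
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {"Null": {"aws:RequestTag/DataSensitivity": "true"}},
            },
            {
                "Sid": "DenyDeletingTags",
                "Effect": "Deny",
                "Action": "ec2:DeleteTags",
                "Resource": "*",
            },
        ],
    }

    policy = org.create_policy(
        Name="require-data-sensitivity-tag",
        Description="Deny untagged EC2 launches and tag deletion",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )["Policy"]

    org.attach_policy(
        PolicyId=policy["PolicySummary"]["Id"],
        TargetId="ou-exampleid",  # placeholder OU ID
    )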

Full Access
Question # 149

A company has an application with a REST-based interface that allows data to be received in near-real time from a third-party vendor. Once received, the application processes and stores the data for further analysis. The application is running on Amazon EC2 instances.

The third-party vendor has received many 503 Service Unavailable errors when sending data to the application. When the data volume spikes, the compute capacity reaches its maximum limit, and the application is unable to process all requests.

Which design should a solutions architect recommend to provide a more scalable solution?

A.

Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.

B.

Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.

C.

Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.

D.

Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.

Full Access
Question # 150

A company runs Amazon EC2 instances as web servers. Peak traffic occurs at two predictable times each day. The web servers remain mostly idle during the rest of the day.

A solutions architect must manage the web servers while maintaining fault tolerance in the most cost-effective way.

Which solution will meet these requirements?

A.

Use an EC2 Auto Scaling group to scale the instances based on demand.

B.

Purchase Reserved Instances to ensure peak capacity at all times.

C.

Use a cron job to stop the EC2 instances when traffic demand is low.

D.

Use a script to vertically scale the EC2 instances during peak demand.

Full Access
Question # 151

A company is deploying a new application to a VPC on existing Amazon EC2 instances. The application has a presentation tier that uses an Auto Scaling group of EC2 instances. The application also has a database tier that uses an Amazon RDS Multi-AZ database.

The VPC has two public subnets that are split between two Availability Zones. A solutions architect adds one private subnet to each Availability Zone for the RDS database. The solutions architect wants to restrict network access to the RDS database to block access from EC2 instances that do not host the new application.

Which solution will meet this requirement?

A.

Modify the RDS database security group to allow traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.

B.

Associate a new ACL with the private subnets. Deny all incoming traffic from IP addresses that belong to any EC2 instance that does not host the new application.

C.

Modify the RDS database security group to allow traffic from the security group that is associated with the EC2 instances that host the new application.

D.

Associate a new ACL with the private subnets. Deny all incoming traffic except for traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.
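
Option C's security-group reference can be expressed in one call. This sketch assumes a MySQL-compatible engine on port 3306 and uses placeholder group IDs; referencing a security group rather than a CIDR range keeps the rule valid as the Auto Scaling group adds and removes instances.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow database traffic to the RDS security group only from the
    # security group attached to the application instances.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0db11111",  # RDS database security group
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                "UserIdGroupPairs": [{"GroupId": "sg-0app2222"}],  # app tier SG
            }
        ],
    )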

Full Access
Question # 152

A company needs a solution to ingest streaming sensor data from 100,000 devices, transform the data in near real time, and load the data into Amazon S3 for analysis. The solution must be fully managed, scalable, and maintain sub-second ingestion latency.

Which solution will meet these requirements?

A.

Use Amazon Kinesis Data Streams to ingest the data. Use Amazon Managed Service for Apache Flink to process the data in near real time. Use an Amazon Data Firehose stream to send processed data to Amazon S3.

B.

Use Amazon Simple Queue Service (Amazon SQS) standard queues to collect the sensor data. Invoke AWS Lambda functions to transform and process SQS messages in batches. Configure the Lambda functions to use an AWS SDK to write transformed data to Amazon S3.

C.

Deploy a fleet of Amazon EC2 instances that run Apache Kafka to ingest the data. Run Apache Spark on Amazon EMR clusters to process the data. Configure Spark to write processed data directly to Amazon S3.

D.

Implement Amazon EventBridge to capture all sensor data. Use AWS Batch to run containerized transformation jobs on a schedule. Configure AWS Batch jobs to process data in chunks. Save results to Amazon S3.

Full Access
Question # 154

A company is testing an application that runs on an Amazon EC2 Linux instance. A single 500 GB Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume is attached to the EC2 instance.

The company will deploy the application on multiple EC2 instances in an Auto Scaling group. All instances require access to the data that is stored in the EBS volume. The company needs a highly available and resilient solution that does not introduce significant changes to the application's code.

Which solution will meet these requirements?

A.

Provision an EC2 instance that uses NFS server software. Attach a single 500 GB gp2 EBS volume to the instance.

B.

Provision an Amazon FSx for Windows File Server file system. Configure the file system as an SMB file store within a single Availability Zone.

C.

Provision an EC2 instance with two 250 GB Provisioned IOPS SSD EBS volumes.

D.

Provision an Amazon Elastic File System (Amazon EFS) file system. Configure the file system to use General Purpose performance mode.

Full Access
Question # 155

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run Amazon Linux in an Auto Scaling group. Each instance stores product manuals on Amazon EBS volumes.

New instances often start with outdated data and can take up to 30 minutes to download updates. The company needs a solution that ensures all instances always have up-to-date product manuals, scales rapidly, and does not require application code changes.

Which solution will meet these requirements?

A.

Store the product manuals on instance store volumes attached to each EC2 instance.

B.

Store the product manuals in an Amazon S3 bucket. Configure EC2 instances to download updates from the bucket.

C.

Store the product manuals in an Amazon EFS file system. Mount the EFS volume on the EC2 instances.

D.

Store the product manuals in an S3 bucket using S3 Standard-IA. Configure EC2 instances to download updates from S3.

Full Access
Question # 156

A company hosts a web application in a VPC on AWS. A public Application Load Balancer (ALB) forwards connections from the internet to an Auto Scaling group of Amazon EC2 instances. The Auto Scaling group runs in private subnets across four Availability Zones.

The company stores data in an Amazon S3 bucket in the same Region. The EC2 instances use NAT gateways in each Availability Zone for outbound internet connectivity.

The company wants to optimize costs for its AWS architecture.

Which solution will meet this requirement?

A.

Reconfigure the Auto Scaling group and the ALB to use two Availability Zones instead of four. Do not change the desired count or scaling metrics for the Auto Scaling group to maintain application availability.

B.

Create a new, smaller VPC that still has sufficient IP address availability to run the application. Redeploy the application stack in the new VPC. Delete the existing VPC and its resources.

C.

Deploy an S3 gateway endpoint to the VPC. Configure the EC2 instances to access the S3 bucket through the S3 gateway endpoint.

D.

Deploy an S3 interface endpoint to the VPC. Configure the EC2 instances to access the S3 bucket through the S3 interface endpoint.

Full Access
Question # 157

A company uses an Amazon Aurora PostgreSQL DB cluster to store its critical data in the us-east-1 Region. The company wants to develop a disaster recovery plan to recover the database in the us-west-1 Region. The company has a recovery time objective (RTO) of 5 minutes and has a recovery point objective (RPO) of 1 minute.

What should a solutions architect do to meet these requirements?

A.

Create a read replica in us-west-1. Set the DB cluster to automatically fail over to the read replica if the primary instance is not responding.

B.

Create an Aurora global database. Set us-west-1 as the secondary Region. Update connections to use the writer and reader endpoints as appropriate.

C.

Set up a second Aurora DB cluster in us-west-1. Use logical replication to keep the databases synchronized. Create an Amazon EventBridge rule to change the database endpoint if the primary DB cluster does not respond.

D.

Use Aurora automated snapshots to store data in an Amazon S3 bucket. Enable S3 Versioning. Configure S3 Cross-Region Replication to us-west-1. Create a second Aurora DB cluster in us-west-1. Create an Amazon EventBridge rule to restore the snapshot if the primary DB cluster does not respond.
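
For orientation, option B's global database is built by promoting the existing cluster and adding a secondary cluster in the other Region. A simplified sketch (DB instances still need to be created in the secondary cluster); identifiers and the ARN are placeholders.

    import boto3

    # Promote the existing cluster into a global database (us-east-1).
    rds_east = boto3.client("rds", region_name="us-east-1")
    rds_east.create_global_cluster(
        GlobalClusterIdentifier="critical-data-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:critical-data",
    )

    # Add a secondary cluster in us-west-1 that replicates asynchronously.
    rds_west = boto3.client("rds", region_name="us-west-1")
    rds_west.create_db_cluster(
        DBClusterIdentifier="critical-data-west",
        Engine="aurora-postgresql",
        GlobalClusterIdentifier="critical-data-global",
    )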

Full Access
Question # 158

A company uses an Amazon EC2 Auto Scaling group to host an API. The EC2 instances are in a target group that is associated with an Application Load Balancer (ALB). The company stores data in an Amazon Aurora PostgreSQL database.

The API has a weekly maintenance window. The company must ensure that the API returns a static maintenance response during the weekly maintenance window.

Which solution will meet this requirement with the LEAST operational overhead?

A.

Create a table in Aurora PostgreSQL that has fields to contain keys and values. Create a key for a maintenance flag. Set the flag when the maintenance window starts. Configure the API to query the table for the maintenance flag and to return a maintenance response if the flag is set. Reset the flag when the maintenance window is finished.

B.

Create an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the EC2 instances to the queue. Publish a message to the queue when the maintenance window starts. Configure the API to return a maintenance message if the instances receive a maintenance start message from the queue. Publish another message to the queue when the maintenance window is finished to restore normal operation.

C.

Create a listener rule on the ALB to return a maintenance response when the path on a request matches a wildcard. Set the rule priority to one. Perform the maintenance. When the maintenance window is finished, delete the listener rule.

D.

Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the EC2 instances to the topic. Publish a message to the topic when the maintenance window starts. Configure the API to return a maintenance response if the instances receive the maintenance start message from the topic. Publish another message to the topic when the maintenance window finishes to restore normal operation.
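
Option C's listener rule can be created and later removed with two calls. A sketch with a placeholder listener ARN and an example maintenance body:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Highest-priority rule that matches every path and returns a static
    # maintenance response while the window is open.
    rule = elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def",
        Priority=1,
        Conditions=[{"Field": "path-pattern", "Values": ["*"]}],
        Actions=[
            {
                "Type": "fixed-response",
                "FixedResponseConfig": {
                    "StatusCode": "503",
                    "ContentType": "application/json",
                    "MessageBody": '{"message": "API under maintenance"}',
                },
            }
        ],
    )["Rules"][0]

    # After the maintenance window, delete the rule to restore traffic.
    elbv2.delete_rule(RuleArn=rule["RuleArn"])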

Full Access
Question # 159

A company has a static website that is hosted on Amazon CloudFront in front of Amazon S3. The static website uses a database backend. The company notices that the website does not reflect updates that have been made in the website's Git repository. The company checks the continuous integration and continuous delivery (CI/CD) pipeline between the Git repository and Amazon S3. The company verifies that the webhooks are configured properly and that the CI/CD pipeline is sending messages that indicate successful deployments.

A solutions architect needs to implement a solution that displays the updates on the website.

Which solution will meet these requirements?

A.

Add an Application Load Balancer.

B.

Add Amazon ElastiCache for Redis or Memcached to the database layer of the web application.

C.

Invalidate the CloudFront cache.

D.

Use AWS Certificate Manager (ACM) to validate the website's SSL certificate.

Full Access
Question # 160

A company wants to protect AWS-hosted resources, including Application Load Balancers and Amazon CloudFront distributions. The company needs near real-time visibility into attacks and access to a dedicated AWS response team during DDoS events.

Which AWS service meets these requirements?

A.

AWS WAF

B.

AWS Shield Standard

C.

Amazon Macie

D.

AWS Shield Advanced

Full Access
Question # 161

A company uses Amazon Route 53 as its DNS provider. The company hosts a website both on premises and in the AWS Cloud. The company's on-premises data center is near the us-west-1 Region. The company hosts the website on AWS in the eu-central-1 Region.

The company wants to optimize load times for the website as much as possible.

Which solution will meet these requirements?

A.

Create a DNS record with a failover routing policy that routes all primary traffic to eu-central-1. Configure the routing policy to use the on-premises data center as the secondary location.

B.

Create a DNS record with an IP-based routing policy. Configure specific IP ranges to return the value for the eu-central-1 website. Configure all other IP ranges to return the value for the on-premises website.

C.

Create a DNS record with a latency-based routing policy. Configure one latency record for the eu-central-1 website and one latency record for the on-premises data center. Associate the record for the on-premises data center with the us-west-1 Region.

D.

Create a DNS record with a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-premises data center.

Full Access
Question # 162

A company hosts dozens of multi-tier applications on AWS. The presentation layer and logic layer are Amazon EC2 Linux instances that use Amazon EBS volumes.

The company needs a solution to ensure that operating system vulnerabilities are not introduced to the EC2 instances when the company deploys new features. The company uses custom AMIs to deploy EC2 instances in an Auto Scaling group. The solution must scale to handle all applications that the company hosts.

Which solution will meet these requirements?

A.

Use Amazon Inspector to patch operating system vulnerabilities. Invoke Amazon Inspector when a new AMI is deployed.

B.

Use AWS Backup to back up the EBS volume of each updated instance. Use the EBS backup volumes to create new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.

C.

Use AWS Systems Manager Patch Manager to patch operating system vulnerabilities in the custom AMIs.

D.

Use EC2 Image Builder to create new AMIs when the company deploys new features. Include the update-linux component in the build components of the new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.

Full Access
Question # 163

A company wants to provide a third-party system that runs in a private data center with access to its AWS account. The company wants to call AWS APIs directly from the third-party system. The company has an existing process for managing digital certificates. The company does not want to use SAML or OpenID Connect (OIDC) capabilities and does not want to store long-term AWS credentials.

Which solution will meet these requirements?

A.

Configure mutual TLS to allow authentication of the client and server sides of the communication channel.

B.

Configure AWS Signature Version 4 to authenticate incoming HTTPS requests to AWS APIs.

C.

Configure Kerberos to exchange tickets for assertions that can be validated by AWS APIs.

D.

Configure AWS Identity and Access Management (IAM) Roles Anywhere to exchange X.509 certificates for AWS credentials to interact with AWS APIs.

Full Access
Question # 164

A company's software development team needs an Amazon RDS Multi-AZ cluster. The RDS cluster will serve as a backend for a desktop client that is deployed on premises. The desktop client requires direct connectivity to the RDS cluster.

The company must give the development team the ability to connect to the cluster by using the client when the team is in the office.

Which solution provides the required connectivity MOST securely?

A.

Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office.

B.

Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office.

C.

Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use RDS security groups to allow the company's office IP ranges to access the cluster.

D.

Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Create a cluster user for each developer. Use RDS security groups to allow the users to access the cluster.

Full Access
Question # 165

A company is building a data processing application that uses AWS Lambda functions. The Lambda functions need to communicate with an Amazon RDS DB instance deployed inside a VPC in the same AWS account.

Which solution meets these requirements in the most secure way?

A.

Configure the DB instance for public access. Allow inbound traffic from the public AWS Lambda address space.

B.

Deploy Lambda inside the VPC. Attach a network ACL allowing outbound access to the VPC CIDR. Update the DB security group to allow traffic from 0.0.0.0/0.

C.

Deploy Lambda inside the VPC. Attach a security group to the Lambda functions. Allow outbound access only to the VPC CIDR. Update the DB instance security group to allow traffic from the Lambda security group.

D.

Peer the Lambda default VPC with the DB VPC and avoid security groups.

Full Access
Question # 166

A company runs a production database on Amazon RDS for MySQL. The company wants to upgrade the database version for security compliance reasons. Because the database contains critical data, the company wants a quick solution to upgrade and test functionality without losing any data.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.

B.

Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for MySQL.

C.

Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version of Amazon RDS for MySQL.

D.

Use Amazon RDS Blue/Green Deployments to deploy and test production changes.

Full Access
Question # 167

A company hosts a web application on Amazon EC2 instances that are part of an Auto Scaling group behind an Application Load Balancer (ALB). The application experiences spikes in requests that come through the ALB throughout each day. The traffic spikes last between 15 and 20 minutes.

The company needs a solution that uses a standard or custom metric to scale the EC2 instances based on the number of requests that come from the ALB.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure an Amazon CloudWatch alarm to monitor the ALB RequestCount metric. Configure a simple scaling policy to scale the EC2 instances in response to the metric.

B.

Configure a predictive scaling policy based on the ALB RequestCount metric to scale the EC2 instances.

C.

Configure an Amazon CloudWatch alarm to monitor the ALB UnhealthyHostCount metric. Configure a target tracking policy to scale the EC2 instances in response to the metric.

D.

Create an Amazon CloudWatch alarm to monitor a user-defined metric for GET requests. Configure a target tracking policy threshold to scale the EC2 instances.
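
One way to act on the ALB request count without defining a custom metric is a target tracking policy on the predefined ALBRequestCountPerTarget metric, which combines the metric in option A with the policy type in option D. A sketch with placeholder names and an assumed target of 1,000 requests per instance:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale on the average number of ALB requests per target. The
    # ResourceLabel (ALB plus target group identifiers) is a placeholder.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="requests-per-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ALBRequestCountPerTarget",
                "ResourceLabel": "app/my-alb/1234567890abcdef/targetgroup/my-tg/abcdef1234567890",
            },
            "TargetValue": 1000.0,  # desired requests per instance
        },
    )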

Full Access
Question # 168

A company runs a critical three-tier web application that consists of multiple virtual machines (VMs) and virtual databases in an on-premises environment. The company wants to set up a disaster recovery (DR) environment in AWS.

The company requires a 15-minute recovery time objective (RTO). The company must be able to test the failover solution to validate the recovery. The solution must provide an automated failover mechanism.

Which solution will meet these requirements?

A.

Use AWS Backup to create backups of the on-premises VMs and to restore the backups in AWS. Configure recovery to Amazon EC2 instances to meet the RTO requirement.

B.

Use AWS Database Migration Service (AWS DMS) to replicate the on-premises databases to Amazon RDS. Set up AWS Storage Gateway for baseline and incremental data replication to AWS to meet the RTO requirement.

C.

Use AWS DataSync and AWS Storage Gateway to migrate the baseline and incremental data to AWS. Use Amazon EC2, Amazon S3, and an Application Load Balancer to set up the DR environment.

D.

Use AWS Elastic Disaster Recovery to replicate the VMs incrementally to AWS. Configure Elastic Disaster Recovery to automate the DR process.

Full Access
Question # 169

A company uses an Amazon CloudFront distribution to serve thousands of media files to users. The CloudFront distribution uses a private Amazon S3 bucket as an origin.

A solutions architect must prevent users in specific countries from accessing the company's files.

Which solution will meet these requirements in the MOST operationally-efficient way?

A.

Require users to access the files by using CloudFront signed URLs.

B.

Configure geographic restrictions in CloudFront.

C.

Require users to access the files by using CloudFront signed cookies.

D.

Configure an origin access control (OAC) between CloudFront and the S3 bucket.
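
For reference, CloudFront geographic restrictions (option B's mechanism) are part of the distribution configuration. A sketch with a hypothetical distribution ID and example country codes:

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1A2B3C4D5E6F7"  # hypothetical distribution ID

# update_distribution requires the full current config plus its ETag.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",  # deny the listed countries
        "Quantity": 2,
        "Items": ["CA", "MX"],           # ISO 3166-1 alpha-2 codes (example values)
    }
}
cloudfront.update_distribution(
    Id=dist_id, IfMatch=current["ETag"], DistributionConfig=config
)
```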

Question # 170

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately without the data traveling across the internet. The company has no existing dedicated connectivity to AWS.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

A.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.

B.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.

C.

Create an Amazon S3 interface endpoint in the networking account.

D.

Create an Amazon S3 gateway endpoint in the networking account.

E.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the networking account.

Question # 171

A company has built an application that uses an Amazon Simple Queue Service (Amazon SQS) standard queue and an AWS Lambda function. The Lambda function writes messages to the SQS queue. The company needs a solution to ensure that the consumer of the SQS queue never receives duplicate messages.

Which solution will meet this requirement with the FEWEST changes to the current architecture?

A.

Modify the SQS queue to enable long polling for the queue.

B.

Delete the existing SQS queue. Recreate the queue as a FIFO queue. Enable content-based deduplication for the queue.

C.

Modify the SQS queue to enable content-based deduplication for the queue.

D.

Delete the SQS queue. Create an Amazon MQ message broker. Configure the broker to deduplicate messages.
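
For reference, a FIFO queue with content-based deduplication (the mechanism option B relies on) looks like this in boto3; the queue name is an example:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo". Content-based deduplication hashes
# the message body, so identical messages sent within the 5-minute
# deduplication window are delivered only once.
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)
print(queue["QueueUrl"])
```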

Question # 172

An ecommerce company is launching a new marketing campaign. The company anticipates that the campaign will generate ten times the normal number of daily orders through the company's ecommerce application. The campaign will last 3 days.

The ecommerce application architecture is based on Amazon EC2 instances in an Auto Scaling group and an Amazon RDS for MySQL database. The application writes order transactions to an Amazon Elastic File System (Amazon EFS) file system before the application writes orders to the database. During normal operations, the application write operations peak at 5,000 IOPS.

A solutions architect needs to ensure that the application can handle the anticipated workload during the marketing campaign.

Which solution will meet this requirement?

A.

For the duration of the campaign, increase the provisioned IOPS for the RDS for MySQL database. Set the Amazon EFS throughput mode to Bursting throughput.

B.

For the duration of the campaign, increase the provisioned IOPS for the RDS for MySQL database. Set the Amazon EFS throughput mode to Elastic throughput.

C.

Convert the database to a Multi-AZ deployment. Set the Amazon EFS throughput mode to Elastic throughput for the duration of the campaign.

D.

Use AWS Database Migration Service (AWS DMS) to convert the database to RDS for PostgreSQL. Set the Amazon EFS throughput mode to Bursting throughput.
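
For reference, both knobs the options discuss are single API calls. A sketch with hypothetical identifiers; note that an Iops setting applies only to Provisioned IOPS (io1/io2) or gp3 storage:

```python
import boto3

efs = boto3.client("efs")
rds = boto3.client("rds")

# Switch the file system to Elastic throughput for the spiky campaign load.
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",   # hypothetical file system ID
    ThroughputMode="elastic",
)

# Raise provisioned IOPS on the database ahead of the campaign.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql",   # hypothetical instance name
    Iops=50000,
    ApplyImmediately=True,
)
```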

Question # 173

A solutions architect is designing a multi-Region disaster recovery (DR) strategy for a company. The company runs an application on Amazon EC2 instances in Auto Scaling groups that are behind an Application Load Balancer (ALB). The company hosts the application in the company's primary and secondary AWS Regions.

The application must respond to DNS queries from the secondary Region if the primary Region fails. Only one Region must serve traffic at a time.

Which solution will meet these requirements?

A.

Create an outbound endpoint in Amazon Route 53 Resolver. Create forwarding rules that determine how queries will be forwarded to DNS resolvers on the network. Associate the rules with VPCs in each Region.

B.

Create primary and secondary DNS records in Amazon Route 53. Configure health checks and a failover routing policy.

C.

Create a traffic policy in Amazon Route 53. Use a geolocation routing policy and a value type of ELB Application Load Balancer.

D.

Create an Amazon Route 53 profile. Associate DNS resources to the profile. Associate the profile with VPCs in each Region.
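
For reference, active-passive DNS failover (option B's mechanism) pairs a PRIMARY and a SECONDARY alias record that share a name. A sketch; the hosted zone ID, ALB DNS names, ALB canonical zone IDs, and health check ID are hypothetical:

```python
import boto3

route53 = boto3.client("route53")

def failover_record(failover, alb_dns, alb_zone_id, health_check_id=None):
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": f"app-{failover.lower()}",
        "Failover": failover,                  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,       # the ALB's canonical zone ID
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY",
                        "primary-alb-123.us-east-1.elb.amazonaws.com",
                        "ZALBZONEIDEAST1",
                        health_check_id="11111111-2222-3333-4444-555555555555"),
        failover_record("SECONDARY",
                        "secondary-alb-456.us-west-2.elb.amazonaws.com",
                        "ZALBZONEIDWEST2"),
    ]},
)
```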

Question # 174

A company hosts multiple applications on AWS for different product lines. The applications use different compute resources, including Amazon EC2 instances and Application Load Balancers. The applications run in different AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each product line have tagged each compute resource in the individual accounts.

The company wants more details about the cost for each product line from the consolidated billing feature in Organizations.

Which combination of steps will meet these requirements? (Select TWO.)

A.

Select a specific AWS-generated tag in the AWS Billing console.

B.

Select a specific user-defined tag in the AWS Billing console.

C.

Select a specific user-defined tag in the AWS Resource Groups console.

D.

Activate the selected tag from each AWS account.

E.

Activate the selected tag from the Organizations management account.
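
For reference, user-defined cost allocation tags are activated once, from the management (payer) account, through the Cost Explorer API; the tag key below is an example:

```python
import boto3

ce = boto3.client("ce")

# Run from the Organizations management account. Once the tag key is
# Active, costs can be grouped by it in Cost Explorer and billing reports.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "product-line", "Status": "Active"},  # hypothetical tag key
    ]
)
```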

Question # 175

An ecommerce company runs a PostgreSQL database on an Amazon EC2 instance. The database stores data in Amazon Elastic Block Store (Amazon EBS) volumes. The daily peak input/output operations per second (IOPS) do not exceed 15,000. The company wants to migrate the database to Amazon RDS for PostgreSQL and to provision disk IOPS performance that is independent of disk storage capacity.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure General Purpose SSD (gp2) EBS volumes. Provision a 5 TiB volume.

B.

Configure Provisioned IOPS SSD (io1) EBS volumes. Provision 15,000 IOPS.

C.

Configure General Purpose SSD (gp3) EBS volumes. Provision 15,000 IOPS.

D.

Configure magnetic EBS volumes to achieve maximum IOPS.
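
For reference, gp3 is the storage type that decouples IOPS from capacity. A sketch of provisioning an RDS for PostgreSQL instance with 15,000 IOPS on gp3; the identifier, instance class, and sizes are examples (gp3 requires a minimum volume size before extra IOPS can be provisioned):

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="catalog-postgres",  # hypothetical identifier
    Engine="postgres",
    DBInstanceClass="db.m6i.large",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,            # generate/store the password in Secrets Manager
    AllocatedStorage=400,                     # GiB; sized independently of IOPS
    StorageType="gp3",
    Iops=15000,                               # provisioned independently of capacity
)
```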

Question # 176

A company recently launched a new product that is highly available in one AWS Region. The product consists of an application that runs on Amazon Elastic Container Service (Amazon ECS), a public Application Load Balancer (ALB), and an Amazon DynamoDB table. The company wants a solution that will make the application highly available across Regions.

Which combination of steps will meet these requirements? (Select THREE.)

A.

In a different Region, deploy the application to a new ECS cluster that is accessible through a new ALB.

B.

Create an Amazon Route 53 failover record.

C.

Modify the DynamoDB table to create a DynamoDB global table.

D.

In the same Region, deploy the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is accessible through a new ALB.

E.

Modify the DynamoDB table to create global secondary indexes (GSIs).

F.

Create an AWS PrivateLink endpoint for the application.
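
For reference, an existing DynamoDB table becomes a global table by adding a replica Region (option C's mechanism). The table name and Regions are examples:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Adding a replica converts the table to a global table (version 2019.11.21).
dynamodb.update_table(
    TableName="product-catalog",              # hypothetical table name
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},
    ],
)
```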

Question # 177

An ecommerce company is planning to migrate an on-premises Microsoft SQL Server database to the AWS Cloud. The company needs to migrate the database to SQL Server Always On availability groups. The cloud-based solution must be highly available.

Which solution will meet these requirements?

A.

Deploy three Amazon EC2 instances with SQL Server across three Availability Zones. Attach one Amazon Elastic Block Store (Amazon EBS) volume to the EC2 instances.

B.

Migrate the database to Amazon RDS for SQL Server. Configure a Multi-AZ deployment and read replicas.

C.

Deploy three Amazon EC2 instances with SQL Server across three Availability Zones. Use Amazon FSx for Windows File Server as the storage tier.

D.

Deploy three Amazon EC2 instances with SQL Server across three Availability Zones. Use Amazon S3 as the storage tier.

Question # 178

A company wants to restrict access to the content of its web application. The company needs to protect the content by using authorization techniques that are available on AWS. The company also wants to implement a serverless architecture for authorization and authentication that has low login latency.

The solution must integrate with the web application and serve web content globally. The application currently has a small user base, but the company expects the application's user base to increase.

Which solution will meet these requirements?

A.

Configure Amazon Cognito for authentication. Implement Lambda@Edge for authorization. Configure Amazon CloudFront to serve the web application globally.

B.

Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.

C.

Configure Amazon Cognito for authentication. Implement AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.

D.

Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.

Question # 179

An ecommerce company runs a multi-tier application on AWS. The frontend and backend tiers both run on Amazon EC2 instances. The database tier runs on an Amazon RDS for MySQL DB instance. The backend tier communicates with the RDS DB instance.

The application makes frequent calls to return identical datasets from the database. The frequent calls on the database cause performance slowdowns. A solutions architect must improve the performance of the application backend.

Which solution will meet this requirement?

A.

Configure an Amazon Simple Notification Service (Amazon SNS) topic between the EC2 instances and the RDS DB instance.

B.

Configure an Amazon ElastiCache (Redis OSS) cache. Configure the backend EC2 instances to read from the cache.

C.

Configure an Amazon DynamoDB Accelerator (DAX) cluster. Configure the backend EC2 instances to read from the cluster.

D.

Configure Amazon Data Firehose to stream the calls to the database.
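
For reference, the usual pattern behind option B is cache-aside: check ElastiCache first and fall back to the database on a miss. A minimal sketch assuming the redis-py and PyMySQL client libraries; the endpoint, key scheme, and TTL are examples:

```python
import json

import pymysql  # assumption: PyMySQL driver for the RDS for MySQL backend
import redis    # assumption: redis-py client for ElastiCache (Redis OSS)

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_dataset(query_key: str, sql: str, conn: pymysql.connections.Connection):
    """Cache-aside read: serve repeated identical queries from the cache."""
    cached = cache.get(query_key)
    if cached is not None:
        return json.loads(cached)      # cache hit: the database is not touched
    with conn.cursor() as cur:         # cache miss: query RDS once
        cur.execute(sql)
        rows = cur.fetchall()
    cache.setex(query_key, 300, json.dumps(rows, default=str))  # 5-minute TTL
    return rows
```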

Question # 180

A company needs to collect streaming data from several sources and store the data in the AWS Cloud. The dataset is heavily structured, but analysts need to perform several complex SQL queries and need consistent performance. Some of the data is queried more frequently than the rest. The company wants a solution that meets its performance requirements in a cost-effective manner.

Which solution meets these requirements?

A.

Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to ingest the data and save it to Amazon S3. Use Amazon Athena to perform SQL queries over the ingested data.

B.

Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to ingest the data and save it to Amazon Redshift. Enable Amazon Redshift workload management (WLM) to prioritize workloads.

C.

Use Amazon Data Firehose to ingest the data and save it to Amazon Redshift. Enable Amazon Redshift workload management (WLM) to prioritize workloads.

D.

Use Amazon Data Firehose to ingest the data and save it to Amazon S3. Load frequently queried data to Amazon Redshift by using the COPY command. Use Amazon Redshift Spectrum for less frequently queried data.

Question # 181

A company runs an order management application on AWS. The application allows customers to place orders and pay with a credit card. The company uses an Amazon CloudFront distribution to deliver the application.

A security team has set up logging for all incoming requests. The security team needs a solution to generate an alert if any user modifies the logging configuration.

Which combination of steps will meet these requirements? (Select TWO.)

A.

Configure an Amazon EventBridge rule that is invoked when a user creates or modifies a CloudFront distribution. Add the AWS Lambda function as a target of the EventBridge rule.

B.

Create an Application Load Balancer (ALB). Enable AWS WAF rules for the ALB. Configure an AWS Config rule to detect security violations.

C.

Create an AWS Lambda function to detect changes in CloudFront distribution logging. Configure the Lambda function to use Amazon Simple Notification Service (Amazon SNS) to send notifications to the security team.

D.

Set up Amazon GuardDuty. Configure GuardDuty to monitor findings from the CloudFront distribution. Create an AWS Lambda function to address the findings.

E.

Create a private API in Amazon API Gateway. Use AWS WAF rules to protect the private API from common security problems.

Question # 182

A company needs to migrate its customer transactions database from on-premises to AWS. The database resides on an Oracle DB instance that runs on a Linux server. According to a new security requirement, the company must rotate the database password each year.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Convert the database to Amazon DynamoDB by using AWS Schema Conversion Tool (AWS SCT). Store the password in AWS Systems Manager Parameter Store. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function for yearly password rotation.

B.

Migrate the database to Amazon RDS for Oracle. Store the password in AWS Secrets Manager. Turn on automatic rotation. Configure a yearly rotation schedule.

C.

Migrate the database to an Amazon EC2 instance. Use AWS Systems Manager Parameter Store to keep and rotate the connection string by using an AWS Lambda function on a yearly schedule.

D.

Migrate the database to Amazon Neptune by using AWS Schema Conversion Tool (AWS SCT). Create an Amazon CloudWatch alarm to invoke an AWS Lambda function for yearly password rotation.
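
For reference, Secrets Manager rotation (option B's mechanism) is enabled with a single call once a rotation Lambda function exists; the secret name and function ARN are hypothetical:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

secretsmanager.rotate_secret(
    SecretId="prod/oracle/master",  # hypothetical secret name
    RotationLambdaARN=(
        "arn:aws:lambda:us-east-1:111122223333"
        ":function:SecretsManagerOracleRotation"  # hypothetical rotation function
    ),
    RotationRules={"ScheduleExpression": "rate(365 days)"},  # yearly rotation
)
```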

Question # 183

A company needs a solution to automate email ingestion. The company needs to automatically parse email messages, look for email attachments, and save any attachments to an Amazon S3 bucket in near real time. Email volume varies significantly from day to day.

Which solution will meet these requirements?

A.

Set up email receiving in Amazon Simple Email Service (Amazon SES). Create a rule set and a receipt rule. Create an AWS Lambda function that Amazon SES can invoke to process the email bodies and attachments.

B.

Set up email content filtering in Amazon Simple Email Service (Amazon SES). Create a content filtering rule based on sender, recipient, message body, and attachments.

C.

Set up email receiving in Amazon Simple Email Service (Amazon SES). Configure Amazon SES and S3 Event Notifications to process the email bodies and attachments.

D.

Create an AWS Lambda function to process the email bodies and attachments. Use Amazon EventBridge to invoke the Lambda function. Configure an EventBridge rule to listen for incoming emails.

Question # 184

A company is developing a monolithic Microsoft Windows-based application that will run on Amazon EC2 instances. The application will run long data-processing jobs that must not be interrupted. The company has modeled expected usage growth for the next 3 years. The company wants to optimize costs for the EC2 instances during the 3-year growth period.

Which solution will meet these requirements?

A.

Purchase a Compute Savings Plan with a 3-year commitment. Adjust the hourly commitment based on the plan recommendations.

B.

Purchase an EC2 Instance Savings Plan with a 3-year commitment. Adjust the hourly commitment based on the plan recommendations.

C.

Purchase a Compute Savings Plan with a 1-year commitment. Renew the purchase and adjust the capacity each year as necessary.

D.

Deploy the application on EC2 Spot Instances. Use an Auto Scaling group with a minimum size of 1 to ensure that the application is always running.

Question # 185

A company runs an application that uses Docker containers in an on-premises data center. The application runs on a container host that stores persistent data files in a local volume. Container instances use the stored persistent data.

The company wants to migrate the application to fully managed AWS services.

Which solution will meet these requirements?

A.

Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Attach an Amazon Elastic Block Store (Amazon EBS) volume to an Amazon EC2 instance. Mount the EBS volume on the containers to provide persistent storage.

B.

Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Mount the EFS volume on the containers to provide persistent storage.

C.

Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create an Amazon DynamoDB table. Configure the application to use the DynamoDB table for persistent storage.

D.

Use Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Mount the EFS volume on the containers to provide persistent storage.

Question # 186

A company is implementing a new policy to enhance the security of its AWS environment. The policy requires all administrative actions that users perform on the AWS Management Console to be secured by multi-factor authentication (MFA).

Which solution will allow the company to enforce this policy in the MOST operationally efficient way?

A.

Enable MFA on the root account. Ensure that all administrators use the root account to perform administrative actions.

B.

Create an IAM policy that requires MFA to be enabled for the IAM roles that administrators assume to perform administrative actions.

C.

Configure an Amazon CloudWatch alarm that sends an email notification when an administrator performs an administrative action without MFA.

D.

Use AWS Config to periodically audit IAM users and to automatically attach an IAM policy that requires MFA when AWS Config detects administrative actions.
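
For reference, option B's mechanism is typically an IAM policy that denies actions when the request carries no MFA context. A sketch of one common form; the policy name and the exempted self-service actions are illustrative:

```python
import json

import boto3

iam = boto3.client("iam")

# Deny everything except MFA self-enrollment when no MFA is present;
# attach the policy to the roles or groups that administrators use.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptWhenMFAPresent",
        "Effect": "Deny",
        "NotAction": [
            "iam:CreateVirtualMFADevice",
            "iam:EnableMFADevice",
            "iam:ListMFADevices",
            "sts:GetSessionToken",
        ],
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam.create_policy(
    PolicyName="RequireMFAForAdminActions",
    PolicyDocument=json.dumps(policy),
)
```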

Question # 187

A company is developing a new online gaming application. The application will run on Amazon EC2 instances in multiple AWS Regions and will have a high number of globally distributed users. A solutions architect must design the application to optimize network latency for the users.

Which actions should the solutions architect take to meet these requirements? (Select TWO.)

A.

Configure AWS Global Accelerator. Create Regional endpoint groups in each Region where an EC2 fleet is hosted.

B.

Create a content delivery network (CDN) by using Amazon CloudFront. Enable caching for static and dynamic content, and specify a high expiration period.

C.

Integrate AWS Client VPN into the application. Instruct users to select which Region is closest to them after they launch the application. Establish a VPN connection to that Region.

D.

Create an Amazon Route 53 weighted routing policy. Configure the routing policy to give the highest weight to the EC2 instances in the Region that has the largest number of users.

E.

Configure an Amazon API Gateway endpoint in each Region where an EC2 fleet is hosted. Instruct users to select which Region is closest to them after they launch the application. Use the API Gateway endpoint that is closest to them.

Question # 188

A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.

The company wants to optimize costs to archive data.

Which solution will meet this requirement?

A.

Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.

B.

Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

C.

Create an AWS Glue DataBrew job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

D.

Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

Question # 189

A media publishing company is building an application on AWS to give users the ability to print their own books. The application frontend runs on a Docker container.

The amount of incoming orders varies significantly. The incoming orders can temporarily exceed the throughput of the company's book printing machines. Order-processing payloads are up to 4 MB in size.

The company needs to develop a solution that can scale to handle incoming orders.

Which solution will meet this requirement?

A.

Use Amazon Simple Queue Service (Amazon SQS) to queue incoming orders. Create an AWS Lambda@Edge function to process orders. Deploy the frontend application on Amazon Elastic Kubernetes Service (Amazon EKS).

B.

Use Amazon Simple Notification Service (Amazon SNS) to queue incoming orders. Create an AWS Lambda function to process orders. Deploy the frontend application on AWS Fargate.

C.

Use Amazon Simple Queue Service (Amazon SQS) to queue incoming orders. Create an AWS Lambda function to process orders. Deploy the frontend application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type.

D.

Use Amazon Simple Notification Service (Amazon SNS) to queue incoming orders. Create an AWS Lambda@Edge function to process orders. Deploy the frontend application on Amazon EC2 instances.

Question # 190

A company runs a production application on a fleet of Amazon EC2 instances. The application reads messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in parallel. The message volume is unpredictable and highly variable.

The company must ensure that the application continually processes messages without any downtime.

Which solution will meet these requirements MOST cost-effectively?

A.

Use only Spot Instances to handle the maximum capacity required.

B.

Use only Reserved Instances to handle the maximum capacity required.

C.

Use Reserved Instances to handle the baseline capacity. Use Spot Instances to provide additional capacity when required.

D.

Use Reserved Instances in an EC2 Auto Scaling group to handle the minimum capacity. Configure an auto scaling policy that is based on the SQS queue backlog.
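
For reference, the "queue backlog" signal in option D is usually implemented as a custom backlog-per-instance metric that a target tracking policy then follows. A sketch with a hypothetical queue URL, group name, and target value:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")
sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/work-queue"  # hypothetical
ASG_NAME = "worker-asg"                                                    # hypothetical

# Publish backlog-per-instance; in practice this would run on a schedule
# (for example, from a Lambda function every minute).
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
instance_count = len(groups["AutoScalingGroups"][0]["Instances"]) or 1
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"]) / instance_count

cloudwatch.put_metric_data(
    Namespace="Custom/Workers",
    MetricData=[{"MetricName": "BacklogPerInstance", "Value": backlog}],
)

# Target tracking keeps the backlog near the target by adding or removing
# instances as the message volume swings.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="sqs-backlog-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "Custom/Workers",
            "Statistic": "Average",
        },
        "TargetValue": 10.0,  # e.g. each instance can absorb ~10 queued messages
    },
)
```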

Question # 191

A company is developing a social media application. The company anticipates rapid and unpredictable growth in users and data volume. The application needs to handle a continuous high volume of user requests. User requests include long-running processes that store large amounts of user-generated content and user profiles in a relational format. The processes must run in a specific order.

The company requires an architecture that can scale resources to meet demand spikes without downtime or performance degradation. The company must ensure that the components of the application can evolve independently without affecting other parts of the system.

Which combination of AWS services will meet these requirements?

A.

Deploy the application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Use Amazon RDS as the database. Use Amazon Simple Queue Service (Amazon SQS) to decouple message processing between components.

B.

Deploy the application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Use Amazon RDS as the database. Use Amazon Simple Notification Service (Amazon SNS) to decouple message processing between components.

C.

Use Amazon DynamoDB as the database. Use AWS Lambda functions to implement the application. Configure Amazon DynamoDB Streams to invoke the Lambda functions. Use AWS Step Functions to manage workflows between services.

D.

Use an AWS Elastic Beanstalk environment with auto scaling to deploy the application. Use Amazon RDS as the database. Use Amazon Simple Notification Service (Amazon SNS) to decouple message processing between components.

Question # 192

A company wants to deploy an internal web application on AWS. The web application must be accessible only from the company's office. The company needs to download security patches for the web application from the internet. The company has created a VPC and has configured an AWS Site-to-Site VPN connection to the company's office. A solutions architect must design a secure architecture for the web application.

Which solution will meet these requirements?

A.

Deploy the web application on Amazon EC2 instances in public subnets behind a public Application Load Balancer (ALB). Attach an internet gateway to the VPC. Set the inbound source of the ALB's security group to 0.0.0.0/0.

B.

Deploy the web application on Amazon EC2 instances in private subnets behind an internal Application Load Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the VPC. Set the inbound source of the ALB's security group to the company's office network CIDR block.

C.

Deploy the web application on Amazon EC2 instances in public subnets behind an internal Application Load Balancer (ALB). Deploy NAT gateways in private subnets. Attach an internet gateway to the VPC. Set the outbound destination of the ALB's security group to the company's office network CIDR block.

D.

Deploy the web application on Amazon EC2 instances in private subnets behind a public Application Load Balancer (ALB). Attach an internet gateway to the VPC. Set the outbound destination of the ALB's security group to 0.0.0.0/0.

Question # 193

A company runs database workloads on AWS that are the backend for the company's customer portals. The company runs a Multi-AZ database cluster on Amazon RDS for PostgreSQL.

The company needs to implement a 30-day backup retention policy. The company currently has both automated RDS backups and manual RDS backups. The company wants to maintain both types of existing RDS backups that are less than 30 days old.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure the RDS backup retention policy to 30 days for automated backups by using AWS Backup. Manually delete manual backups that are older than 30 days.

B.

Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days. Configure the RDS backup retention policy to 30 days for automated backups.

C.

Configure the RDS backup retention policy to 30 days for automated backups. Manually delete manual backups that are older than 30 days.

D.

Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days automatically by using AWS CloudFormation. Configure the RDS backup retention policy to 30 days for automated backups.

Question # 194

An ecommerce company hosts an API that handles sales requests. The company hosts the API frontend on Amazon EC2 instances that run behind an Application Load Balancer (ALB). The company hosts the API backend on EC2 instances that perform the transactions. The backend tiers are loosely coupled by an Amazon Simple Queue Service (Amazon SQS) queue.

The company anticipates a significant increase in request volume during a new product launch event. The company wants to ensure that the API can handle increased loads successfully.

Which solution will meet these requirements?

A.

Double the number of frontend and backend EC2 instances to handle the increased traffic during the product launch event. Create a dead-letter queue to retain unprocessed sales requests when the demand exceeds the system capacity.

B.

Place the frontend EC2 instances into an Auto Scaling group. Create an Auto Scaling policy to launch new instances to handle the incoming network traffic.

C.

Place the frontend EC2 instances into an Auto Scaling group. Add an Amazon ElastiCache cluster in front of the ALB to reduce the amount of traffic the API needs to handle.

D.

Place the frontend and backend EC2 instances into separate Auto Scaling groups. Create a policy for the frontend Auto Scaling group to launch instances based on incoming network traffic. Create a policy for the backend Auto Scaling group to launch instances based on the SQS queue backlog.

Question # 195

A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to the database come from serverless applications. Application traffic to the database changes significantly at random intervals. At times of high demand, users report that their applications experience database connection rejection errors.

Which solution will resolve this issue with the LEAST operational overhead?

A.

Create a proxy in RDS Proxy. Configure the users' applications to use the DB instance through RDS Proxy.

B.

Deploy Amazon ElastiCache (Memcached) between the users' applications and the DB instance.

C.

Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users' applications to use the new DB instance.

D.

Configure Multi-AZ for the DB instance. Configure the users' applications to switch between the DB instances.
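
For reference, RDS Proxy (option A) sits between the serverless clients and the database and pools connections. A sketch; the names, ARNs, and subnet IDs are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# The proxy pools and shares connections, so bursts of short-lived
# serverless clients no longer exhaust the DB instance's connection limit.
rds.create_db_proxy(
    DBProxyName="mysql-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333"
                     ":secret:prod/mysql-credentials",  # hypothetical secret
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",  # hypothetical
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Point the proxy at the existing DB instance; applications then connect
# to the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="mysql-proxy",
    DBInstanceIdentifiers=["prod-mysql"],  # hypothetical instance identifier
)
```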

Question # 196

A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company's security team must be able to track and audit all incremental changes to the infrastructure.

Which solution will meet these requirements?

A.

Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.

B.

Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.

C.

Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.

D.

Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

Question # 197

A company needs an automated solution to detect cryptocurrency mining activity on Amazon EC2 instances. The solution must automatically isolate any identified EC2 instances for forensic analysis.

Which solution will meet these requirements?

A.

Create an Amazon EventBridge rule that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.

B.

Create an AWS Security Hub custom action that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the custom action to invoke an AWS Lambda function to isolate the identified EC2 instances.

C.

Create an Amazon Inspector rule that runs when Amazon GuardDuty detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.

D.

Create an AWS Config custom rule that runs when AWS Config detects cryptocurrency mining activity. Configure the rule to invoke an AWS Lambda function to isolate the identified EC2 instances.
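
For reference, option A's wiring is an EventBridge rule that matches GuardDuty cryptocurrency findings and targets a Lambda function (the function would also need a lambda:InvokeFunction permission for EventBridge). Rule and function names are examples:

```python
import boto3

events = boto3.client("events")

# Match any GuardDuty finding whose type starts with "CryptoCurrency:EC2/".
events.put_rule(
    Name="guardduty-cryptomining",
    EventPattern=(
        '{"source": ["aws.guardduty"],'
        ' "detail-type": ["GuardDuty Finding"],'
        ' "detail": {"type": [{"prefix": "CryptoCurrency:EC2/"}]}}'
    ),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-cryptomining",
    Targets=[{
        "Id": "isolate-instance",
        "Arn": "arn:aws:lambda:us-east-1:111122223333"
               ":function:IsolateInstance",  # hypothetical isolation function
    }],
)
```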

Question # 198

A company uses a set of Amazon EC2 instances to host a website. The website uses an Amazon S3 bucket to store images and media files.

The company wants to automate website infrastructure creation to deploy the website to multiple AWS Regions. The company also wants to provide the EC2 instances access to the S3 bucket so the instances can store and access data by using AWS Identity and Access Management (IAM).

Which solution will meet these requirements MOST securely?

A.

Create an AWS CloudFormation template for the web server EC2 instances. Save an IAM access key in the UserData section of the AWS::EC2::Instance entity in the CloudFormation template.

B.

Create a file that contains an IAM secret access key and access key ID. Store the file in a new S3 bucket. Create an AWS CloudFormation template. In the template, create a parameter to specify the location of the S3 object that contains the access key and access key ID.

C.

Create an IAM role and an IAM access policy that allows the web server EC2 instances to access the S3 bucket. Create an AWS CloudFormation template for the web server EC2 instances that contains an IAM instance profile entity that references the IAM role and the IAM access policy.

D.

Create a script that retrieves an IAM secret access key and access key ID from IAM and stores them on the web server EC2 instances. Include the script in the UserData section of the AWS::EC2::Instance entity in an AWS CloudFormation template.

Question # 199

A company hosts an application on AWS. The application has generated approximately 2.5 TB of data over the previous 12 years. The company currently stores the data on Amazon EBS volumes.

The company wants a cost-effective backup solution for long-term storage. The company must be able to retrieve the data within minutes when required for audits.

Which solution will meet these requirements?

A.

Create EBS snapshots to back up the data.

B.

Create an Amazon S3 bucket. Use the S3 Glacier Deep Archive storage class to back up the data.

C.

Create an Amazon S3 bucket. Use the S3 Glacier Flexible Retrieval storage class to back up the data.

D.

Create an Amazon Elastic File System (Amazon EFS) file system to back up the data.

Question # 200

How can a law firm make files publicly readable while preventing modifications or deletions until a specific future date?

A.

Upload files to an Amazon S3 bucket configured for static website hosting. Grant read-only IAM permissions to any AWS principals.

B.

Create an S3 bucket. Enable S3 Versioning. Use S3 Object Lock with a retention period. Create a CloudFront distribution. Use a bucket policy to restrict access.

C.

Create an S3 bucket. Enable S3 Versioning. Configure an event trigger with AWS Lambda to restore modified objects from a private S3 bucket.

D.

Upload files to an S3 bucket for static website hosting. Use S3 Object Lock with a retention period. Grant read-only IAM permissions.

Question # 201

A company stores a file in an S3 bucket containing IP allow/deny lists. The file must be accessible via an HTTP endpoint. Firewalls outside AWS must read the file. The company wants to restrict access to only the firewall IP addresses.

The S3 Block Public Access feature is enabled on the account.

Which solution meets these requirements?

A.

Host the bucket as a static website and restrict access by IP.

B.

Create a bucket policy that explicitly allows access only from the firewall IP addresses.

C.

Create a CloudFront distribution with the S3 bucket as the origin. Use an origin access control (OAC) that allows access only from the firewall IP addresses.

D.

Create a Lambda function to validate IP addresses and return the lists.
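
For reference, a bucket policy that allows s3:GetObject only from specific source IP ranges (option B's mechanism) is generally not treated as "public" by S3, so it can coexist with Block Public Access. A sketch with a hypothetical bucket name, object key, and CIDR ranges:

```python
import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowFirewallIPsOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::ip-list-bucket/allow-deny.json",
        "Condition": {
            # Only the firewalls' egress addresses may read the file.
            "IpAddress": {"aws:SourceIp": ["198.51.100.10/32", "203.0.113.0/28"]}
        },
    }],
}

s3.put_bucket_policy(Bucket="ip-list-bucket", Policy=json.dumps(policy))
```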
