
Professional-Cloud-Network-Engineer Questions and Answers

Question # 6

Your company runs an enterprise platform on-premises using virtual machines (VMs). Your internet customers have created tens of thousands of DNS domains pointing to your public IP addresses allocated to the VMs. Typically, your customers hard-code your IP addresses in their DNS records. You are now planning to migrate the platform to Compute Engine, and you want to use Bring Your Own IP. You want to minimize disruption to the platform. What should you do?

A.

Create a VPC and request static external IP addresses from Google Cloud. Assign the IP addresses to the Compute Engine instances. Notify your customers of the new IP addresses so they can update their DNS records.

B.

Verify ownership of your IP addresses. After the verification, Google Cloud advertises and provisions the IP prefix for you. Assign the IP addresses to the Compute Engine instances.

C.

Create a VPC with the same IP address range as your on-premises network. Assign the IP addresses to the Compute Engine instances.

D.

Verify ownership of your IP addresses. Use live migration to import the prefix. Assign the IP addresses to Compute Engine instances.

Question # 7

You have an application running on Compute Engine that uses BigQuery to generate some results that are stored in Cloud Storage. You want to ensure that none of the application instances have external IP addresses.

Which two methods can you use to accomplish this? (Choose two.)

A.

Enable Private Google Access on all the subnets.

B.

Enable Private Google Access on the VPC.

C.

Enable Private Services Access on the VPC.

D.

Create network peering between your VPC and BigQuery.

E.

Create a Cloud NAT, and route the application traffic via NAT gateway.
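
Note: as a rough gcloud sketch (the subnet, network, router, and region names below are placeholders, not values from the question), enabling Private Google Access on a subnet and creating a Cloud NAT gateway look like this:

    # Enable Private Google Access on an existing subnet
    gcloud compute networks subnets update app-subnet \
        --region us-central1 \
        --enable-private-ip-google-access

    # Create a Cloud Router and a Cloud NAT gateway for outbound-only instances
    gcloud compute routers create nat-router \
        --network app-vpc --region us-central1 --asn 64514
    gcloud compute routers nats create nat-config \
        --router nat-router --region us-central1 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges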

Question # 8

You have two Google Cloud projects in a perimeter to prevent data exfiltration. You need to move a third project inside the perimeter; however, the move could negatively impact the existing environment. You need to validate the impact of the change. What should you do?

A.

Enable Firewall Rules Logging inside the third project.

B.

Modify the existing VPC Service Controls policy to include the new project in dry run mode.

C.

Monitor the Resource Manager audit logs inside the perimeter.

D.

Enable VPC Flow Logs inside the third project, and monitor the logs for negative impact.
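
Note: as an illustration (the perimeter name, access policy ID, and project number below are placeholders), adding a project to a perimeter's dry-run configuration can be done with the Access Context Manager CLI:

    # Add the third project to the perimeter in dry-run mode only
    gcloud access-context-manager perimeters dry-run update my-perimeter \
        --policy 1234567890 \
        --add-resources projects/111111111111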

Question # 9

Your organization is developing a landing zone architecture with the following requirements:

    There should be no communication between production and non-production environments.

    Communication between applications within an environment may be necessary.

    Network administrators should centrally manage all network resources, including subnets, routes, and firewall rules.

    Each application should be billed separately.

    Developers of an application within a project should have the autonomy to create their compute resources.

    Up to 1000 applications are expected per environment.

You need to create a design that accommodates these requirements. What should you do?

A.

Create a design where each project has its own VPC. Ensure all VPCs are connected by a Network Connectivity Center hub that is centrally managed by the network team.

B.

Create a design that implements a single Shared VPC. Use VPC firewall rules with secure tags to enforce micro-segmentation between environments.

C.

Create a design that has one host project with a Shared VPC for the production environment, another host project with a Shared VPC for the non-production environment, and a service project that is associated with the corresponding host project for each initiative.

D.

Create a design that has a Shared VPC for each project. Implement hierarchical firewall policies to apply micro-segmentation between VPCs.

Question # 10

You need to enable Cloud CDN for all the objects inside a storage bucket. You want to ensure that all the objects in the storage bucket can be served by the CDN.

What should you do in the GCP Console?

A.

Create a new cloud storage bucket, and then enable Cloud CDN on it.

B.

Create a new TCP load balancer, select the storage bucket as a backend, and then enable Cloud CDN on the backend.

C.

Create a new SSL proxy load balancer, select the storage bucket as a backend, and then enable Cloud CDN on the backend.

D.

Create a new HTTP load balancer, select the storage bucket as a backend, enable Cloud CDN on the backend, and make sure each object inside the storage bucket is shared publicly.
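
Note: a minimal gcloud sketch of serving a bucket through Cloud CDN (bucket and resource names are placeholders) attaches the bucket to an HTTP(S) load balancer as a backend bucket with CDN enabled:

    # Create a backend bucket with Cloud CDN enabled
    gcloud compute backend-buckets create static-assets-backend \
        --gcs-bucket-name my-static-assets-bucket \
        --enable-cdn

    # Use it as the default backend of the load balancer's URL map
    gcloud compute url-maps create web-map \
        --default-backend-bucket static-assets-backend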

Question # 11

You have a storage bucket that contains the following objects:

- folder-a/image-a-1.jpg

- folder-a/image-a-2.jpg

- folder-b/image-b-1.jpg

- folder-b/image-b-2.jpg

Cloud CDN is enabled on the storage bucket, and all four objects have been successfully cached. You want to remove the cached copies of all the objects with the prefix folder-a, using the minimum number of commands.

What should you do?

A.

Add an appropriate lifecycle rule on the storage bucket.

B.

Issue a cache invalidation command with pattern /folder-a/*.

C.

Make sure that all the objects with prefix folder-a are not shared publicly.

D.

Disable Cloud CDN on the storage bucket. Wait 90 seconds. Re-enable Cloud CDN on the storage bucket.
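
Note: for reference, a single path-based cache invalidation is issued against the load balancer's URL map (the URL map name is a placeholder):

    # Invalidate every cached object under /folder-a/ in one command
    gcloud compute url-maps invalidate-cdn-cache cdn-url-map \
        --path "/folder-a/*"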

Question # 12

You have deployed a proof-of-concept application by manually placing instances in a single Compute Engine zone. You are now moving the application to production, so you need to increase your application availability and ensure it can autoscale.

How should you provision your instances?

A.

Create a single managed instance group, specify the desired region, and select Multiple zones for the location.

B.

Create a managed instance group for each region, select Single zone for the location, and manually distribute instances across the zones in that region.

C.

Create an unmanaged instance group in a single zone, and then create an HTTP load balancer for the instance group.

D.

Create an unmanaged instance group for each zone, and manually distribute the instances across the desired zones.
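
Note: as a sketch (template, group, and region names are placeholders), a regional managed instance group spreads instances across multiple zones and can autoscale:

    # Create a regional managed instance group distributed across zones
    gcloud compute instance-groups managed create app-mig \
        --region us-central1 \
        --template app-template \
        --size 3

    # Enable autoscaling on the regional group
    gcloud compute instance-groups managed set-autoscaling app-mig \
        --region us-central1 \
        --min-num-replicas 3 \
        --max-num-replicas 10 \
        --target-cpu-utilization 0.6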

Question # 13


Your organization has an on-premises data center. You need to provide connectivity from the on-premises data center to Google Cloud. Bandwidth must be at least 1 Gbps, and the traffic must not traverse the internet. What should you do?

A.

Configure HA VPN by using high availability gateways and tunnels.

B.

Configure Dedicated Interconnect by creating a VLAN attachment, activate the connection, and submit the pairing key to your service provider.

C.

Configure Cross-Cloud Interconnect by creating a VLAN attachment, activate the connection, and then submit the pairing key to your service provider.

D.

Configure Partner Interconnect by creating a VLAN attachment, submit the pairing key to your service provider, and activate the connection.

Question # 14

Your frontend application VMs and your backend database VMs are all deployed in the same VPC but across different subnets. Global network firewall policy rules are configured to allow traffic from the frontend VMs to the backend VMs. Based on a recent compliance requirement, this traffic must now be inspected by network virtual appliance (NVA) firewalls that are deployed in the same VPC. The NVAs are configured to be full network proxies and will source-NAT allowed traffic. You need to configure VPC routing to allow the NVAs to inspect the traffic between subnets. What should you do?

A.

Place your NVAs behind an internal passthrough Network Load Balancer named ilb1. Add global network firewall policy rules to allow traffic through your NVAs. Create a custom static route with the destination IP range of the backend VM subnet, frontend instance tag, and the next hop of ilb1. Add a frontend network tag to your frontend VMs.

B.

Create your NVA with multiple interfaces. Configure NIC0 for NVA in the backend subnet. Configure NIC1 for NVA in the frontend subnet. Place your NVAs behind an internal passthrough Network Load Balancer named ilb1. Add global network firewall policy rules to allow traffic through your NVAs. Create a custom static route with the destination IP range of the backend VM subnet, frontend instance tag, and the next hop of ilb1. Add a frontend network tag to your frontend VMs.

C.

Place your NVAs behind an internal passthrough Network Load Balancer named ilb1. Add the global network firewall policy rules to allow traffic through your NVAs. Create a policy-based route (PBR) with the source IP range of the backend VM subnet, destination IP range of the frontend VM subnet, and the next hop of ilb1. Scope the PBR to the VMs with the backend network tag. Add a backend network tag to your backend servers.

D.

Place your NVAs behind an internal passthrough Network Load Balancer named ilb1. Add global network firewall policy rules to allow traffic through your NVAs. Create a policy-based route (PBR) with the source IP range of the frontend VM subnet, destination IP range of the backend VM subnet, and the next hop of ilb1. Scope the PBR to the VMs with the frontend network tag. Add a frontend network tag to your frontend servers.

Question # 15

You have created an HTTP(S) load balanced service. You need to verify that your backend instances are responding properly.

How should you configure the health check?

A.

Set request-path to a specific URL used for health checking, and set proxy-header to PROXY_V1.

B.

Set request-path to a specific URL used for health checking, and set host to include a custom host header that identifies the health check.

C.

Set request-path to a specific URL used for health checking, and set response to a string that the backend service will always return in the response body.

D.

Set proxy-header to the default value, and set host to include a custom host header that identifies the health check.

Question # 16

You have several microservices running in a private subnet in an existing Virtual Private Cloud (VPC). You need to create additional serverless services that use Cloud Run and Cloud Functions to access the microservices. The network traffic volume between your serverless services and private microservices is low. However, each serverless service must be able to communicate with any of your microservices. You want to implement a solution that minimizes cost. What should you do?

A.

Deploy your serverless services to the serverless VPC. Peer the serverless service VPC to the existing VPC. Configure firewall rules to allow traffic between the serverless services and your existing microservices.

B.

Create a serverless VPC access connector for each serverless service. Configure the connectors to allow traffic between the serverless services and your existing microservices.

C.

Deploy your serverless services to the existing VPC. Configure firewall rules to allow traffic between the serverless services and your existing microservices.

D.

Create a serverless VPC access connector. Configure the serverless service to use the connector for communication to the microservices.

Question # 17

Your organization has approximately 100 teams that need to manage their own environments. A central team must manage the network. You need to design a landing zone that provides separate projects for each team. You must also make sure the solution can scale. What should you do?

A.

Configure VPC Network Peering, and peer one of the VPCs to the service project.

B.

Configure a Shared VPC, and create a VPC network in the service project.

C.

Configure a Shared VPC, and create a VPC network in the host project.

D.

Configure Policy-based Routing for each team.
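
Note: for context, enabling Shared VPC on a central host project and attaching a team's service project is a two-step operation (project IDs are placeholders):

    # Enable the central network project as a Shared VPC host
    gcloud compute shared-vpc enable network-host-project

    # Attach one team's project as a service project
    gcloud compute shared-vpc associated-projects add team-a-project \
        --host-project network-host-project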

Question # 18

You are troubleshooting an issue where your organization's Cloud HA VPN is disconnected from your on-premises router for approximately 10 seconds before reestablishing the tunnel. The issue regularly occurs every few hours. You notice that the HA VPN logs show an entry of Received SA_DELETE when this issue occurs. You need to resolve this issue and prevent future VPN downtime from impacting your production applications. What should you do?

A.

Update the pre-shared key (PSK) of the on-premises router’s VPN tunnel configuration to match the PSK of the Cloud HA VPN.

B.

Update the on-premises router’s BGP router ID to reflect the link-local IP peer address assigned by Cloud Router.

C.

Update the on-premises router’s Phase 1 and Phase 2 lifetime IKE parameters to match the values in the Cloud HA VPN documentation.

D.

Update the on-premises router’s Diffie-Hellman groups and cipher proposal list to match the values in the Cloud HA VPN documentation.

Question # 19

Your organization has a hub and spoke architecture with VPC Network Peering, and hybrid connectivity is centralized at the hub. The Cloud Router in the hub VPC is advertising subnet routes, but the on-premises router does not appear to be receiving any subnet routes from the VPC spokes. You need to resolve this issue. What should you do?

A.

Create custom routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes.

B.

Create custom learned routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes.

C.

Create custom routes at the Cloud Router in the spokes to advertise the subnets of the VPC spokes.

D.

Create a BGP route policy at the Cloud Router, and ensure the subnets of the VPC spokes are being announced towards the on-premises environment.

Question # 20

(You are developing an internet of things (IoT) application that captures sensor data from multiple devices that have already been set up. You need to identify the global data storage product your company should use to store this data. You must ensure that the storage solution you choose meets your requirements of sub-millisecond latency. What should you do?)

A.

Store the IoT data in Spanner. Use caches to speed up the process and avoid latencies.

B.

Store the IoT data in Bigtable.

C.

Capture IoT data in BigQuery datasets.

D.

Store the IoT data in Cloud Storage. Implement caching by using Cloud CDN.

Question # 21


Your organization has approximately 100 teams that need to manage their own environments. A central team must manage the network. You need to design a landing zone that provides separate projects for each team and ensure the solution can scale. What should you do?

A.

Configure VPC Network Peering and peer one of the VPCs to the service project.

B.

Configure Policy-based Routing for each team.

C.

Configure a Shared VPC and create a VPC network in the host project.

D.

Configure a Shared VPC, and create a VPC network in the service project.

Question # 22

You are creating an instance group and need to create a new health check for HTTP(S) load balancing.

Which two methods can you use to accomplish this? (Choose two.)

A.

Create a new health check using the gcloud command line tool.

B.

Create a new health check using the VPC Network section in the GCP Console.

C.

Create a new health check, or select an existing one, when you complete the load balancer’s backend configuration in the GCP Console.

D.

Create a new legacy health check using the gcloud command line tool.

E.

Create a new legacy health check using the Health checks section in the GCP Console.
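
Note: for reference, a non-legacy HTTP health check can be created from the command line as follows (name, port, and path are placeholders):

    # Create an HTTP health check that can be attached to a backend service
    gcloud compute health-checks create http app-hc \
        --port 80 \
        --request-path /healthz \
        --check-interval 10s \
        --timeout 5s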

Question # 23


Your organization is developing a landing zone architecture with the following requirements:

    No communication between production and non-production environments.

    Communication between applications within an environment may be necessary.

    Network administrators should centrally manage all network resources, including subnets, routes, and firewall rules.

    Each application should be billed separately.

    Developers of an application within a project should have the autonomy to create their compute resources.

    Up to 1000 applications are expected per environment.

What should you do?

A.

Create a design that has a Shared VPC for each project. Implement hierarchical firewall policies to apply micro-segmentation between VPCs.

B.

Create a design where each project has its own VPC. Ensure all VPCs are connected by a Network Connectivity Center hub that is centrally managed by the network team.

C.

Create a design that implements a single Shared VPC. Use VPC firewall rules with secure tags to enforce micro-segmentation between environments.

D.

Create a design that has one host project with a Shared VPC for the production environment, another host project with a Shared VPC for the non-production environment, and a service project that is associated with the corresponding host project for each initiative.

Question # 24

You have configured Cloud CDN using HTTP(S) load balancing as the origin for cacheable content. Compression is configured on the web servers, but responses served by Cloud CDN are not compressed.

What is the most likely cause of the problem?

A.

You have not configured compression in Cloud CDN.

B.

You have configured the web servers and Cloud CDN with different compression types.

C.

The web servers behind the load balancer are configured with different compression types.

D.

You have to configure the web servers to compress responses even if the request has a Via header.

Question # 25

You want to implement an IPSec tunnel between your on-premises network and a VPC via Cloud VPN. You need to restrict reachability over the tunnel to specific local subnets, and you do not have a device capable of speaking Border Gateway Protocol (BGP).

Which routing option should you choose?

A.

Dynamic routing using Cloud Router

B.

Route-based routing using default traffic selectors

C.

Policy-based routing using a custom local traffic selector

D.

Policy-based routing using the default local traffic selector
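
Note: as an illustration, a Classic VPN tunnel with policy-based routing restricts reachability through explicit traffic selectors; a rough sketch with placeholder gateway, peer, and CIDR values:

    # Classic VPN tunnel with explicit traffic selectors (policy-based routing)
    gcloud compute vpn-tunnels create onprem-tunnel-1 \
        --region us-central1 \
        --target-vpn-gateway cloud-vpn-gw \
        --peer-address 203.0.113.10 \
        --shared-secret "example-secret" \
        --ike-version 2 \
        --local-traffic-selector 10.10.1.0/24 \
        --remote-traffic-selector 192.168.10.0/24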

Question # 26

You are designing a hybrid cloud environment for your organization. Your Google Cloud environment is interconnected with your on-premises network using Cloud HA VPN and Cloud Router. The Cloud Router is configured with the default settings. Your on-premises DNS server is located at 192.168.20.88 and is protected by a firewall, and your Compute Engine resources are located at 10.204.0.0/24. Your Compute Engine resources need to resolve on-premises private hostnames using the domain corp.altostrat.com while still resolving Google Cloud hostnames. You want to follow Google-recommended practices. What should you do?

A.

Create a private forwarding zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com that points to 192.168.20.88.

Configure your on-premises firewall to accept traffic from 10.204.0.0/24.

Set a custom route advertisement on the Cloud Router for 10.204.0.0/24

B.

Create a private forwarding zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com that points to 192.168.20.88.

Configure your on-premises firewall to accept traffic from 35.199.192.0/19

Set a custom route advertisement on the Cloud Router for 35.199.192.0/19.

C.

Create a private forwarding zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com that points to 192.168.20.88.

Configure your on-premises firewall to accept traffic from 10.204.0.0/24.

Modify the /etc/resolv.conf file on your Compute Engine instances to point to 192.168.20.88.

D.

Create a private zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com.

Configure DNS Server Policies and create a policy with Alternate DNS servers to 192.168.20.88.

Configure your on-premises firewall to accept traffic from 35.199.192.0/19.

Set a custom route advertisement on the Cloud Router for 35.199.192.0/19.
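
Note: for context, a private forwarding zone plus a custom route advertisement can be sketched as follows (zone, network, and router names are placeholders; 35.199.192.0/19 is the source range Cloud DNS uses for outbound forwarding):

    # Private forwarding zone that sends corp.altostrat.com queries on-premises
    gcloud dns managed-zones create corp-altostrat-com \
        --dns-name "corp.altostrat.com." \
        --description "Forward to on-premises DNS" \
        --visibility private \
        --networks my-vpc \
        --forwarding-targets 192.168.20.88

    # Advertise the Cloud DNS forwarding range over the Cloud Router
    gcloud compute routers update ha-vpn-router \
        --region us-central1 \
        --advertisement-mode CUSTOM \
        --set-advertisement-groups ALL_SUBNETS \
        --set-advertisement-ranges 35.199.192.0/19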

Question # 27

Your company just completed the acquisition of Altostrat (a current GCP customer). Each company has a separate organization in GCP and has implemented a custom DNS solution. Each organization will retain its current domain and host names until after a full transition and architectural review is done in one year. These are the assumptions for both GCP environments.

• Each organization has enabled full connectivity between all of its projects by using Shared VPC.

• Both organizations strictly use the 10.0.0.0/8 address space for their instances, except for bastion hosts (for accessing the instances) and load balancers for serving web traffic.

• There are no prefix overlaps between the two organizations.

• Both organizations already have firewall rules that allow all inbound and outbound traffic from the 10.0.0.0/8 address space.

• Neither organization has Interconnects to their on-premises environment.

You want to integrate networking and DNS infrastructure of both organizations as quickly as possible and with minimal downtime.

Which two steps should you take? (Choose two.)

A.

Provision Cloud Interconnect to connect both organizations together.

B.

Set up some variant of DNS forwarding and zone transfers in each organization.

C.

Connect VPCs in both organizations using Cloud VPN together with Cloud Router.

D.

Use Cloud DNS to create A records of all VMs and resources across all projects in both organizations.

E.

Create a third organization with a new host project, and attach all projects from your company and Altostrat to it using shared VPC.

Question # 28

In order to provide subnet level isolation, you want to force instance-A in one subnet to route through a security appliance, called instance-B, in another subnet.

What should you do?

A.

Create a more specific route than the system-generated subnet route, pointing the next hop to instance-B with no tag.

B.

Create a more specific route than the system-generated subnet route, pointing the next hop to instance-B with a tag applied to instance-A.

C.

Delete the system-generated subnet route and create a specific route to instance-B with a tag applied to instance-A.

D.

Move instance-B to another VPC and, using multi-NIC, connect instance-B's interface to instance-A's network. Configure the appropriate routes to force traffic through to instance-A.

Question # 29


Your organization has a hub and spoke architecture with VPC Network Peering, and hybrid connectivity is centralized at the hub. The Cloud Router in the hub VPC is advertising subnet routes, but the on-premises router does not appear to be receiving any subnet routes from the VPC spokes. You need to resolve this issue. What should you do?

A.

Create custom learned routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes.

B.

Create custom routes at the Cloud Router in the spokes to advertise the subnets of the VPC spokes.

C.

Create a BGP route policy at the Cloud Router, and ensure the subnets of the VPC spokes are being announced towards the on-premises environment.

D.

Create custom routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes.

Question # 30

Your company has recently expanded their EMEA-based operations into APAC. Globally distributed users report that their SMTP and IMAP services are slow. Your company requires end-to-end encryption, but you do not have access to the SSL certificates.

Which Google Cloud load balancer should you use?

A.

SSL proxy load balancer

B.

Network load balancer

C.

HTTPS load balancer

D.

TCP proxy load balancer

Question # 31

Your team is developing an application that will be used by consumers all over the world. Currently, the application sits behind a global external application load balancer. You need to protect the application from potential application-level attacks. What should you do?

A.

Enable Cloud CDN on the backend service.

B.

Create multiple firewall deny rules to block malicious users, and apply them to the global external application load balancer

C.

Create a Google Cloud Armor security policy with web application firewall rules, and apply the security policy to the backend service.

D.

Create a VPC Service Controls perimeter with the global external application load balancer as the protected service, and apply it to the backend service

Question # 32

Your organization wants to set up hybrid connectivity with VLAN attachments that terminate in a single Cloud Router with 99.9% uptime. You need to create a network design for your on-premises router that meets those requirements and has an active/passive configuration that uses only one VLAN attachment at a time. What should you do?

A.

Create a design that uses a BGP multi-exit discriminator (MED) attribute to influence the egress path from Google Cloud to the on-premises environment.

B.

Create a design that uses the as_path BGP attribute to influence the egress path from Google Cloud to the on-premises environment.

C.

Create a design that uses an equal-cost multipath (ECMP) with flow-based hashing on your on-premises devices.

D.

Create a design that uses the local_pref BGP attribute to influence the egress path from Google Cloud to the on-premises environment.

Question # 33

You have provisioned a Dedicated Interconnect connection of 20 Gbps with a VLAN attachment of 10 Gbps. You recently noticed a steady increase in ingress traffic on the Interconnect connection from the on-premises data center. You need to ensure that your end users can achieve the full 20 Gbps throughput as quickly as possible. Which two methods can you use to accomplish this? (Choose two.)

A.

Configure an additional VLAN attachment of 10 Gbps in another region. Configure the on-premises router to advertise routes with the same multi-exit discriminator (MED).

B.

Configure an additional VLAN attachment of 10 Gbps in the same region. Configure the on-premises router to advertise routes with the same multi-exit discriminator (MED).

C.

From the Google Cloud Console, modify the bandwidth of the VLAN attachment to 20 Gbps.

D.

From the Google Cloud Console, request a new Dedicated Interconnect connection of 20 Gbps, and configure a VLAN attachment of 10 Gbps.

E.

Configure Link Aggregation Control Protocol (LACP) on the on-premises router to use the 20-Gbps Dedicated Interconnect connection.

Question # 34

Your organization recently re-architected your cloud environment to use Network Connectivity Center. However, an error occurred when you tried to add a new VPC named vpc-dev as a spoke. The error indicated that there was an issue with an existing spoke and the IP space of a VPC named vpc-pre-prod. You must complete the migration quickly and efficiently. What should you do?

A.

Remove the conflicting VPC spoke for vpc-pre-prod from the set of VPC spokes in Network Connectivity Center. Add the VPC spoke for vpc-dev. Add the previously removed vpc-pre-prod as a VPC spoke.

B.

Delete the VMs associated with the conflicting subnets, then delete the conflicting subnets in vpc-dev. Recreate the subnets with a new IP range and redeploy the previously deleted VMs in the new subnets. Add the VPC spoke for vpc-dev.

C.

Exclude the conflicting IP range by using the --exclude-export-ranges flag when creating the VPC spoke for vpc-dev.

D.

Exclude the conflicting IP range by using the --exclude-export-ranges flag in the hub when attaching the VPC spoke for vpc-dev.

Question # 35

(You are managing the security configuration of your company's Google Cloud organization. The Operations team needs specific permissions on both a Google Kubernetes Engine (GKE) cluster and a Cloud SQL instance. Two predefined Identity and Access Management (IAM) roles exist that contain a subset of the permissions needed by the team. You need to configure the necessary IAM permissions for this team while following Google-recommended practices. What should you do?)

A.

Grant the team the two predefined IAM roles.

B.

Create a custom IAM role that combines the permissions from the two relevant predefined roles.

C.

Create a custom IAM role that includes only the required permissions from the predefined roles.

D.

Grant the team the IAM roles of Kubernetes Engine Admin and Cloud SQL Admin.

Question # 36

You are using a 10-Gbps direct peering connection to Google together with the gsutil tool to upload files to Cloud Storage buckets from on-premises servers. The on-premises servers are 100 milliseconds away from the Google peering point. You notice that your uploads are not using the full 10-Gbps bandwidth available to you. You want to optimize the bandwidth utilization of the connection.

What should you do on your on-premises servers?

A.

Tune TCP parameters on the on-premises servers.

B.

Compress files using utilities like tar to reduce the size of data being sent.

C.

Remove the -m flag from the gsutil command to enable single-threaded transfers.

D.

Use the perfdiag parameter in your gsutil command to enable faster performance: gsutil perfdiag gs://[BUCKET NAME].
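
Note: as an illustration, long-distance high-bandwidth transfers are usually limited by TCP window size; a sketch of window-related kernel tuning on a Linux sender combined with parallel gsutil uploads (the values and bucket name are examples only):

    # Raise socket buffer limits so TCP windows can grow over a 100 ms path
    sudo sysctl -w net.core.rmem_max=16777216
    sudo sysctl -w net.core.wmem_max=16777216
    sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # Keep the link busy with parallel and composite uploads
    gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M \
        cp /data/*.bin gs://example-bucket/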

Question # 37


Your organization has distributed geographic applications with significant data volumes. You need to create a design that exposes the HTTPS workloads globally and keeps traffic costs to a minimum. What should you do?

A.

Deploy a regional external Application Load Balancer with Standard Network Service Tier.

B.

Deploy a regional external Application Load Balancer with Premium Network Service Tier.

C.

Deploy a global external proxy Network Load Balancer with Standard Network Service Tier.

D.

Deploy a global external Application Load Balancer with Premium Network Service Tier.

Question # 38

You have the networking configuration shown in the diagram. A pair of redundant Dedicated Interconnect connections (int-lga1 and int-lga2) terminate on the same Cloud Router. The Interconnect connections terminate on two separate on-premises routers. You are advertising the same prefixes from the Border Gateway Protocol (BGP) sessions associated with the Dedicated Interconnect connections. You need to configure one connection as Active for both ingress and egress traffic. If the active Interconnect connection fails, you want the passive Interconnect connection to automatically begin routing all traffic. Which two actions should you take to meet this requirement? (Choose two.)

A.

Configure the advertised route priority > 10,200 on the active Interconnect connection.

B.

Advertise a lower MED on the passive Interconnect connection from the on-premises router

C.

Configure the advertised route priority as 200 for the BGP session associated with the active Interconnect connection.

D.

Configure the advertised route priority as 200 for the BGP session associated with the passive Interconnect connection.

E.

Advertise a lower MED on the active Interconnect connection from the on-premises router

Question # 39

You have provisioned a Partner Interconnect connection to extend connectivity from your on-premises data center to Google Cloud. You need to configure a Cloud Router and create a VLAN attachment to connect to resources inside your VPC. You need to configure an Autonomous System number (ASN) to use with the associated Cloud Router and create the VLAN attachment.

What should you do?

A.

Use a 4-byte private ASN 4200000000-4294967294.

B.

Use a 2-byte private ASN 64512-65535.

C.

Use a public Google ASN 15169.

D.

Use a public Google ASN 16550.
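
Note: for reference, the Cloud Router used with a Partner Interconnect VLAN attachment is created with the public Google ASN 16550; a rough sketch with placeholder network, region, and attachment names:

    # Cloud Router for Partner Interconnect must use ASN 16550
    gcloud compute routers create partner-ic-router \
        --network my-vpc \
        --region us-east4 \
        --asn 16550

    # Partner VLAN attachment; the command returns the pairing key for your provider
    gcloud compute interconnects attachments partner create partner-attachment \
        --router partner-ic-router \
        --region us-east4 \
        --edge-availability-domain availability-domain-1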

Question # 40

(Your digital media company stores a large number of video files on-premises. Each video file ranges from 100 MB to 100 GB. You are currently storing 150 TB of video data in your on-premises network, with no room for expansion. You need to migrate all infrequently accessed video files older than one year to Cloud Storage to ensure that on-premises storage remains available for new files. You must also minimize costs and control bandwidth usage. What should you do?)

A.

Create a Cloud Storage bucket. Establish an Identity and Access Management (IAM) role with write permissions to the bucket. Use the gsutil tool to directly copy files over the network to Cloud Storage.

B.

Set up a Cloud Interconnect connection between the on-premises network and Google Cloud. Establish a private endpoint for Filestore access. Transfer the data from the existing Network File System (NFS) to Filestore.

C.

Use Transfer Appliance to request an appliance. Load the data locally, and ship the appliance back to Google for ingestion into Cloud Storage.

D.

Use Storage Transfer Service to move the data from the selected on-premises file storage systems to a Cloud Storage bucket.

Question # 41

You are configuring an Application Load Balancer. The backend resides in your on-premises data center and is connected by Dedicated Interconnect. You need to ensure the load balancer can reference these on-premises resources. You do not want the traffic to traverse the internet at all. What should you do?

A.

Configure a Private Service Connect network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the client source IPs.

B.

Configure a zonal network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the client source IPs.

C.

Configure an internet network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the proxy-only subnet.

D.

Configure a hybrid network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the proxy-only subnet.

Question # 42

You recently deployed two network virtual appliances in us-central1. Your network appliances provide connectivity to your on-premises network, 10.0.0.0/8. You need to configure the routing for your Virtual Private Cloud (VPC). Your design must meet the following requirements:

All access to your on-premises network must go through the network virtual appliances.

Allow on-premises access in the event of a single network virtual appliance failure.

Both network virtual appliances must be used simultaneously.

Which method should you use to accomplish this?

A.

Configure two routes for 10.0.0.0/8 with different priorities, each pointing to separate network virtual appliances.

B.

Configure an internal HTTP(S) load balancer with the two network virtual appliances as backends. Configure a route for 10.0.0.0/8 with the internal HTTP(S) load balancer as the next hop.

C.

Configure a network load balancer for the two network virtual appliances. Configure a route for 10.0.0.0/8 with the network load balancer as the next hop.

D.

Configure an internal TCP/UDP load balancer with the two network virtual appliances as backends. Configure a route for 10.0.0.0/8 with the internal load balancer as the next hop.
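
Note: as a sketch (all resource names are placeholders), the appliances sit behind an internal TCP/UDP (passthrough) load balancer and a static route points at that load balancer's forwarding rule as the next hop:

    # Route all on-premises-bound traffic through the internal load balancer
    gcloud compute routes create onprem-via-nva \
        --network my-vpc \
        --destination-range 10.0.0.0/8 \
        --next-hop-ilb nva-ilb-forwarding-rule \
        --next-hop-ilb-region us-central1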

Question # 43

You are in the early stages of planning a migration to GCP. You want to test the functionality of your hybrid cloud design before you start to implement it in production. The design includes services running on a Compute Engine Virtual Machine instance that need to communicate to on-premises servers using private IP addresses. The on-premises servers have connectivity to the internet, but you have not yet established any Cloud Interconnect connections. You want to choose the lowest cost method of enabling connectivity between your instance and on-premises servers and complete the test in 24 hours.

Which connectivity method should you choose?

A.

Cloud VPN

B.

50-Mbps Partner VLAN attachment

C.

Dedicated Interconnect with a single VLAN attachment

D.

Dedicated Interconnect, but don’t provision any VLAN attachments

Question # 44


You are configuring the final elements of a migration effort where resources have been moved from on-premises to Google Cloud. While reviewing the deployed architecture, you noticed that DNS resolution is failing when queries are being sent to the on-premises environment. You log in to a Compute Engine instance, try to resolve an on-premises hostname, and the query fails. DNS queries are not arriving at the on-premises DNS server. You need to use managed services to reconfigure Cloud DNS to resolve the DNS error. What should you do?

A.

Validate that the Compute Engine instances are using the Metadata Service IP address as their resolver. Configure an outbound forwarding zone for the on-premises domain pointing to the on-premises DNS server. Configure Cloud Router to advertise the Cloud DNS proxy range to the on-premises network.

B.

Validate that there is network connectivity to the on-premises environment and that the Compute Engine instances can reach other on-premises resources. If errors persist, remove the VPC Network Peerings and recreate the peerings after validating the routes.

C.

Review the existing Cloud DNS zones, and validate that there is a route in the VPC directing traffic destined to the IP address of the DNS servers. Recreate the existing DNS forwarding zones to forward all queries to the on-premises DNS servers.

D.

Ensure that the operating systems of the Compute Engine instances are configured to send DNS queries to the on-premises DNS servers directly.

Question # 45

You created a VPC network named Retail in auto mode. You want to create a VPC network named Distribution and peer it with the Retail VPC.

How should you configure the Distribution VPC?

A.

Create the Distribution VPC in auto mode. Peer both the VPCs via network peering.

B.

Create the Distribution VPC in custom mode. Use the CIDR range 10.0.0.0/9. Create the necessary subnets, and then peer them via network peering.

C.

Create the Distribution VPC in custom mode. Use the CIDR range 10.128.0.0/9. Create the necessary subnets, and then peer them via network peering.

D.

Rename the default VPC as "Distribution" and peer it via network peering.

Question # 46

You need to create a GKE cluster in an existing VPC that is accessible from on-premises. You must meet the following requirements:

    IP ranges for pods and services must be as small as possible.

    The nodes and the master must not be reachable from the internet.

    You must be able to use kubectl commands from on-premises subnets to manage the cluster.

How should you create the GKE cluster?

A.

• Create a private cluster that uses VPC advanced routes.

• Set the pod and service ranges as /24.

• Set up a network proxy to access the master.

B.

• Create a VPC-native GKE cluster using GKE-managed IP ranges.

• Set the pod IP range as /21 and service IP range as /24.

• Set up a network proxy to access the master.

C.

• Create a VPC-native GKE cluster using user-managed IP ranges.

• Enable a GKE cluster network policy, set the pod and service ranges as /24.

• Set up a network proxy to access the master.

• Enable master authorized networks.

D.

• Create a VPC-native GKE cluster using user-managed IP ranges.

• Enable privateEndpoint on the cluster master.

• Set the pod and service ranges as /24.

• Set up a network proxy to access the master.

• Enable master authorized networks.
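
Note: for context, a VPC-native private cluster with user-managed secondary ranges, a private endpoint, and master authorized networks can be sketched as follows (subnet, range names, and CIDRs are placeholders):

    # VPC-native private cluster; control plane reachable only from authorized networks
    gcloud container clusters create private-gke \
        --region us-central1 \
        --network my-vpc \
        --subnetwork gke-subnet \
        --enable-ip-alias \
        --cluster-secondary-range-name pods-range \
        --services-secondary-range-name services-range \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-ipv4-cidr 172.16.0.32/28 \
        --enable-master-authorized-networks \
        --master-authorized-networks 10.50.0.0/24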

Question # 47

You have a web application that is currently hosted in the us-central1 region. Users experience high latency when traveling in Asia. You've configured a network load balancer, but users have not experienced a performance improvement. You want to decrease the latency.

What should you do?

A.

Configure a policy-based route rule to prioritize the traffic.

B.

Configure an HTTP load balancer, and direct the traffic to it.

C.

Configure Dynamic Routing for the subnet hosting the application.

D.

Configure the TTL for the DNS zone to decrease the time between updates.

Question # 48

You are implementing a VPC architecture for your organization by using a Network Connectivity Center hub and spoke topology:

• There is one Network Connectivity Center hybrid spoke to receive on-premises routes.

• There is one VPC spoke that needs to be added as a Network Connectivity Center spoke.

Your organization has limited routable IP space for their cloud environment (192.168.0.0/20). The Network Connectivity Center spoke VPC is connected to on-premises with a Cloud Interconnect connection in the us-east4 region. The on-premises IP range is 172.16.0.0/16. You need to reach on-premises resources from multiple Google Cloud regions (us-west1, europe-central1, and asia-southeast1) and minimize the IP addresses being used. What should you do?

A.

1. Configure a Private NAT gateway and NAT subnet in us-west1 (192.168.1.0/24), europe-central1 (192.168.2.0/24), and asia-southeast1 (192.168.3.0/24).

2. Add the VPC as a spoke and configure an export include policy to advertise only 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24 to the hub.

3. Enable global dynamic routing to allow resources in us-west1, us-central1, and asia-southeast1 to reach the on-premises location through us-east4.

B.

1. Configure a Private NAT gateway instance in us-west1 (192.168.1.0/24), europe-central1 (192.168.2.0/24), and asia-southeast1 (192.168.3.0/24).

2. Add the VPC as a spoke and configure an export exclude policy on the VPC spoke to advertise only the NAT subnets 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24 to the hub.

3. Enable global dynamic routing to allow resources in us-west1, us-central1, and asia-southeast1 to reach the on-premises location through us-east4.

C.

1. Configure a Private NAT gateway instance in us-east4 (192.168.1.0/24).

2. Add the VPC as a spoke and configure an export include policy on the VPC spoke to advertise 192.168.1.0/24 to the hub.

3. Enable global dynamic routing to allow resources in us-west1, us-central1, and asia-southeast1 to reach the on-premises location through us-east4.

D.

1. Configure a Private NAT gateway instance in us-west1 (172.16.1.0/24), europe-central1 (172.16.2.0/24), and asia-southeast1 (172.16.3.0/24).

2. Add the VPC as a spoke and configure an export include policy on the VPC spoke to advertise only the NAT subnets 172.16.1.0/24, 172.16.2.0/24, and 172.16.3.0/24 to the hub.

3. Enable global dynamic routing to allow resources in us-west1, us-central1, and asia-southeast1 to reach the on-premises location.

Question # 49


Your company's current network architecture has three VPC Service Controls perimeters:

    One perimeter (PERIMETER_PROD) to protect production storage buckets

    One perimeter (PERIMETER_NONPROD) to protect non-production storage buckets

    One perimeter (PERIMETER_VPC) that contains a single VPC (VPC_ONE)

In this single VPC (VPC_ONE), the IP_RANGE_PROD is dedicated to the subnets of the production workloads, and the IP_RANGE_NONPROD is dedicated to subnets of non-production workloads. Workloads cannot be created outside those two ranges. You need to ensure that production workloads can access only production storage buckets and non-production workloads can access only non-production storage buckets with minimal setup effort. What should you do?

A.

Develop a design that uses the IP_RANGE_PROD and IP_RANGE_NONPROD perimeters to create two access levels, with each access level referencing a single range. Create two ingress access policies with each access policy referencing one of the two access levels. Update the PERIMETER_PROD and PERIMETER_NONPROD perimeters.

B.

Develop a design that removes the PERIMETER_VPC perimeter. Update the PERIMETER_NONPROD perimeter to include the project containing VPC_ONE. Remove the PERIMETER_PROD perimeter.

C.

Develop a design that creates a new VPC (VPC_NONPROD) in the same project as VPC_ONE. Migrate all the non-production workloads from VPC_ONE to the PERIMETER_NONPROD perimeter. Remove the PERIMETER_VPC perimeter. Update the PERIMETER_PROD perimeter to include VPC_ONE and the PERIMETER_NONPROD perimeter to include VPC_NONPROD.

D.

Develop a design that removes the PERIMETER_VPC perimeter. Update the PERIMETER_PROD perimeter to include the project containing VPC_ONE. Remove the PERIMETER_NONPROD perimeter.

Question # 50


Your organization recently exposed a set of services through a global external Application Load Balancer. After conducting some testing, you observed that responses would intermittently yield a non-HTTP 200 response. You need to identify the error. What should you do? (Choose 2 answers)

A.

Access a VM in the VPC through SSH, and try to access a backend VM directly. If the request is successful from the VM, increase the quantity of backends.

B.

Enable and review the health check logs. Review the error responses in Cloud Logging.

C.

Validate the health of the backend service. Enable logging on the load balancer, and identify the error response in Cloud Logging. Determine the cause of the error by reviewing the statusDetails log field.

D.

Delete the load balancer and backend services. Create a new passthrough Network Load Balancer. Configure a failover group of VMs for the backend.

E.

Validate the health of the backend service. Enable logging for the backend service, and identify the error response in Cloud Logging. Determine the cause of the error by reviewing the statusDetails log field.

Question # 51

You need to configure a Google Kubernetes Engine (GKE) cluster. The initial deployment should have 5 nodes with the potential to scale to 10 nodes. The maximum number of Pods per node is 8. The number of services could grow from 100 to up to 1024. How should you design the IP schema to optimally meet this requirement?

A.

Configure a /28 primary IP address range for the node IP addresses. Configure a /25 secondary IP range for the Pods. Configure a /22 secondary IP range for the Services.

B.

Configure a /28 primary IP address range for the node IP addresses. Configure a /25 secondary IP range for the Pods. Configure a /21 secondary IP range for the Services.

C.

Configure a /28 primary IP address range for the node IP addresses. Configure a /28 secondary IP range for the Pods. Configure a /21 secondary IP range for the Services.

D.

Configure a /28 primary IP address range for the node IP addresses. Configure a /24 secondary IP range for the Pods. Configure a /22 secondary IP range for the Services.

Question # 52

You are using the gcloud command line tool to create a new custom role in a project by copying a predefined role. You receive this error message:

INVALID_ARGUMENT: Permission resourcemanager.projects.list is not valid

What should you do?

A.

Add the resourcemanager.projects.get permission, and try again.

B.

Try again with a different role with a new name but the same permissions.

C.

Remove the resourcemanager.projects.list permission, and try again.

D.

Add the resourcemanager.projects.setIamPolicy permission, and try again.

Question # 53

Your company has a Virtual Private Cloud (VPC) with two Dedicated Interconnect connections in two different regions: us-west1 and us-east1. Each Dedicated Interconnect connection is attached to a Cloud Router in its respective region by a VLAN attachment. You need to configure a high availability failover path. By default, all ingress traffic from the on-premises environment should flow to the VPC using the us-west1 connection. If us-west1 is unavailable, you want traffic to be rerouted to us-east1. How should you configure the multi-exit discriminator (MED) values to enable this failover path?

A.

Use regional routing. Set the us-east1 Cloud Router to a base priority of 100, and set the us-west1 Cloud Router to a base priority of 1

B.

Use global routing. Set the us-east1 Cloud Router to a base priority of 100, and set the us-west1 Cloud Router to a base priority of 1

C.

Use regional routing. Set the us-east1 Cloud Router to a base priority of 1000, and set the us-west1 Cloud Router to a base priority of 1

D.

Use global routing. Set the us-east1 Cloud Router to a base priority of 1000, and set the us-west1 Cloud Router to a base priority of 1
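
Note: for reference (network, router, and peer names are placeholders), the base advertised route priority (MED) is set per BGP session, and cross-region failover requires global dynamic routing:

    # Use global dynamic routing so one region's routes cover the whole VPC
    gcloud compute networks update my-vpc --bgp-routing-mode global

    # Lower MED on us-west1 makes it preferred; us-east1 becomes the backup path
    gcloud compute routers update-bgp-peer west-router \
        --region us-west1 \
        --peer-name onprem-peer-west \
        --advertised-route-priority 1
    gcloud compute routers update-bgp-peer east-router \
        --region us-east1 \
        --peer-name onprem-peer-east \
        --advertised-route-priority 100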

Question # 54

You need to configure a static route to an on-premises resource behind a Cloud VPN gateway that is configured for policy-based routing using the gcloud command.

Which next hop should you choose?

A.

The default internet gateway

B.

The IP address of the Cloud VPN gateway

C.

The name and region of the Cloud VPN tunnel

D.

The IP address of the instance on the remote side of the VPN tunnel

Question # 55

You want Cloud CDN to serve the https://www.example.com/images/spacetime.png static image file that is hosted in a private Cloud Storage bucket. You are using the USE_ORIGIN_HEADERS cache mode. You receive an HTTP 403 error when opening the file in your browser, and you see that the HTTP response has a Cache-Control: private, max-age=0 header. How should you correct this issue?

A.

Configure a Cloud Storage bucket permission that gives the Storage Legacy Object Reader role

B.

Change the cache mode to cache all content.

C.

Increase the default time-to-live (TTL) for the backend service.

D.

Enable negative caching for the backend bucket
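
Note: as an illustration, the cache mode is a property of the backend bucket and can be changed with a single command (the backend bucket name is a placeholder):

    # Cache all content regardless of the origin's Cache-Control headers
    gcloud compute backend-buckets update static-assets-backend \
        --cache-mode FORCE_CACHE_ALL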

Question # 56

You need to centralize the Identity and Access Management permissions and email distribution for the WebServices Team as efficiently as possible.

What should you do?

A.

Create a Google Group for the WebServices Team.

B.

Create a G Suite Domain for the WebServices Team.

C.

Create a new Cloud Identity Domain for the WebServices Team.

D.

Create a new Custom Role for all members of the WebServices Team.

Question # 57

You are deploying a global external TCP load balancing solution and want to preserve the source IP address of the original layer 3 payload.

Which type of load balancer should you use?

A.

HTTP(S) load balancer

B.

Network load balancer

C.

Internal load balancer

D.

TCP/SSL proxy load balancer

Question # 58

Your company has recently installed a Cloud VPN tunnel between your on-premises data center and your Google Cloud Virtual Private Cloud (VPC). You need to configure access to the Cloud Functions API for your on-premises servers. The configuration must meet the following requirements:

Certain data must stay in the project where it is stored and not be exfiltrated to other projects.

Traffic from servers in your data center with RFC 1918 addresses does not use the internet to access Google Cloud APIs.

All DNS resolution must be done on-premises.

The solution should only provide access to APIs that are compatible with VPC Service Controls.

What should you do?

A.

Create an A record for private.googleapis.com using the 199.36.153.8/30 address range.

Create a CNAME record for *.googleapis.com that points to the A record.

Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.

Remove the default internet gateway from the VPC where your Cloud VPN tunnel terminates.

B.

Create an A record for restricted.googleapis.com using the 199.36.153.4/30 address range.

Create a CNAME record for *.googleapis.com that points to the A record.

Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.

Configure your on-premises firewalls to allow traffic to the restricted.googleapis.com addresses.

C.

Create an A record for restricted.googleapis.com using the 199.36.153.4/30 address range.

Create a CNAME record for *.googleapis.com that points to the A record.

Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.

Remove the default internet gateway from the VPC where your Cloud VPN tunnel terminates.

D.

Create an A record for private.googleapis.com using the 199.36.153.8/30 address range.

Create a CNAME record for *.googleapis.com that points to the A record.

Configure your on-premises routers to use the Cloud VPN tunnel as the next hop for the addresses you used in the A record.

Configure your on-premises firewalls to allow traffic to the private.googleapis.com addresses.

Question # 59

You have deployed a new internal application that provides HTTP and TFTP services to on-premises hosts. You want to be able to distribute traffic across multiple Compute Engine instances, but need to ensure that clients are sticky to a particular instance across both services.

Which session affinity should you choose?

A.

None

B.

Client IP

C.

Client IP and protocol

D.

Client IP, port and protocol

Question # 60

You are increasing your usage of Cloud VPN between on-premises and GCP, and you want to support more traffic than a single tunnel can handle. You want to increase the available bandwidth using Cloud VPN.

What should you do?

A.

Double the MTU on your on-premises VPN gateway from 1460 bytes to 2920 bytes.

B.

Create two VPN tunnels on the same Cloud VPN gateway that point to the same destination VPN gateway IP address.

C.

Add a second on-premises VPN gateway with a different public IP address. Create a second tunnel on the existing Cloud VPN gateway that forwards the same IP range, but points at the new on-premises gateway IP.

D.

Add a second Cloud VPN gateway in a different region than the existing VPN gateway. Create a new tunnel on the second Cloud VPN gateway that forwards the same IP range, but points to the existing on-premises VPN gateway IP address.

Question # 61

You have an application hosted on a Compute Engine virtual machine instance that cannot communicate with a resource outside of its subnet. When you review the flow and firewall logs, you do not see any denied traffic listed.

During troubleshooting you find:

• Flow logs are enabled for the VPC subnet, and all firewall rules are set to log.

• The subnetwork logs are not excluded from Stackdriver.

• The instance that is hosting the application can communicate outside the subnet.

• Other instances within the subnet can communicate outside the subnet.

• The external resource initiates communication.

What is the most likely cause of the missing log lines?

A.

The traffic is matching the expected ingress rule.

B.

The traffic is matching the expected egress rule.

C.

The traffic is not matching the expected ingress rule.

D.

The traffic is not matching the expected egress rule.

Question # 62

You are the Organization Admin for your company. One of your engineers is responsible for setting up multiple host projects across multiple folders and sharing subnets with service projects. You need to enable the engineer's Identity and Access Management (IAM) configuration to complete their task in the fewest number of steps. What should you do?

A.

Set up the engineer with Compute Shared VPC Admin IAM role at the folder level.

B.

Set up the engineer with Compute Shared VPC Admin IAM role at the organization level.

C.

Set up the engineer with Compute Shared VPC Admin IAM role and Project IAM Admin role at the folder level.

D.

Set up the engineer with Compute Shared VPC Admin IAM role and Project IAM Admin role at the organization level.

Question # 63

Your company is running out of network capacity to run a critical application in the on-premises data center. You want to migrate the application to GCP. You also want to ensure that the Security team does not lose their ability to monitor traffic to and from Compute Engine instances.

Which two products should you incorporate into the solution? (Choose two.)

A.

VPC flow logs

B.

Firewall logs

C.

Cloud Audit logs

D.

Stackdriver Trace

E.

Compute Engine instance system logs

Question # 64

You have configured a Compute Engine virtual machine instance as a NAT gateway. You execute the following command:

gcloud compute routes create no-ip-internet-route \

--network custom-network1 \

--destination-range 0.0.0.0/0 \

--next-hop-instance nat-gateway \

--next-hop-instance-zone us-central1-a \

--tags no-ip --priority 800

You want existing instances to use the new NAT gateway. Which command should you execute?

A.

sudo sysctl -w net.ipv4.ip_forward=1

B.

gcloud compute instances add-tags [existing-instance] --tags no-ip

C.

gcloud builds submit --config=cloudbuild.yaml --substitutions=TAG_NAME=no-ip

D.

gcloud compute instances create example-instance --network custom-network1 \

--subnet subnet-us-central \

--no-address \

--zone us-central1-a \

--image-family debian-9 \

--image-project debian-cloud \

--tags no-ip

Question # 65

In your Google Cloud organization, you have two folders: Dev and Prod. You want a scalable and consistent way to enforce the following firewall rules for all virtual machines (VMs) with minimal cost:

Port 8080 should always be open for VMs in the projects in the Dev folder.

Any traffic to port 8080 should be denied for all VMs in your projects in the Prod folder.

What should you do?

A.

Create and associate a firewall policy with the Dev folder with a rule to open port 8080. Create and associate a firewall policy with the Prod folder with a rule to deny traffic to port 8080.

B.

Create a Shared VPC for the Dev projects and a Shared VPC for the Prod projects. Create a VPC firewall rule to open port 8080 in the Shared VPC for Dev. Create a firewall rule to deny traffic to port 8080 in the Shared VPC for Prod. Deploy VMs to those Shared VPCs.

C.

In all VPCs for the Dev projects, create a VPC firewall rule to open port 8080. In all VPCs for the Prod projects, create a VPC firewall rule to deny traffic to port 8080.

D.

Use Anthos Config Connector to enforce a security policy to open port 8080 on the Dev VMs and deny traffic to port 8080 on the Prod VMs.
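
Note: for context, a folder-level (hierarchical) firewall policy is created once, given rules, and then associated with the folder; a rough sketch for the Dev folder (the folder ID and policy ID below are placeholders):

    # Create a hierarchical firewall policy under the Dev folder
    gcloud compute firewall-policies create \
        --folder 111111111111 \
        --short-name dev-fw-policy

    # Allow ingress to port 8080 (reference the policy by the ID returned above)
    gcloud compute firewall-policies rules create 1000 \
        --firewall-policy 987654321 \
        --direction INGRESS \
        --action allow \
        --src-ip-ranges 0.0.0.0/0 \
        --layer4-configs tcp:8080

    # Associate the policy with the Dev folder
    gcloud compute firewall-policies associations create \
        --firewall-policy 987654321 \
        --folder 111111111111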

Question # 66

All the instances in your project are configured with the custom metadata enable-oslogin value set to FALSE and to block project-wide SSH keys. None of the instances are set with any SSH key, and no project-wide SSH keys have been configured. Firewall rules are set up to allow SSH sessions from any IP address range. You want to SSH into one instance.

What should you do?

A.

Open the Cloud Shell SSH into the instance using gcloud compute ssh.

B.

Set the custom metadata enable-oslogin to TRUE, and SSH into the instance using a third-party tool like putty or ssh.

C.

Generate a new SSH key pair. Verify the format of the private key and add it to the instance. SSH into the instance using a third-party tool like putty or ssh.

D.

Generate a new SSH key pair. Verify the format of the public key and add it to the project. SSH into the instance using a third-party tool like putty or ssh.

Question # 67

You are adding steps to a working automation that uses a service account to authenticate. You need to give the automation the ability to retrieve files from a Cloud Storage bucket. Your organization requires using the least privilege possible.

What should you do?

A.

Grant the compute.instanceAdmin to your user account.

B.

Grant the iam.serviceAccountUser to your user account.

C.

Grant the read-only privilege to the service account for the Cloud Storage bucket.

D.

Grant the cloud-platform privilege to the service account for the Cloud Storage bucket.

Question # 68

Your company is planning a migration to Google Kubernetes Engine. Your application team informed you that they require a minimum of 60 Pods per node and a maximum of 100 Pods per node. Which Pods-per-node CIDR range should you use?

A.

/24

B.

/25

C.

/26

D.

/28

Question # 69

You have just deployed your infrastructure on Google Cloud. You now need to configure the DNS to meet the following requirements:

Your on-premises resources should resolve your Google Cloud zones.

Your Google Cloud resources should resolve your on-premises zones.

You need the ability to resolve “.internal” zones provisioned by Google Cloud.

What should you do?

A.

Configure an outbound server policy, and set your alternative name server to be your on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google's public DNS 8.8.8.8.

B.

Configure both an inbound server policy and outbound DNS forwarding zones with the target as the on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google Cloud's DNS resolver.

C.

Configure an outbound DNS server policy, and set your alternative name server to be your on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google Cloud's DNS resolver.

D.

Configure Cloud DNS to DNS peer with your on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google's public DNS 8.8.8.8.
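
Note: for reference, inbound resolution from on-premises uses a DNS server policy, and outbound resolution of on-premises zones uses a forwarding zone; a rough sketch with placeholder network, zone, and resolver values:

    # Let on-premises resolvers query Cloud DNS (inbound forwarding)
    gcloud dns policies create inbound-dns-policy \
        --description "Allow inbound DNS from on-premises" \
        --networks my-vpc \
        --enable-inbound-forwarding

    # Forward queries for the on-premises domain to the on-premises resolver
    gcloud dns managed-zones create onprem-corp-zone \
        --dns-name "corp.example.com." \
        --description "Forward to on-premises DNS" \
        --visibility private \
        --networks my-vpc \
        --forwarding-targets 192.168.10.53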
